The @nytimes, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.

>>
@nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯

>>
@nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.

>>
@nytimes @kevinroose First, the headline. No, BingGPT doesn't have feelings. It follows that they can't be revealed. But notice how the claim that it does is buried in a presupposition: the headline asserts that the feelings are revealed, but presupposes that they exist.

>> Screenshot: NYT header + ti...
@nytimes @kevinroose And then here: "I had a long conversation with the chatbot" frames this as though the chatbot was somehow engaged and interested in "conversing" with @kevinroose, so much so that it stuck with him through a long conversation.

>> Screencap, same article: &q...
@nytimes @kevinroose It didn't. It's a computer program. This is as absurd as saying: "On Tuesday night, my calculator played math games with me for two hours."

>>
@nytimes @kevinroose That paragraph gets worse, though. It doesn't have any desires, secret or otherwise. It doesn't have thoughts. It doesn't "identify" as anything.

And this passes as *journalism* at the NYTimes.

>> Same as prev screencap: &qu...
@nytimes @kevinroose And let's take a moment to observe the irony that the NYTimes, famous for publishing transphobic trash, is happy to talk about how a computer program supposedly "identifies".

>>
@nytimes @kevinroose In sum, reporting on so-called AI continues in the NYTimes (famous for publishing transphobic trash) to be trash. And you know what transphobic trash and synthetic text have in common? No one should waste their time reading either.


More from @emilymbender

Feb 16
Hey journalists -- I know your work is extremely hectic and I get it. I understand that you might make plans for something and then have to pivot to an entirely different topic. That's cool.

BUT:
If you ask an expert for their time same day at a specific time, and they say yes, and then you don't reply, even though said expert has made time for you -- that is NOT OK.
Engaging with the media is actually an additional layer of work over everything else that I do (including the work that builds the expertise that you are interviewing me about). I'm willing to do it because I think it's important.
Feb 16
TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models".

#AIHype #MathyMath

[Screencaps: Twitter profile of @KirkDBorne; tweet reading "Theory of Mind May Have Spo…"]
That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it.

>>
NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago.

>>
Feb 16
Started listening to an episode about #ChatGPT on one of my favorite podcasts --- great hosts, usually great guests --- and was floored by how awful it was.

>>
Guest blithely claims that large language models learn language like kids do (and also had really uninformed opinions about child language acquisition) ... and that they end up "understanding" language.

>>
The guest also asserted that the robots.txt "soft standard" was an effective way to prevent pages from being crawled (as if all crawlers respect that) & that surely something is already available to do the same to block creative content from getting appropriated as training data.
Feb 7
Strap in folks --- we have a blog post from @sundarpichai at @google about their response to #ChatGPT to unpack!

blog.google/technology/ai/…

#MathyMath #AIHype
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!

There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".

>> [Screencap: "AI is the most profound technology we are w…"]
And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how?

#AIHype #InAweOfScale

>> [Screencap: "Since then we’ve continued to make invest…"]
Feb 6
"We come to bury ChatGPT, not to praise it." Excellent piece by @danmcquillan

danmcquillan.org/chatgpt.html

I suggest you read the whole thing, but some pull quotes:

>>
@danmcquillan "ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan

>>
"The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated."

-- @danmcquillan

>>
Jan 9
In the context of the Koko/GPT-3 trainwreck I'm reminded of @mathbabedotorg 's book _The Shame Machine_ penguinrandomhouse.com/books/606203/t…

>>
@mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics.

>>
It seems that part of the #BigData #mathymath #ML paradigm is that people who haven't had relevant training in research ethics feel entitled to run experiments involving human subjects --- y'know, computer scientists bumbling around thinking they have the solutions to everything. >>