The @nytimes, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.
>>
@nytimes Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯
>>
@nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.
>>
@nytimes @kevinroose First, the headline. No, BingGPT doesn't have feelings. It follows that they can't be revealed. But notice how the claim that it does is buried in a presupposition: the headline asserts that the feelings are revealed, but presupposes that they exist.
>>
@nytimes @kevinroose And then here: "I had a long conversation with the chatbot" frames this as though the chatbot was somehow engaged and interested in "conversing" with @kevinroose, so much so that it stuck with him through a long conversation.
>>
@nytimes @kevinroose It didn't. It's a computer program. This is as absurd as saying: "On Tuesday night, my calculator played math games with me for two hours."
>>
@nytimes @kevinroose That paragraph gets worse, though. It doesn't have any desires, secret or otherwise. It doesn't have thoughts. It doesn't "identify" as anything.
And this passes as *journalism* at the NYTimes.
>>
@nytimes @kevinroose And let's take a moment to observe the irony that the NYTimes, famous for publishing transphobic trash, is happy to talk about how a computer program supposedly "identifies".
>>
@nytimes @kevinroose In sum, reporting on so-called AI continues in the NYTimes (famous for publishing transphobic trash) to be trash. And you know what transphobic trash and synthetic text have in common? No one should waste their time reading either.
Hey journalists -- I know your work is extremely hectic and I get it. I understand that you might make plans for something and then have to pivot to an entirely different topic. That's cool.
BUT:
If you ask an expert for their time same day at a specific time, and they say yes, and then you don't reply, even though said expert has made time for you -- that is NOT OK.
Engaging with the media is actually an additional layer of work over everything else that I do (including the work that builds the expertise that you are interviewing me about). I'm willing to do it because I think it's important.
TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models".
That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it.
>>
NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago.
Started listening to an episode about #ChatGPT on one of my favorite podcasts --- great hosts, who usually get great guests --- and I was floored by how awful it was.
>>
The guest blithely claims that large language models learn language the way kids do (and also had really uninformed opinions about child language acquisition) ... and that they end up "understanding" language.
>>
The guest also asserted that the robots.txt "soft standard" was an effective way to prevent pages from being crawled (as if all crawlers respect that) & that surely something is already available to do the same to block creative content from getting appropriated as training data.
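(To illustrate why robots.txt can't actually block anything: compliance is entirely voluntary. Here's a minimal sketch in Python, using the standard-library urllib.robotparser and a placeholder example.com URL --- a well-behaved crawler has to opt in to this check, and a crawler that skips it just fetches the page anyway.)

    from urllib import robotparser

    # Respecting robots.txt is a choice the crawler makes;
    # the site cannot enforce it.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder URL
    rp.read()

    if rp.can_fetch("MyCrawler", "https://example.com/private/page.html"):
        print("robots.txt permits fetching this URL")
    else:
        print("robots.txt asks crawlers not to fetch this URL")
    # A crawler that never runs this check fetches the page regardless.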
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!
There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".
>>
And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how?
I suggest you read the whole thing, but some pull quotes:
>>
@danmcquillan "ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan
>>
"The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated."
@mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics.
>>
It seems that part of the #BigData #mathymath #ML paradigm is that people who haven't had relevant training in research ethics feel entitled to run experiments involving human subjects --- y'know, computer scientists bumbling around thinking they have the solutions to everything. >>