TFW an account with 380k followers tweets out a link to a fucking arXiv paper claiming that "Theory of Mind May Have Spontaneously Emerged in Large Language Models".
That feeling is despair and frustration that researchers at respected institutions would put out such dreck, that it gets so much attention these days, and that so few people seem to be putting any energy into combatting it.
>>
NB: The author of that arXiv (= NOT peer reviewed) paper is the same asshole behind the computer vision gaydar study from a few years ago.
>>
Mixed in with the despair and frustration is also some pleasure/relief at the idea that, through Mystery AI Hype Theater 3000, I have an outlet in which I can give this work (and the tweeting about it) the derision it deserves, together with @alexhanna.
The @nytimes, in addition to famously printing lots of transphobic nonsense (see the brilliant call-out at nytletter.com), also decided to print an enormous collection of synthetic (i.e. fake) text today.
>>
Why @nytimes and @kevinroose thought their readers would be interested in reading all that fake text is a mystery to me --- but then again (as noted) this is the same publication that thinks its readers benefit from reading transphobic trash, so ¯\_(ツ)_/¯
>>
@nytimes @kevinroose Beyond the act of publishing chatbot (here BingGPT) output as if it were worth anyone's time, there are a few other instances of #AIHype in that piece that I'd like to point out.
Hey journalists -- I know your work is extremely hectic and I get it. I understand that you might make plans for something and then have to pivot to an entirely different topic. That's cool.
BUT:
If you ask an expert for their time same day at a specific time, and they say yes, and then you don't reply, even though said expert has made time for you -- that is NOT OK.
Engaging with the media is actually an additional layer of work over everything else that I do (including the work that builds the expertise that you are interviewing me about). I'm willing to do it because I think it's important.
Started listening to an episode about #ChatGPT on one of my favorite podcasts --- great hosts, who usually get great guests --- and was floored by how awful it was.
>>
The guest blithely claimed that large language models learn language like kids do (and also had really uninformed opinions about child language acquisition) ... and that they end up "understanding" language.
>>
The guest also asserted that the robots.txt "soft standard" was an effective way to prevent pages from being crawled (as if all crawlers respect it) & that surely something similar is already available to block creative content from getting appropriated as training data.
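For anyone unclear on why that's wrong: robots.txt can only *ask* crawlers to stay away; honoring it is entirely opt-in on the crawler's side. Here's a minimal sketch using Python's standard urllib.robotparser (example.com and the "PoliteBot" user agent are hypothetical placeholders) --- note that the check only runs because the client chooses to run it:

```python
# Minimal sketch of why robots.txt is a "soft standard": the site can only
# publish its wishes; a crawler honors them only if it opts in to checking.
# (example.com and "PoliteBot" are hypothetical placeholders.)
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# Compliance is voluntary: a scraper can simply skip this check entirely.
if rp.can_fetch("PoliteBot", "https://example.com/some/page"):
    print("polite crawler proceeds")
else:
    print("polite crawler skips the page")
```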
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!
There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".
>>
And then another instance of "standing in awe of scale". The subtext here is: it's getting bigger so fast --- look at all that progress! But progress towards what, and measured how?
I suggest you read the whole thing, but some pull quotes:
>>
"ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan
>>
"The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated."
@mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics.
>>
It seems that part of the #BigData #mathymath #ML paradigm is that people feel entitled to run experiments involving human subjects who haven't had relevant training in research ethics --- y'know, computer scientists bumbling around thinking they have the solutions to everything. >>