Facebook (sorry: Meta) AI: Check out our "AI" that lets you access all of humanity's knowledge.
Also Facebook AI: Be careful though, it just makes shit up.
This isn't even "they were so busy asking if they could that they didn't stop to ask if they should"—it's that they failed to spend 5 minutes asking if they should.
>>
Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, from a social media company. Fortunately, @chirag_shah and I already wrote the paper laying that all out:
And let's reflect for a moment on how they phrased their disclaimer, shall we? "Hallucinate" is a terrible word choice here, suggesting as it does that the language model has *experiences* and *perceives things*.
>>
(And on top of that, it's making light of a symptom of serious mental illness.)
>>
Likewise "LLMs are often Confident". No, they're not. That would require subjective emotion.
But in the strangest possible way. Are they reflecting on the possible harms their technology might engender? No, of course not. They're striving for TRUTH! And thus worried about "bias".
>>
Narrator voice: LMs have no access to "truth", or any kind of "information" beyond information about the distribution of word forms in their training data. And yet, here we are. Again.
>>
This thread went to Mastodon first. I'm not sure how long I'll keep bringing them here, too -- so come find me over there: @emilymbender@dair-community.social
Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician". vice.com/en/article/jgp…
/1
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.
/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.
/3
I guess it's a milestone for "AI" startups when they get their puff pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except the reporter had talked to me so I was (increasingly morbidly) curious how my words were being used.
>>
The second paragraph (and several others) is actually the output of their LLM. This is flagged in the subhead and in the third paragraph. I still think it's terrible journalistic practice.
Hi folks -- time for another #AIhype take-down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).
/2
Let's pause to once again note that the use of "AI" in this way suggests that "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.
/3
It's good that @wired is covering this and shedding light on the unregulated mess that is the application of chatbots (and other so-called "AI") to mental health services.
@WIRED Surely it must be possible to cover these stories without writing headlines that suggest the "AI" is something that can have agency. No AI is "trying" to do anything.
>>
Furthermore, somehow this article fails to get into the ENORMOUS risks of the surveillance side of this. How are those companies handling user data? Are they ensuring those conversations only stay on the user's local device (hah, that would be nice)? Who are they selling it to?
Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (incl. creative) activity with the cognitive activity itself.
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?
>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?