I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except the reporter had talked to me so I was (increasingly morbidly) curious how my words were being used.
>>
The second paragraph (and several others) is actually the output of their LLM. This is flagged in the subhead and in the third paragraph. I still think it's terrible journalistic practice.
>>
It's not news that these things can generate coherent-seeming text. It's not news that this means we'll increasingly have to work to build trust (and to distinguish trustworthy news sources). So why would a seemingly reputable news source want to blur that boundary?
>>
Then a whole bunch of boring things about the founders' college years/20s. And then this presentation of what LLMs can supposedly do, listed as "ideas" but, if you missed that (because your eyes had glazed over), reading like claims about what is actually possible.
>>
Those ideas get increasingly less plausible/relevant. Yes, marketers can generate ad copy, and maybe that will help them with ideas. Yes, programmers can generate code, but is it really a time saver in the end? (How much harder is debugging?) >>
But I sure as hell don't want any lawyer working for me using a seq2seq model to "extract" information from contracts. And if the LM isn't being used generatively like that, what's the "advantage" over plain old Ctrl-F? (Or maybe using word embeddings for fuzzy search?)
>>
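To make the Ctrl-F comparison concrete, here's a minimal sketch of what "word embeddings for fuzzy search" amounts to: cosine similarity over vectors instead of exact string matching. The tiny 3-d vectors, the token list, and the threshold below are all made up for illustration; a real system would load pretrained embeddings (word2vec, GloVe, etc.).

```python
# Minimal sketch: "fuzzy search with word embeddings" vs. plain Ctrl-F.
# The 3-d vectors below are made up for illustration; a real system would
# load pretrained embeddings (word2vec, GloVe, etc.).
import numpy as np

EMBEDDINGS = {  # hypothetical toy vectors
    "terminate":   np.array([0.90, 0.10, 0.00]),
    "termination": np.array([0.88, 0.12, 0.02]),
    "cancel":      np.array([0.80, 0.20, 0.10]),
    "payment":     np.array([0.10, 0.90, 0.30]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuzzy_find(query, tokens, threshold=0.95):
    """Return tokens whose embedding is close to the query's embedding."""
    q = EMBEDDINGS[query]
    return [t for t in tokens
            if t in EMBEDDINGS and cosine(q, EMBEDDINGS[t]) >= threshold]

contract = ["either", "party", "may", "cancel", "upon", "termination", "of", "payment"]
print(fuzzy_find("terminate", contract))          # ['cancel', 'termination']
print([t for t in contract if t == "terminate"])  # [] -- exact match finds nothing
```

That near-matching is roughly what a non-generative "extraction" pipeline buys you over Ctrl-F. Whether that's something a lawyer should lean on for contract review is exactly the question.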
And "more powerful voice assistants to make life easier" is to vague to mean anything and "replacing clunky keyword searches" sounds suspiciously like the terrible idea @chirag_shah and I take apart here:
After more discussion of the possible applications and who is doing what in this space, the article *finally* gets to ethical concerns: "But the more fundamental question about LLMs has nothing to do with market size or competition. It’s about how to use them responsibly."
>>
Discussing Tay, the journalist pulls a quote from @Abebab and @vinayprabhu (who in turn cite @ruha9 as their inspiration), referring to them only as "a couple of AI researchers" with no link to the source. (rude)
Cohere's stance seems to be: Welp, it's too big to actually curate/document thoroughly. You're just gonna have to live with that. As if just not building on enormous piles of internet trash were not an option.
The article also covers some "guidelines" that Cohere put out earlier this year (with OpenAI and AI21 Labs) and quotes my skeptical reading of them ... but check out the uncritical presentation of Cohere's response.
>>
For the millionth time: Language models do not understand.
>>
A bit lower down, the journalist again uncritically quotes this #AIhype. No, their language model isn't "figuring out" anything. (And odd to see this, given the point made earlier in the article that Cohere differs from OpenAI in not having "AGI" as its goal.)
>>
The article ends with this bit of nonsense. The journalist is quite correct that LMs have no understanding of what they are stringing together. Given that, in what world are they the kind of entity that should be asked their opinion (as a kind of "journalistic fairness", no less)?
>>
Once again, I think we're seeing the work of a journalist who hasn't resisted the urge to be impressed (by some combination of coherent-seeming synthetic text and venture capital interest). I give this one #twomarvins and urge consumers of news everywhere to demand better.
(Tweeting while in flight and it's been pointed out that the link at the top of the thread is the one I had to use through UW libraries to get access. Here's one that doesn't have the UW prefix: theglobeandmail.com/business/rob-m… )
Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician". vice.com/en/article/jgp…
/1
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.
/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.
/3
Hi folks -- time for another #AIhype take-down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).
/2
Let's pause to once again note that the use of "AI" in this way suggests that "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.
/3
It's good that @wired is covering this and shedding light on the unregulated mess that is the application of chatbots (and other so-called "AI") to mental health services.
@WIRED Surely it must be possible to cover these stories without writing headlines that suggest the "AI" is something that can have agency. No AI is "trying" to do anything.
>>
Furthermore, somehow this article fails to get into the ENORMOUS risks of the surveillance side of this. How are those companies handling user data? Are they ensuring those conversations only stay on the user's local device (hah, that would be nice)? Who are they selling it to?
Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (incl creative) activity with the cognitive activity itself.
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?
>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?
No, a machine did not testify before Congress. It is irresponsible for @jackclarkSF to claim that it did and for @dannyfortson to repeat that claim, with no distance or skepticism, in the Sunday Times.
>>
@jackclarkSF @dannyfortson Here is what the English verb "testify" means (per @MerriamWebster). 3/4 of these are things that a language model can't do: it can't swear an oath, it can't speak from personal knowledge or bear witness, and it can't express a personal conviction.
>>
@jackclarkSF @dannyfortson @MerriamWebster Those are all category errors: language models are computer programs designed to model the distribution of word forms in text, and nothing more. They don't have convictions or personal knowledge and aren't the sort of entity that can be bound by an oath.
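For anyone who wants the "distribution of word forms" point made concrete, here is a toy bigram model, my own minimal illustrative sketch (the corpus string is made up): it estimates P(next word | previous word) from counts over text and nothing else. No convictions, no knowledge, no oath.

```python
# Toy illustration of "modeling the distribution of word forms in text":
# a bigram model that estimates P(next word | previous word) from counts,
# and does nothing else. The corpus string is made up.
from collections import Counter, defaultdict

corpus = "the model predicts the next word given the previous word".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(prev):
    """Relative frequencies of the words observed after `prev` in the corpus."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# -> {'model': 0.333..., 'next': 0.333..., 'previous': 0.333...}
```

Scaling this up to a neural network with billions of parameters changes how good the output looks, not the kind of thing being computed.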