Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician". vice.com/en/article/jgp…
/1
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.
/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.
/3
However, the quotes in the article leave me very concerned that the artists either don't really understand the technology they're working with or have expectations of the general AI literacy in Denmark that are probably way too high.
/4
The general idea seems to be "train an LM on fringe political opinions and let people add to that training corpus".
/5
Combine that with the claim that the humans in the party are "committed to carrying out their AI-derived platform" and this "art project" appears to be using the democratic process itself as its material. Such a move seems disastrously anti-democratic.
/6
Side note: I'm sure Danes will really appreciate random people from "all around the globe" having input into their law-making.
/7
I'd like to think there are better ways to think outside the box in policy making than putting fringe policy positions in a text blender (+ inviting people to play with it further) and seeing what comes out.
/8
This is nonsensical and a category error: "AIs" (mathy maths) aren't the kind of entity that can be held accountable. Accountability rests with humans, and anytime someone suggests moving it to machines they are in fact suggesting reducing accountability.
/9
Sorry, this has been tried. It was called Tay and it was a (predictable) disaster. What's missing in terms of "democratizing" "AI" is shared *governance*, not open season on training data.
/10
This paragraph seems inconsistent with the rest of the article. That is, I don't see anything in the rest of the proposals that seems like a good way to "use AI to our benefit."
/11
And this is downright creepy. I thought that "representative democracy" means that the elected representatives represent the people who elected them, not their party and surely not a text synthesis machine.
/12
Finally, I can just tell that some reading this thread are going to reply with remarks about politicians being thoughtless text-synthesizing machines. Don't. You can be disappointed in politicians without dehumanizing them, & without being nihilistic about the whole process.
I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except the reporter had talked to me so I was (increasingly morbidly) curious how my words were being used.
>>
The second paragraph (and several others) is actually the output of their LLM. This is flagged in the subhead and in the third paragraph. I still think it's terrible journalistic practice.
Hi folks -- time for another #AIhype take-down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).
/2
Let's pause to once again note that the use of "AI" in this way suggests that "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.
/3
It's good that @wired is covering this and shedding light on the unregulated mess that is the application of chatbots (and other so-called "AI") to mental health services.
@WIRED Surely it must be possible to cover these stories without writing headlines that suggest the "AI" is something that can have agency. No AI is "trying" to do anything.
>>
Furthermore, somehow this article fails to get into the ENORMOUS risks of the surveillance side of this. How are those companies handling user data? Are they ensuring those conversations only stay on the user's local device (hah, that would be nice)? Who are they selling it to?
Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (including creative) activity with the cognitive activity itself.
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?
>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?
No, a machine did not testify before Congress. It is irresponsible for @jackclarkSF to claim that it did, and for @dannyfortson to repeat that claim in the Sunday Times with no distance or skepticism.
>>
@jackclarkSF @dannyfortson Here is what the English verb "testify" means (per @MerriamWebster). 3/4 of these are things that a language model can't do: it can't swear an oath, it can't speak from personal knowledge or bear witness, and it can't express a personal conviction.
>>
@jackclarkSF @dannyfortson @MerriamWebster Those are all category errors: language models are computer programs designed to model the distribution of word forms in text, and nothing more. They don't have convictions or personal knowledge and aren't the sort of entity that can be bound by an oath.
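To make "model the distribution of word forms" concrete, here is a minimal sketch (mine, not from the thread; the toy corpus and the function name are hypothetical) of the simplest possible language model, a bigram model. All it does is count which word forms follow which, then sample from those counts:

```python
import random
from collections import Counter, defaultdict

# Toy corpus (hypothetical, for illustration only).
corpus = ("the committee will testify today . "
          "the committee will meet today .").split()

# Count how often each word form follows each other word form.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word form in proportion to observed counts."""
    options = follows[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "Generate text": pure statistics over word forms -- no knowledge,
# no convictions, no witness to anything.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Everything such a program outputs comes from counts over word forms; there's no one in there with personal knowledge or convictions, and nothing that could swear an oath.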