The authors look deeply into a use case for text that is ungrounded in either the world or any commitment to what's being communicated, but is nonetheless fluent, apparently coherent, and of a specified style. You know, exactly #GPT3's specialty.
2/n
What's that use case? The kind of text needed, and apparently needed in quantity, for discussion boards whose purpose is recruitment and entrenchment in extremist ideologies.
3/n
And guess what? They find that #GPT3's trick of "few shot" training is definitely up to this challenge.
4/n
I don’t think GPT-3 could produce text written from the point of view of a conspiracy theorist if it didn’t have such texts among its training data. But, in the spirit of healthy skepticism, if someone wants to explain how it could, I’m curious about your theories. #NLProc
5/n
The next question then, is: how much such data does it need? Are we seeing a reflection of lots of this garbage getting sucked into the maw of the data-hungry algorithm? Or does it only take a little?
6/n
And if it only takes a little, that’s actually much worse, because it’s much harder to design processes that can filter out tiny amounts of this. E.g. would examples quoted in serious articles discussing the threat of online fora like this be enough?
7/n
My takeaway 1: ML systems that rely on datasets too large to actually examine are inherently unsafe. (Quote previous tweet on this.)
My takeaway 2: This paper shows the immense value of interdisciplinary perspectives in evaluating the potential risks of technology.
9/9
p.s. The paper does talk about #GPT3 having "knowledge" of various conspiracy theories. I think this is a category error, but it does not detract from the point the paper is making. For more on why, though, see aclweb.org/anthology/2020…
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
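(A toy illustration of that claim, not any real LLM's implementation: even a tiny bigram model, trained only on co-occurrence counts, will emit fluent-looking word sequences with no notion of truth or sources. The corpus and function names here are made up for the sketch; real LLMs differ in scale, not in kind of objective.)

```python
import random
from collections import defaultdict

def train_bigram(corpus):
    """Count word-to-next-word transitions in a list of sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, rng, max_len=20):
    """Sample a sequence by following the learned distribution.

    The model has no access to facts or sources -- it only picks
    next words in proportion to how often they followed the
    current word in training.
    """
    word, out = "<s>", []
    while len(out) < max_len:
        nxts = counts[word]
        r = rng.random() * sum(nxts.values())
        for cand, c in nxts.items():
            r -= c
            if r <= 0:
                word = cand
                break
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

corpus = [
    "the model outputs plausible text",
    "the model has no grounding",
    "plausible text is not information",
]
counts = train_bigram(corpus)
print(generate(counts, random.Random(0)))
```

The output is always built from distributional statistics alone: plausible-sounding by construction, with no link back to any source that could be checked.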
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Both of those things can't be true at the same time. /2
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that ppl writing abt whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs abt what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.
>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: /3
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"
>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."