I found the recent @nytimes opinion piece on AI by @harari_yuval, @tristanharris, and @aza very interesting, and I agree with some of the overall thrust and points but object to MANY of the important details. So, time for a 🧵 detailing my critiques: nytimes.com/2023/03/24/opi…
Starting with the opening: they analogize a survey of AI experts estimating AI doomsday to airplane engineers estimating the probability a plane will crash. This is wildly misleading. Flight safety is based on very well-understood physics, mechanics, and data, whereas
these AI apocalypse estimates are completely unscientific, just made-up numbers, there's nothing meaningful to support them. And AI experts are biased: they benefit from the impression that AI is more powerful than it is and could easily deceive themselves into believing it.
Airlines don't benefit from telling you flying is risky; they benefit from getting you safely from A to B. With AI we don't have a specific commercial goal like this, so value is generated from the aura of omnipotence radiating from AI--and xrisk AI doomsday plays right into this.
When I see that X% of "AI experts" believe there's a Y% chance AI will kill us all, sorry, but my reaction is: yeah, that's what they want us to think, so we are in awe of their godlike power and trust them to save us. It's not science.
Next point I LOVE: "Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems ..."
"... AI systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them." Amen to this!!
But then a highly problematic line comes next:
"A race to dominate the market should not set the speed of deploying humanity’s most consequential technology." I 100% agree in spirit, but hold on: "humanity's most consequential technology"?!? Are you seriously putting chatbots above antibiotics, pasteurization, the internet,
cell phones, smart phones, cars, planes, electricity, the light bulb, ... Chatbots are fun new apps that'll make a lot of tasks more efficient, but claiming they're humanity's most consequential tech is an ENORMOUS assumption to glibly sneak in there.
Maybe you mean all of AI not just GPT? Still, AI wouldn't have much impact on us if we weren't connected to each other on the internet, so why not say instead the internet is the most consequential tech? Or computers? Because that wouldn't make headlines today, whereas AI does.
It is "harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities". I sort of agree, but many things that look exponential are actually logistic. For AI, we will face challenging limits, not just a fading Moore's law, also
data: we're already training LLMs now on most human written text, so how do we keep increasing this finite supply? Even if we could, how do we know AI capabilities will continue exponentially and not plateau? We don't. They might, they might not: it's an assumption, not a fact.
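To make the exponential-vs-logistic point concrete, here's a minimal sketch (my own illustration with made-up parameters, not anything from the article): the two curves are nearly indistinguishable early on, and only later does the logistic one reveal its ceiling.

```python
# Toy comparison: exponential vs logistic growth (illustrative numbers only).
import math

def exponential(t, a=1.0, k=0.5):
    return a * math.exp(k * t)

def logistic(t, L=100.0, k=0.5, t0=math.log(99) / 0.5):
    # Same early growth rate k; L is a hard ceiling (think: finite training data).
    # t0 is chosen so that logistic(0) == exponential(0) == 1.
    return L / (1 + math.exp(-k * (t - t0)))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exp={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
# Early on the two track closely; by t=20 the exponential has exploded while
# the logistic has quietly saturated near its ceiling L.
```

The point isn't the specific numbers: it's that from inside the early phase you cannot tell which curve you're on.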
Next: "AI’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, AI is seizing the master key to civilization." Hmm, some important truth here but also more reckless exaggeration:
GPT4 can speak fluently, so maybe it's mastered language in that sense--but to manipulate us it'd need to measure the impact its words have on us, and it cannot. Donald Trump could see what his words do to his crowds of supporters, what actions they took because of his speeches.
Chatbots are not remotely in a position like that. They produce text, and they might accidentally manipulate individuals (like trying to convince @kevinroose to leave his wife), but mastering linguistic fluency is NOT the same as mastering the manipulative power of language.
"What would it mean for humans to live in a world where a large percentage of stories, images, ... are shaped by nonhuman intelligence" --good start, I'm with you -- "which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind"?!?
How did we just jump from generative AI to this level of omnipotence? Maybe this is down the future road, but the article seems to be about GPT4 and sequels and these AIs just don't know enough about us to have these superpowers yet. So I agree that this is an enormous risk but
worry the framing here is overly flattering to current AI capabilities and chatbots. Google knows what ads I'm likely to click and what products I'm likely to buy--GPT4 doesn't know what any conversation it has with me will do to my real-world actions at all.
"In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?"
It is SUCH a leap from a rigid mechanical game like chess to these amorphous real-world settings where winning isn't even defined.
"AI could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. "
Fascinating point, frightening prospect, and unfortunately I do find this plausible:
not necessarily that AI will produce great cultural artistic achievements (it might, it might not), but it's sure likely to produce the kind of consumer-oriented successes that capitalism selects for (ha, sorry, couldn't resist the pop culture jab here).
But this nice sentence
is followed by an utterly silly one: "By 2028, the U.S. presidential race might no longer be run by humans."
Nope.
I mean, sure, campaigns will use data and algos, but Obama did that in 2008; the tools will be fancier in 2028, but come on, it'll still be run by humans.
"Humans often don’t have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism."
I LOVE this point!!!
But then the interpretation of this with AI is slippery:
"That cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by nonhuman intelligence?"
As the authors point out, it's already colored by social media and much other technology.
I don't see a huge paradigm leap into a nonhuman world--I just see an acceleration and expansion of the role algorithms play in society. I agree with their worries but find them overplaying the role of AI here--I see more of a tech continuum than they do.
Thread to be continued!
"by gaining mastery of language, AI would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, AI could make humans pull the trigger, just by telling us the right story."
Somehow autonomy and careful planning of actions have snuck into this mastery of language. GPT4-type apps DO NOT know their impact or plan their actions; they're not aware of the world we live in, so I find this a big stretch. Maybe a risk down the road, but it's a long road, so
again my beef here is conflating superficial fluency of language with some kind of omnipotent understanding of the impact of words and some autonomous desire to leverage that. Let's say likely a valid risk eventually but this ain't just a matter of GPT-style linguistics.
Won't go into details on the parts about social media but I LOVE that part of the article and fully agree and think they expressed it powerfully and beautifully. Please read it!
After describing social media as our first contact with AI, a strange line follows: "Large language models are our second contact with AI."
Social media relies heavily on large language models. Look up Facebook's RoBERTa, for instance.
So maybe chatbots rather than LLMs.
"But on what basis should we believe humanity is capable of aligning these new forms of A.I. to our benefit?"
A lot to unpack here, but for now I'll just point out that I see chatbots much like social media algs: they'll be plenty aligned with the profits of the tech giants.
So the problem isn't aligning them, it's deciding whose benefit they'll be aligned with--and the obvious answer is the companies commercializing them. So I totally agree with the problem the authors raise but I don't find it new at all here with chatbot AI.
The following paragraph is so beautifully stated and important that I cannot skip it: "The time to reckon with AI is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language..."
"... itself is hacked, the conversation breaks down, and democracy becomes untenable. If we wait for the chaos to ensue, it will be too late to remedy it."
YES YES YES!! This is so important and I've never seen it stated so clearly and so well! Thank you!!
"When godlike powers are matched with commensurate responsibility and control, we can realize the benefits that A.I. promises. We have summoned an alien intelligence."
No need for "godlike" and "alien intelligence" here; needlessly diluting your important point with hype.
I'm sorry to be so critical as I do think the essay is fascinating and the conclusions of what to do are spot on and extremely important. But all the interwoven hype is dangerously misleading and free advertising for already overly powerful+reckless tech companies.
Fin.
There are growing calls for AI regulation, incl. from @elonmusk, @miramurati (the @OpenAI lead behind ChatGPT), and Rep @tedlieu. But what should it look like? Broad AI reg is tricky, so in a new @SciAm piece I suggest measures we could take immediately to help navigate our new chatbot-infused world.🧵
AI is not one thing: it’s a range of techniques w/ a range of applications. Hard to regulate across autonomous weapons, facial recognition, self-driving cars, discriminatory algorithms, economic impacts of automation, and the slim but nonzero chance of catastrophic AI disaster.
Don't let the challenge of broad AI regulation distract us from regulating chatbots ASAP. Chatbots (meaning generative text in any form) are a big corner of AI w/ the power to influence the way we see the world, process information, and interact w/ each other.
I just had the most vivid dream I can remember in years; the interpretation seems quite obvious but for context my baby woke up in the middle of the night and I read a bunch of newspaper articles on AI while waiting for him to fall back asleep 😂
Here's what I dreamt:
I was in a hotel lobby with a group of guests I didn't know. We were escorted to a shuttle bus to take us to another building with our rooms, but it was a driverless bus that simply called itself the "AI bus". We found it amusing at first, but then
it tried to take a shortcut by going up a steep hill, but it couldn't make it and started rolling back down. This concerned us all, but we couldn't stop it or open the doors to get out. Next it started racing faster to build up enough speed to attempt the hill again.
Everyone is amazed at OpenAI and ChatGPT, but don't forget: the transformer was invented by Google, the first LLM was Google's BERT, and Google made an apparently impressive chatbot (LaMDA) before ChatGPT but didn't release it publicly since they didn't feel it was safe to do so.
Since this tweet went further than I expected, some additional clarifications, corrections, and excellent points raised by others in the comments:
(1) BERT wasn't the "first" LLM (sorry!): GPT-1 and @allen_ai's ELMo came before it (they were roughly concurrent with each other), though going way back from contextual to static embeddings, arguably the first was word2vec, which was developed by Google.
Here's the ultimate irony about the EA movement: they just *caused* one of the large-scale harms that they supposedly are trying to circumvent. And this was not a coincidence; if you follow the logic of the movement, you'll see that this was inevitable. Let me explain 🧵 1/7
In a nutshell, EA is about using math to try to maximize the charitable impact one can have. But one quickly realizes this means that instead of doing charitable acts, one should just earn as much money as possible and then donate it. @willmacaskill suggested this to @SBF_FTX early on. 2/7
Mathematically, to maximize one's money, one should do whatever it takes to make more, even if that means ethical breaches: shady business dealings, offshore tax havens, etc. If one can get away with it, one *should* do it, to maximize money and hence charitable impact. 3/7
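To spell out that logic with a toy example (my own made-up numbers, not anything from the thread): if impact scales linearly with money, a bet that risks total ruin looks mandatory whenever its expected value is positive.

```python
# Toy expected-value calculation under a linear "money = impact" objective.
bankroll = 1_000_000

safe_ev = bankroll                            # keep what you have
gamble_ev = 0.51 * (2 * bankroll) + 0.49 * 0  # 51% double, 49% lose everything

print(f"safe EV:   ${safe_ev:,.0f}")    # $1,000,000
print(f"gamble EV: ${gamble_ev:,.0f}")  # $1,020,000 -> the math says take it
# A risk-neutral maximizer takes this bet every time it's offered, yet the
# chance of surviving n repeated rounds is 0.51**n, which goes to zero.
```

That gap between "positive expected value" and "almost-sure ruin" is exactly the failure mode the movement's own math invites.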