Gary Marcus
“AI’s leading critic” —@IEEESpectrum. Spoke to US Senate. Scientist. Author of 6 books. Founder/CEO Geometric Intelligence (acq. by Uber).
Nov 1 5 tweets 1 min read
⚠️⚠️⚠️
I have a PhD in brain and cognitive science from MIT.

𝗜𝗻 𝗷𝘂𝘀𝘁 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝟮𝟰 𝗵𝗼𝘂𝗿𝘀, 𝗜 𝗵𝗮𝘃𝗲 𝘀𝗲𝗲𝗻 𝟰 𝘀𝗶𝗴𝗻𝘀 𝗼𝗳 𝗯𝗿𝗮𝗶𝗻 𝗱𝗮𝗺𝗮𝗴𝗲 𝗮𝗻𝗱/𝗼𝗿 𝗱𝗲𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝗧𝗿𝘂𝗺𝗽:

- 𝗠𝗼𝘁𝗼𝗿 𝗱𝗲𝗳𝗶𝗰𝗶𝘁: swaying, missing, and stumbling in front of the garbage truck
- 𝗟𝗼𝘀𝘀 𝗼𝗳 𝗰𝗼𝗴𝗲𝗻𝗰𝘆: inappropriate musical pause/swaying, similar to a longer incident a few weeks ago
- 𝗗𝗶𝘀𝗶𝗻𝗵𝗶𝗯𝗶𝘁𝗶𝗼𝗻: threatening Liz Cheney with violence
- 𝗔𝗽𝗵𝗮𝘀𝗶𝗮: description of Musk as “good at computer”

Things are degenerating, fast.

𝙒𝙝𝙮 𝙞𝙨𝙣’𝙩 𝙩𝙝𝙚 𝙢𝙚𝙙𝙞𝙖 𝙧𝙚𝙥𝙤𝙧𝙩𝙞𝙣𝙜 𝙤𝙣 𝙩𝙝𝙞𝙨 𝙥𝙖𝙩𝙩𝙚𝙧𝙣? I would very much welcome the thoughts of anyone specializing in neurology. What’s the best diagnosis here? Prognosis?
Aug 3 4 tweets 2 min read
I just wrote a great piece for WIRED predicting that the AI bubble will collapse in 2025, and now I wish I hadn’t.

Clearly, I got the year wrong. It’s going to be days or weeks from now, not months. btw this is not a joke, and here are the first two paragraphs of the piece on why the bubble would collapse soon, which I submitted earlier in the week.

The Generative AI Bubble Will Collapse in 2025

Generative AI took the world by storm in November 2022, with the release of ChatGPT. 100 million people started using it, practically overnight. Sam Altman, the CEO of OpenAI, the company that created ChatGPT, became a household name. And at least half a dozen companies raced OpenAI in an effort to build a better system. OpenAI itself raced to outdo “GPT-4”, their flagship model, introduced in March of 2023, with a successor, presumably to be called GPT-5. Virtually every company raced to find ways of adopting ChatGPT (or similar technology, made by other companies) into their business.

There is just one thing: Generative AI, at least as we know it now, doesn’t actually work that well, and maybe never will.
Dec 29, 2023 8 tweets 2 min read
OpenAI is in a heap of trouble, and it’s not just text.

Long thread why (1/n), based on work with @Rahll.

Earlier this week we learned from the NYT lawsuit that ChatGPT is capable of copying full paragraphs of copyrighted text.
Mar 29, 2023 5 tweets 3 min read
a big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Krakovna, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4 futureoflife.org/open-letter/pa… 🙏 @FLIxrisk
Jan 8, 2023 6 tweets 4 min read
This popular defense of #GPT is fundamentally misguided.

Its main claim is that ChatGPT has substantially solved an earlier set of problems, and that’s just not true.

Consider examples like these just *from the last 48 hours*, all reinforcing my fundamental points.

[Thread] Everything I described on @ezraklein remains a problem.
👉ChatGPT continues to hallucinate
👉It continues to present untruths with (false) authority
👉It continues to create fake references to support its claims
👉As before, such output can easily fool humans, posing risks
[2/6]
Dec 6, 2022 8 tweets 4 min read
Tbh @nytimes @kevinroose this seems quite one-sided
👉you scarcely mention the risk of massive amounts of misinformation
👉“commendable steps to avoid the kinds of racist, sexist and offensive outputs” misses eg awful

🧵1/3 👉 you are too generous in presuming that long-standing “loopholes” like hallucinations, bias, and bizarre errors will “almost certainly be closed”, when the field has struggled w them for so long.

👉 no apparent effort to ask skeptics (like myself, Bender, @Abebab, etc)

2/3
Nov 22, 2022 8 tweets 2 min read
*Fabulous* question: How come smart assistants have virtually no ability to converse, despite all the spectacular progress with large language models?

Thread (and substack essay), inspired by a reader question from @___merc___

1. LLMs are inherently unreliable. If Alexa were to make frequent errors, people would stop using it. Amazon would rather you trust Alexa for timers and music than have a system with much broader scope that you stop using.
Nov 9, 2022 4 tweets 1 min read
2. The problem is about to get much, much worse. Knockoffs of GPT-3 are getting cheaper and cheaper, which means that the cost of generating misinformation is going to zero, and the quantity of misinformation is going to rise—probably exponentially.

3. As the pace of machine-generated misinformation picks up, Twitter’s existing effort, Community Notes (aka Birdwatch), which is mostly done manually, by humans, is going to get left in the dust.
Sep 25, 2022 9 tweets 3 min read
Marcus 2012: “To paraphrase an old parable, [deep learning] is a better ladder; but a better ladder doesn’t necessarily get you to the moon”

@YLeCun, 2022, “Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there”

🧵1/9 Marcus, 2018: “I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”
LeCun 2022: Today's AI approaches will never lead to true intelligence
Aug 24, 2022 8 tweets 2 min read
Just watched Noam Chomsky give a fascinating and up-to-the-minute talk on deep learning, science, and the nature of human language.

I loved the first half and find myself deeply skeptical of the second

A 🧵 summarizing what he said, and my own take.

Chomsky made two key claims:

1. What he wants to understand is *why human language is the way that it is*. In his view, large language models have told us nothing about this scientific question (but are fine for engineering, e.g. speech transcription).

I fully agree.
Jul 3, 2021 9 tweets 3 min read
No, @ylecun, high dimensionality doesn’t erase the critical distinction between interpolation & extrapolation.

Thread on this below, because the only way forward for ML is to confront it directly.

To deny it would invite yet another decade of AI we can't trust. (1/9)

Let's start with a simple example, drawn from my 2001 book The Algebraic Mind, that anyone can try at home: (2/9)
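A minimal sketch of the kind of demonstration meant here, assuming Python with NumPy and scikit-learn; this is an illustrative stand-in, not the actual binary-identity experiment from The Algebraic Mind. A standard multilayer perceptron fits y = x almost perfectly inside its training range and misses it badly outside that range.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train on the identity function, but only on inputs drawn from [-1, 1].
x_train = rng.uniform(-1.0, 1.0, size=(2000, 1))
y_train = x_train.ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x_train, y_train)

# Interpolation: inputs inside the training range are handled almost perfectly.
print(net.predict(np.array([[0.0], [0.5], [-0.7]])))   # close to 0.0, 0.5, -0.7

# Extrapolation: outside the training range the tanh units saturate and the
# predictions plateau, even though the rule "output equals input" could not
# be simpler to state.
print(net.predict(np.array([[3.0], [10.0], [-5.0]])))  # nowhere near 3, 10, -5

The point is not that this particular network is badly tuned; a learner that only fits the training distribution has no guarantee of respecting the rule outside it, which is exactly the interpolation/extrapolation distinction at issue.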
Apr 19, 2021 6 tweets 1 min read
Sure, it is easier to swap algorithms than to change human minds, but the rest of this @pmdomingos argument is flawed.

Here's a mini-thread on why.
(1/6)

Current algorithms *do* perpetuate historical biases, and have done so repeatedly, as @LatanyaSweeney, @timnitGebru, @jovialjoy, @mathbabedotorg & others have shown over and over.

These failings are not a coincidence; they're inherent in how current algorithms are built.

2/6