Gary Marcus
“A beacon of clarity”. Spoke at US Senate AI Oversight committee. Founder/CEO Geometric Intelligence (acq. by Uber). Rebooting AI & Taming Silicon Valley.
Dec 29, 2023
OpenAI is in a heap of trouble, and it’s not just text.

Long thread on why (1/n), based on work with @Rahll.

Earlier this week we learned from the NYT lawsuit that ChatGPT is capable of copying full paragraphs of copyrighted text.
Mar 29, 2023
a big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Krakovna, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4 futureoflife.org/open-letter/pa… 🙏 @FLIxrisk
Jan 8, 2023
This popular defense of #GPT is fundamentally misguided.

Its main claim is that ChatGPT has substantially solved an earlier set of problems, and that’s just not true.

Consider examples like these just *from the last 48 hours*, all reinforcing my fundamental points.

[Thread]

Everything I described on @ezraklein remains a problem.
👉ChatGPT continues to hallucinate
👉It continues to present untruths with (false) authority
👉It continues to create fake references to support its claims
👉As before, such output can easily fool humans, posing risks
[2/6]
Dec 6, 2022
Tbh @nytimes @kevinroose this seems quite one-sided
👉you scarcely mention the risk for massive amounts of misinformation
👉“commendable steps to avoid the kinds of racist, sexist and offensive outputs” misses eg awful

🧵1/3

👉 you are too generous in presuming that long-standing “loopholes” like hallucinations, bias, and bizarre errors will “almost certainly be closed”, when the field has struggled with them for so long.

👉 no apparent effort to ask skeptics (like myself, Bender, @Abebab, etc)

2/3
Nov 22, 2022
*Fabulous* question: How come smart assistants have virtually no ability to converse, despite all the spectacular progress with large language models?

Thread (and substack essay), inspired by a reader question from @___merc___

1. LLMs are inherently unreliable. If Alexa were to make frequent errors, people would stop using it. Amazon would rather you trust Alexa for timers and music than have a system with much broader scope that you stop using.
Nov 9, 2022
2. The problem is about to get much, much worse. Knockoffs of GPT-3 are getting cheaper and cheaper, which means that the cost of generating misinformation is going to zero, and the quantity of misinformation is going to rise—probably exponentially.

3. As the pace of machine-generated misinformation picks up, Twitter’s existing effort, Community Notes (aka Birdwatch), which is mostly done manually, by humans, is going to get left in the dust.
Sep 25, 2022
Marcus 2012: “To paraphrase an old parable, [deep learning] is a better ladder; but a better ladder doesn’t necessarily get you to the moon”

@YLeCun, 2022, “Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there”

🧵1/9

Marcus, 2018: “I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.”
LeCun 2022: Today's AI approaches will never lead to true intelligence
Aug 24, 2022
Just watched Noam Chomsky give a fascinating and up-to-the-minute talk on deep learning, science, and the nature of human language.

I loved the first half and find myself deeply skeptical of the second

A 🧵 summarizing what he said, and my own take.

Chomsky made two key claims:

1. What he wants to understand is *why human language is the way that it is*. In his view, large language models have told us nothing about this scientific question (but are fine for engineering, e.g. speech transcription).

I fully agree.
Jul 3, 2021
No, @ylecun, high dimensionality doesn’t erase the critical distinction between interpolation & extrapolation.

Thread on this below, because the only way forward for ML is to confront it directly.

To deny it would invite yet another decade of AI we can't trust. (1/9)

Let's start with a simple example, drawn from my 2001 book The Algebraic Mind, that anyone can try at home: (2/9)
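The flavor of the at-home experiment can be sketched in a few lines. This is not the book's exact example (which trained networks on identity mappings over binary digits); it is a minimal stand-in using a nearest-neighbor regressor, the textbook case of a purely interpolative model, and the function `knn_predict` is an illustrative name of my own:

```python
import numpy as np

def knn_predict(x_train, y_train, x_query, k=3):
    """Predict by averaging the targets of the k nearest training points."""
    preds = []
    for xq in np.atleast_1d(x_query):
        idx = np.argsort(np.abs(x_train - xq))[:k]  # k nearest neighbors
        preds.append(y_train[idx].mean())
    return np.array(preds)

# Train on y = 2x, but only for x in [0, 1].
x_train = np.linspace(0.0, 1.0, 101)
y_train = 2.0 * x_train

# Inside the training range the fit is essentially perfect...
print(knn_predict(x_train, y_train, 0.5))   # ~1.0 (true value 1.0)

# ...but outside it, predictions stay pinned near the boundary,
# because the "nearest" neighbors are all at the edge of the data.
print(knn_predict(x_train, y_train, 10.0))  # ~2.0 (true value 20.0)
```

The point of the sketch: no amount of extra training data *inside* [0, 1] fixes the failure at x = 10; the model interpolates among seen points rather than extrapolating the underlying rule, which is the distinction the thread is about.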
Apr 19, 2021
Sure, it is easier to swap algorithms than to change human minds, but the rest of this @pmdomingos argument is flawed.

Here's a mini-thread on why.
(1/6)

Current algorithms *do* perpetuate historical biases, and have done so repeatedly, as @LatanyaSweeney, @timnitGebru, @jovialjoy, @mathbabedotorg & others have shown over and over.

These failings are no coincidence; they are inherent in how current algorithms are built.

2/6