Daron Acemoglu
Jun 10 · 15 tweets
AI myths roundup. In conclusion, I want to summarize my main arguments and explain the overarching viewpoint. Briefly, I wrote these threads because I believe that there are almost “mystical” claims about AI, and especially generative AI (hence my language of myths).
I am convinced that generative AI is a very promising technology — but only if it is used in the correct way. I am also convinced that it is not currently being used in the correct way and the myths that I have argued against are partly responsible for this distorted path.
If we give up the idea that generative AI can create consciousness or human mind-like behaviors or the conceit that we are at the cusp of ultraintelligent machines, the AI discussion can be placed on a more productive grounding.
If we give up the utopia that generative AI will create superabundance and if we are more upfront about its shortcomings (as well as its impressive capabilities), we can have a more productive conversation about what our aspirations should be.
If we admit that AI can and should be regulated (including slowing down its uncontrolled rollout and never again repeating the kind of hype that ChatGPT generated), that would be an important step towards a more productive discussion of regulation.
The heart of the matter is that generative AI can become a tool for better human decision-making. This is particularly important because we are in the midst of a trend towards more and more knowledge work, which will most likely continue in the decades to come.
Generative AI could provide complementary tools to knowledge workers. These would create new tasks (for educators, nurses, creative workers, tradespeople and even blue-collar workers) and provide inputs into better decision-making for knowledge work.
But this is not the direction we are traveling. Rather, the current approach is repeating the same mistakes that technologists and business people made with digital technologies: excessive automation (ignoring the creation of new human tasks) and centralization of information.
This is both because of the vision of the tech leaders (the craze about autonomous machine intelligence and the mistaken view that downplays the value and versatility of human skills) and because of the industry structure (an oligopoly morphing into a duopoly for foundation models).
Why the centralization of information and the possible duopoly of Alphabet and Microsoft are so pernicious is explained in @baselinescene’s and my NYT op-ed: nytimes.com/2023/06/09/opi…
In our book #PowerAndProgress, we also propose several regulatory steps to prevent this situation. But the most important one is to start articulating a shared aspiration: a future direction of technologies and AI that is more pro-human, empowering workers and citizens.
The regulatory ideas we propose include: (1) digital ad taxes to change the business model of tech platforms; (2) regulation of data use and well-defined property rights over data (including data unions), so that large language models cannot expropriate others’ creative work;
(3) potential breakup of Big Tech and moratorium on their M&A (to diminish their control over the future of technology and their huge social and economic power); (4) leveling the playing field between capital and labor by increasing taxes on capital and reducing payroll taxes;
(5) government subsidies and competitive prizes for using (generative) AI in a more pro-human way, for example for creating new tasks, new work and new ways of decentralizing information; (6) institutional changes to increase worker voice in the direction of technology.
How digital technologies were misused, how AI is heading in the same direction, and further justification for these policies (as well as ways in which they may or may not go wrong) are discussed in our book #PowerAndProgress: amazon.com/Power-Progress…

More from @DAcemogluMIT

Jun 9
In this thread, I offer counterarguments to the claim that AI cannot be regulated.
First, it is possible to slow down the development of AI, and especially of large models, and we can note that this has been done before with other technologies.
Second, many governments currently lack expertise in this area, but that expertise can be built.
Jun 9
AI myth 5. You cannot regulate AI. Well, I think I don’t need to work very hard to fight against this one. But it is surprising how often one hears this claim. It has three versions, none of which holds much water in my opinion.
Version 1. AI is so dynamic and ubiquitous that it cannot be regulated. It is claimed that calls to halt the training of large language models are misguided because they could never be implemented: if you prevent large companies from doing it, others will fill the void.
I don’t see why that should follow. One could have made the same argument about chemical or biological weapons: they, too, can be developed secretly in people’s backyards. But regulation has generally worked.
Jun 8
AI myth 4. The big benefits myth: that generative AI advances are creating tremendous social value. This again may be true or false. We just don’t have enough evidence to conclude one way or the other, and there are various concerns casting doubt on the strongest claims.
First, a lot of the “wisdom” of large language models comes from the fact that they have expropriated the creative data of others. What would ChatGPT be without Wikipedia? Without digital books? It is impossible to know, but I would guess not much.
The informational benefit of ChatGPT should then be measured relative to other sources that people could easily access to get information. ChatGPT can be more interactive, but with its current architecture, this comes at the cost of losing references and the provenance of information.
Jun 7
AI myth 3. The abundance myth. Building on AI myths 1 and 2, a third and perhaps more pernicious myth emerges: automation and human-like performance by AI will bring economic abundance, from which all or most of society will benefit.
There are many versions of this, going back to I.J. Good’s statement that “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
Or futurist and Google technologist Ray Kurzweil’s periodic pronouncements that “singularity” is just around the corner, e.g., amazon.com/Singularity-Ne…. Or less fanciful versions where AI boosts productivity, so that most people can get by without work.
Jun 6
AI myth 2. The path to AGI: advances in generative AI are taking us towards artificial general intelligence. This has been claimed most recently, e.g., by several leading AI experts and entrepreneurs: safe.ai/statement-on-a…. Or in recent articles showing “sparks of AGI” in GPT-4: arxiv.org/abs/2303.12712.
Once you accept that the machine-human mind analogies at the root of AI myth 1 are questionable, AI myth 2 becomes less convincing as well. If there is something quite special about the human mind, even large generative AI models will not get us there.
But it goes beyond this. Even if AGI were feasible, the idea that the architecture of generative AI, based on predicting the next word or words after a given string, can achieve highly sophisticated human cognition seems a big stretch.
Jun 5
AI myth 1. Turing hypothesis, about computers and the human mind. Alan Turing made huge breakthroughs in mathematics, including with his analysis of computable functions. His ideas also shaped the way that many people think about the human mind.
A universal Turing machine can compute any computable function. Turing then worked on whether computers (and thus Turing machines) can be intelligent, meaning that they can perform the mental steps that humans do.
But he and many others later came to conceive of the human mind as a Turing machine, too. If this is so, then machine intelligence, as an intellectual and computer science program, makes sense.