This is a thread about what was right and what was wrong in the recent Senate hearings on AI harms featuring @OpenAI CEO @sama, as well as @IBM's @_ChristinaMont and @GaryMarcus (one of the researchers in this field I respect most).
Altman’s call for regulation and his openness about large language models (LLMs) are obviously to be commended. As far as I could tell, there was a genuine effort to make the technology understandable to lawmakers.
The best part was Altman’s emphasis on the potential labor market harms and issues related to control of data. These remain my most important concerns about the rollout of new AI tech.
But there are also things that worry me about the hearings and the surrounding discussions.
1⃣ The major thing we have to worry about is not that generative AI will create mass unemployment by displacing most workers. It's the inequality it will generate.
It could multiply the types of effects we have seen from other digital automation: lower real wages for affected workers and huge gains for those controlling the technology.
2⃣ This discussion is often coupled with emphasis on big productivity gains from #AI. I worry these gains will be modest, as humans are sidelined and automation never turns out to be as capable as imagined.
Of course, those who control technology make a lot of money in the process.
3⃣ The discussion of the control of data did not get into the thorniest issues either.
Was it okay for #OpenAI to use the labor of all the people who contributed to #Wikipedia and wrote all the books now digitized? Would #ChatGPT really be worth anything without this labor?
4⃣ There was no discussion of how centralized control of information is becoming, esp. as more and more people start using large language models.
How do we deal with the fact that a small group of engineers will decide what “truth” #ChatGPT shows to people?
5⃣ The sweeping implications of LLMs for democratic politics were not discussed. They are not just about deep fakes and disinformation.
They are about pacifying citizens into “consumers of centralized, entertaining, and authoritatively provided information”, which would be the death blow to democracy.
6⃣ I applaud Altman for calling for regulation. But what type of regulation? And for whose benefit? Three issues are pertinent here...
1. It looks likely that #Microsoft/#OpenAI and #Google will be near-monopolies in the space of LLMs and generative AI. Doesn’t that need to be regulated by antitrust? Coupled with excessive centralization of information, isn’t that a threat to democracy?
2. Would regulation of next-generation foundation models, including those from startups, make it even harder for anybody to challenge #Microsoft/#OpenAI’s lead?
3. What about slowing down collection of data and training of these models, as the open letter (futureoflife.org/open-letter/pa…) signed by thousands of #AI researchers, entrepreneurs and academics called for?
I was an early signatory of that letter not because I agree with its concerns about existential risk from super intelligent AI, or because I believe a six-month pause is enough, but because it is a wake-up call about these dangers and it brought a broad coalition of people together.
Finally and relatedly, we should really stop worrying about super intelligent, evil AI. I am impressed by, e.g., GPT-4, but it is NOT intelligent and will not take over the world.
It can do huge damage to wages and democracy, and worsen inequality, without being super intelligent.
So I end with a quote from H.G. Wells (Chapter 1 of our book), who understood the issue better in 1895 than our tech leaders and policymakers understand it today: “[the technology’s] triumph had not been simply a triumph over Nature, but a triumph over Nature and the fellow man”.
Put differently, we shouldn’t be mesmerized by the advances we make in controlling our environment, nature, and information, and forget that these are technologies being used by humans and very often with the explicit purpose of empowering themselves and controlling others.
In just a few minutes, I will join @m_sendhil and @sekreps to further discuss the consequences of AI as part of the President's Council of Advisors on Science and Technology (PCAST). Join us if you are able.
This is a wonkish thread about information, prices and decentralization. It touches on topics often discussed in economics, but with a revisionist take that might have relevance beyond economics, in particular about AI and control of information.
Hayek’s argument offers an original and ingenious “computational” critique of central planning. His basic premise is that there is a huge amount of dispersed knowledge in society about a very large number of goods and services (e.g., people’s preferences).
We have our first completely open event, online, with @equitablegrowth tomorrow. People can register here to attend bit.ly/3MBTTAI
It will also be live streamed on YouTube.
It is meaningful to have this event, since a key concept of our book is shared prosperity. We argue that industrialized nations were able to achieve shared prosperity, most notably in the decades that followed WWII. Growth was rapid and wages for all workers increased robustly.
Here is a thread to explain why we wrote it (and why we are excited to share the ideas that are the basis of the book).
The book is a corrective against a particular brand of techno-optimism that is commonplace in US tech circles, journalism and academia.
We wrote it because we think this techno-optimism is not just wrong, but also dangerous.
The techno-optimism we have in mind is not so unmoored as to claim that technology will bring about the singularity and make everybody fabulously wealthy, healthy and happy (though there are some who claim that).
It's hard not to be disappointed with the outcome. Erdogan is close to victory, even if there will most likely be a runoff. This is worse than most of us expected.
1. To make sense of it, it is important to first recognize that the Turkish electorate has become very nationalistic. The far-right MHP, allied with Erdogan, received 10% of the vote, despite the fact that nationalist votes were split between Erdogan, MHP, Iyi Parti and others.
2. The president and his allies completely controlled TV and print media and used it to fan the flames of nationalism, esp. with allegations of the opposition being in cahoots with Kurdish separatists. Combined with Kilicdaroglu being an Alevi, this may have been effective.