Recently, major AI industry players (incl. a16z, Meta, & OpenAI’s Greg Brockman) announced >$100M in spending on pro-AI super PACs. This is an attempt to copy a wildly successful strategy from the crypto industry: intimidating politicians away from pursuing AI regulation. 🧵
First, some context. This is not normal. Only one industry has ever spent this much on elections: crypto, which spent similar sums in 2024 through the super PAC Fairshake. (The only super PACs that spend more are partisan & associated with one party/candidate.)
In case you’re not that clued in to US politics, Fairshake has basically unparalleled influence across the political spectrum within Congress. Their story is instructive, as the pro-AI super PACs are being funded & staffed by many of the key figures behind Fairshake.
A few years ago, crypto had basically zero influence in Congress, w/ many members in favor of heavily regulating or even outright restricting it. After >$100M of spending in the 2024 elections, Fairshake has now achieved approximate political dominance.
In 2024, Fairshake accounted for the majority of total spending in some races. In a handful of races, Fairshake’s support was seen as potentially decisive for the outcome (eg, defeating anti-crypto Senator Sherrod Brown and Senate hopeful Katie Porter).
The rest of Congress got the message. Members of Congress who had previously been aggressively anti-crypto became much more muted on the issue. Today, there basically aren’t any anti-crypto members of Congress left, in either party.
My understanding is that politicians are advised crypto is the single most important industry to avoid pissing off. The AI industry is now entering the same tier of influence. From what I’m hearing, politicians around the country are already asking consultants how to be “pro-AI”.
Notably, the main recently announced pro-AI super PAC (Leading the Future) is gearing up to take a similar approach to Fairshake. Beyond the overarching strategy, Leading the Future shares some of the same key funders (eg a16z), staff, & advisors as Fairshake.
Unless something changes, we should expect the AI industry to achieve political dominance similar to crypto’s. This would mean freezing the progress of AI regulatory proposals in Congress, w/ most elected officials becoming nervous to even criticize the industry.
Now, it’s true that politicians sometimes vote against the preferences of donors, especially when their constituents have other preferences. But AI, like crypto, is a relatively low salience issue, where donor preference would be expected to outweigh voter preference.
If an issue is super high salience to voters, such that many will actually change their votes based on it (eg immigration, abortion, maybe climate change in a Dem primary), then politicians will be wise to align with voters, even if that irritates their donors…
But IF an issue is lower salience to voters, such that voters rarely change their vote over it (eg crypto, AI at least for now), AND political donors care a lot about the issue (and are savvy), THEN the politically wise thing is for politicians to prioritize donor preferences.
Crucially, savvy political donors don’t make their political ads about their issue if it’s a low salience issue (or if their position is unpopular). Fairshake, for instance, does not make political ads about cryptocurrency. They recognize that voters don’t care about crypto...
Instead, they spend money on ads calculated to inflict maximum damage on their opponents (or to maximally boost their preferred candidates) based on issues that voters do care about, such as immigration, inflation, and healthcare.
It doesn’t matter to the political calculus if the public disagrees w/ what donors want, so long as the public isn’t changing their votes over it. The political incentives push toward chasing donor money, since money funds ads that (somewhat) help with winning elections.
And even if a handful of politicians are willing to support AI policy & risk industry spending against them, having just a few champions for AI policy won’t allow for passing legislation as long as the clear majority in Congress will vote against it.
And congressional leadership, which effectively has a veto on all legislation, has similar incentives - they want to bring in lots of donor money for close races to help their party caucus (eg Senate Dems) win a majority. This creates more veto points on AI policy.
Normally, this is all kept in check by politicians sometimes being willing to spend political capital on what they want or what their staff wants. But crypto realized they could simply turn the dial up to 11. And AI interests just started running the same playbook.
I still expect some AI legislative negotiation to occur on the margins, or where industry is fine with it. But the AI industry may now effectively have a veto on almost all AI legislation, & previous battles (eg the moratorium) may be refought against a much stronger industry.
My sense is there’s generally a power law between “inputs” and “outputs” of technological progress. In this context, that manifests as “exponential increases in inputs over time yield a smooth exponential increase in time horizons over time” (ie a straight line on a semi-log plot).
🧵
Why should there be a power law? We actually see this sort of dynamic come up all the time in technological progress: from experience curve effects (think declining PV prices), to GDP growth, to efficiency improvements in various AI domains, to AI scaling laws.
And there are theoretical reasons to expect a power law, too. If ideas get harder to find over time, exponential inputs are needed for “consistent” progress. If each idea provides some proportionate improvement, then “consistent” progress cashes out as exponential growth.
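One toy way to make that concrete (my own notation, a standard semi-endogenous-growth setup, not anything stated in the thread): let A(t) be the output metric (eg time horizon) and I(t) the inputs.

```latex
% "Ideas get harder to find" as a power law (beta > 0); each idea
% gives a proportionate (multiplicative) improvement:
\frac{\dot{A}}{A} = \theta \, I(t)^{\lambda} A(t)^{-\beta}
% Feed in exponentially growing inputs:
I(t) = I_0 e^{g t}
% Guess A(t) = A_0 e^{h t} and match exponents (\lambda g = \beta h):
h = \frac{\lambda g}{\beta}
% So exponential inputs yield exponential outputs: a straight line
% on a semi-log plot, as claimed above.
```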
Imho this point is overstated. First off, algorithmic efficiency improvements have been large (a substantial fraction of the compute scale-up factor) and can still allow for effective scale-up. Second, the “unhobblings” could take multiple years.
On the first point: Epoch finds that in language models, pretraining algorithmic progress has been around half as impactful as compute scale-up. Naively, if compute scale-up stopped, progress would slow down by 3x. This is a decent amount, but not enough to say “2030 or bust”.
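Spelling out that naive arithmetic (stylized rates of my own, just restating the claim):

```latex
% c = contribution of compute scale-up to the progress rate,
% a = contribution of algorithmic progress, with a = c/2
% ("half as impactful"). Total rate: c + a = 3a.
% Freeze compute and only a remains:
\frac{\text{rate with compute}}{\text{rate without}}
  = \frac{c + a}{a} = \frac{2a + a}{a} = 3
```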
Now, maybe you think that without compute scale-up, algorithmic progress would be slower. But note that even if scale-up of the largest training runs has to stop, algorithmic progress is more dependent on experimental compute, which may hit limits later (tho I’m not sure when tbh).
New report:
“Will AI R&D Automation Cause a Software Intelligence Explosion?”
As AI R&D is automated, AI progress may dramatically accelerate. Skeptics counter that hardware stock can only grow so fast. But what if software advances alone can sustain acceleration? 🧵
If AI R&D is fully automated, there will be a positive feedback loop: AI performs AI R&D -> AI progress -> better AI does AI R&D -> etc.
Empirical evidence suggests this feedback loop could cause an intelligence explosion despite diminishing returns.
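A minimal toy of that loop (my own stylized model with made-up parameters, NOT the report’s actual analysis): let the software level S act as both the thing being improved and the research workforce, with r netting out the boost from better automated researchers against diminishing returns per idea.

```python
# Toy feedback loop: research effort scales with software level S
# itself once AI R&D is fully automated, so dS/dt = S**r.

def simulate(r: float, t_max: float = 10.0, dt: float = 1e-3):
    """Euler-integrate dS/dt = S**r from S(0) = 1."""
    s, t = 1.0, 0.0
    while t < t_max:
        s += dt * s ** r
        t += dt
        if s > 1e12:  # treat as a numerical "explosion"
            return t, s, True
    return t, s, False

for r in (0.5, 1.0, 1.5):
    t, s, exploded = simulate(r)
    verdict = "finite-time blow-up" if exploded else "no explosion"
    print(f"r={r}: S={s:.3g} at t={t:.2f} ({verdict})")
# r < 1: diminishing returns win and growth stays tame.
# r > 1: growth accelerates without bound even though each individual
#        improvement gets harder, ie an "intelligence explosion".
```

The point of the toy: diminishing returns per improvement are compatible with explosive growth, so long as the feedback from better researchers outpaces them (r > 1 here).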
Recent advances (eg described by @METR_Evals) suggest AI R&D might be ~fully automated within years.
Imagine AI systems handling the entire AI development cycle – formulating research questions, designing experiments, developing new AI systems, research management, etc.
I created graphs based on the AI X-risk survey results from Zach Stein-Perlman, @benwr, & @KatjaGrace of @AIImpacts. These figures illustrate the distribution of survey responses. (Note: I rounded responses to the nearest percent, & one response of "<1%" was rounded down to 0%.)
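For concreteness, a hypothetical sketch of that rounding step (the raw response strings are made up for illustration; the actual survey data format may differ):

```python
def to_percent(raw: str) -> int:
    """Round a response like '12.4%' or '<1%' to the nearest whole percent."""
    if raw.strip() == "<1%":
        return 0  # the "<1%" response was rounded down to 0%
    return round(float(raw.strip().rstrip("%")))

responses = ["<1%", "5%", "12.4%", "50%"]
print([to_percent(r) for r in responses])  # [0, 5, 12, 50]
```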
@benwr @KatjaGrace @AIImpacts Here's the graph for the other wording:
@benwr @KatjaGrace @AIImpacts And here's the combined results:
When “List of Lethalities” and/or “Death with Dignity” first came out (I honestly forget which one), my initial reaction was irritation. I felt like the argument could have been phrased differently, without being so angry, and I worried about a backlash. But pretty quickly...
my reaction reversed. My sense is people ended up taking the piece as a wake-up call to focus a bit more on important parts of the problem, and it expanded discourse in helpful directions.
I don't know what effects FLI's letter or the TIME piece will have, but I don't think it's crazy to imagine something somewhat directionally similar happening in society at large.
"Sure, maybe you can get liberals on board with your government regulation, but you'll never get pro-market conservatives on board"
Pro-market conservatives:
"Yes, maybe you'll get them to do *something* about AI, but it's such a complicated issue that they'll totally misunderstand the issues at play"
"Science-fiction narratives have poisoned the well – everyone will misunderstand the real problems"