Did you guys know there's a 24-author paper by EAs, for EAs, about how Totalitarianism is absolutely necessary to prevent AI from killing everyone?
Let's go through it together 🧵
This is a beautiful paper. It is beautiful because it is a bunch of people starting from the EA position about existential risk and independently coming to the conclusion that total authoritarianism is necessary.
Since this sounds like an exaggeration, I will quote verbatim, or rather post screenshots verbatim, for most of this thread.
The crux of the argument: AI creates so much innovation that it can’t be controlled top-down. It is technology beyond centralized command.
They predictably call for exactly the kind of regulatory capture most convenient to OpenAI, Deepmind, and other large players.
They list some costs and benefits here. All of the citations about positive examples of AI are demonstrations of AI in the real world, all of the negative citations are just their own hypothetical scenarios constructed with no basis in reality.
Remember how they said earlier that it’s hard to define or prove harm, only the “possibility” of harm? That’s the key to the coming crackdown.
At this point everyone in the audience should be aware that these are not just powerless academics, these are members of think tanks, government agencies, companies, and corporate boards with real power.
Here they list four concerns which I'll rebut one by one.
Re 1: the limiting factor in designing new biological weapons is equipment, safety, and not killing yourself with them. No clue why this obviously false talking point is trotted out by EAs so often.
Re 2: Not just wrong but the complete opposite of the truth. Based on an incorrect understanding of the legacy press. fromthenew.world/p/ai-threatens…
Re 3: Cyberattacks resulting from machine learning adoption are real, but far from catastrophic. Think of the argument: people’s ML algorithms will be hacked and become worse than not having the ML algorithm at all? Does anyone believe this circular argument about any technology?
Re 4: The footnote is just linking to their hypotheticals again, no real examples.
I'll give some praise: this paper describes an actual plan that would indeed put AI companies under total government control. It would work if not stopped by legislators, voters, or courts. They are not playing around.
It's interesting to note that they do go to some length to disclaim that they don't want the even harder forms of totalitarianism that characterize "AI Ethics" organizations. They only want to crack down on companies, not users. Which I guess is worth something?
They lay out three obstacles to their plans. If you pause for a moment and read the lines carefully, you will realize they are all synonyms for freedom.
An equivalent reading:
The Unexpected Capabilities Problem: ML is easily used to create innovation.
The Deployment Safety Problem: ML is easy to update and build upon.
The Proliferation Problem: People have a First Amendment right to share ML.
They made a diagram of these problems and how they plan to deal with them, which looks like an oppositional chart made by Roosevelt-style trust busters to show how a monopoly plans to control everything.
In case this wasn’t clear they also directly say they want regulatory capture in the next paragraph:
Did they think people wouldn't realize that the "experts" are large companies (OpenAI, Google, etc.) and that they're directly advocating for them to collude with government?
I said earlier that their plan was not as totalitarian as the AI Ethics scam artists, but they're willing to work with them.
Other than that this is classic entrenchment of political constituencies – paying off and subsidizing people who are ideologically loyal. Excellent use of taxpayer dollars.
Any endorsement of the EU crackdown strategy should be immediately rejected by any remotely sane American politician.
I have an article in @PirateWires criticizing the broader framework of crackdowns as China-lite and in some cases China-mega, which has devastated the European tech sector. piratewires.com/p/the-costs-of…
My typical style involves more commentary and forecasting about the implications of something like this, but not today. Sometimes there is no stronger evidence than to let the guilty speak for themselves.
I guess I should thank EAs for saying the quiet part out loud: “Totalitarian crackdowns are necessary, we want to unify all companies through regulatory capture, freedom is the enemy and must be eliminated” all in one paper.
I expect one category of reply because I've already encountered it in real life: "But Brian, the article is correct, if we don't do totalitarianism we'll all die!"