Host of the 80,000 Hours Podcast.
Exploring the inviolate sphere of ideas one interview at a time: https://t.co/2YMw00bkIQ
Apr 23 • 16 tweets • 12 min read
A new legal letter aimed at OpenAI lays out in stark terms the money and power grab OpenAI is trying to trick its board members into accepting — what one analyst calls "the theft of the millennium."
The simple facts of the case are both devastating and darkly hilarious.
I'll explain for your amusement.
The letter 'Not For Private Gain' is written for the relevant Attorneys General and is signed by 3 Nobel Prize winners among dozens of top ML researchers, legal experts, economists, ex-OpenAI staff and civil society groups. (I'll link below.)
It says that OpenAI's attempt to restructure as a for-profit is simply illegal, just as you might naively expect.
It then asks the Attorneys General (AGs) to take some extreme measures I've never seen discussed before. Here's how they build up to their radical demands.
For 9 years OpenAI and its founders went on ad nauseam about how non-profit control was essential to:
1. Prevent a few people concentrating immense power
2. Ensure the benefits of artificial general intelligence (AGI) were shared with all humanity
3. Avoid the incentive to risk other people's lives to get even richer
They told us these commitments were legally binding and inescapable. They weren't in it for the money or the power. We could trust them.
"The goal isn't to build AGI, it's to make sure AGI benefits humanity" said OpenAI President Greg Brockman.
And indeed, OpenAI’s charitable purpose, which its board is legally obligated to pursue, is to “ensure that artificial general intelligence benefits all of humanity” rather than advancing “the private gain of any person.”
100s of top researchers chose to work for OpenAI at below-market salaries, in part motivated by this idealism. It was core to OpenAI's recruitment and PR strategy.
Now along comes 2024. That idealism has paid off. OpenAI is one of the world's hottest companies. The money is rolling in.
But now suddenly we're told the setup under which they became one of the fastest-growing startups in history, the setup that was supposedly totally essential and distinguished them from their rivals, and the protections that made it possible for us to trust them, ALL HAVE TO GO ASAP:
1. The non-profit's (and therefore humanity at large’s) right to super-profits, should they make tens of trillions? Gone. (Guess where that money will go now!)
2. The non-profit’s ownership of AGI, and ability to influence how it’s actually used once it’s built? Gone.
3. The non-profit's ability (and legal duty) to object if OpenAI is doing outrageous things that harm humanity? Gone.
4. A commitment to assist another AGI project if necessary to avoid a harmful arms race, or if joining forces would help the US beat China? Gone.
5. Majority board control by people who don't have a huge personal financial stake in OpenAI? Gone.
6. The ability of the courts or Attorneys General to object if they betray their stated charitable purpose of benefitting humanity? Gone, gone, gone!
Screenshotting from the letter:
(I'll do a new tweet after each image so they appear right.) 1/
What could possibly justify this astonishing betrayal of the public's trust, and all the legal and moral commitments they made over nearly a decade, while portraying themselves as really a charity? On their story it boils down to one thing:
They want to fundraise more money.
$60 billion or however much they've managed isn't enough, OpenAI wants multiple hundreds of billions — and supposedly funders won't invest if those protections are in place.
But wait! Before we even ask if that's true... is giving OpenAI's business a fundraising boost a charitable pursuit that ensures "AGI benefits all humanity"?
Until now they've always denied that developing AGI first was even necessary for their purpose!
But today they're trying to slip through the idea that "ensure AGI benefits all of humanity" is actually the same purpose as "ensure OpenAI develops AGI first, before Anthropic or Google or whoever else."
Why would OpenAI winning the race to AGI be the best way for the public to benefit? No explicit argument is offered, mostly they just hope nobody will notice the conflation. 2/
Apr 11 • 5 tweets • 2 min read
Seriously staggering stuff here. 1/
Uncertainty at record highs, even greater than during the GFC or COVID. 2/
Mar 5 • 16 tweets • 5 min read
People are sleeping on huge news in the Musk vs OpenAI case today.
The judge finds that, if Musk's donation gives him legal standing, it's very likely she'd want to block their entire $100 billion non-profit-to-for-profit conversion!
I deep dive below. 1/
Musk is trying to stop the OpenAI business effectively converting from a non-profit to for-profit.
To do that he needs to prove both that he is being wronged in a way that allows him to bring a case to the court ('legal standing'), AND that the conversion to a for-profit is unacceptable, breaching the trust created when OpenAI accepted his donations. 2/
Sep 5, 2023 • 5 tweets • 1 min read
I'd always wondered why other species don't have a sense of disgust like humans, in order to avoid picking up pathogens and getting sick.
'Plagues upon the Earth' gave me my answer: humans suffer 10-100x as many pathogens as wild animals.
So hygiene is way more important for humans.
Humans suffer among the highest pathogen burdens of any species because we started congregating in cities, forming pathogen communities large enough for illnesses to circulate indefinitely.
Apr 17, 2023 • 24 tweets • 8 min read
Twenty-three reasons not to read newspapers and online 'news' sources. 🧵
1. News is overwhelmingly not relevant to any decisions you will make.
2. If the planet were four times as big, would it be sensible to read four times as much news to keep up with it all?