Breaking: Nobel laureates, law professors and former OpenAI employees release a letter to CA & DE Attorneys General saying OpenAI's for-profit conversion is illegal, and betrays its charter.
The letter details how the founders of OpenAI chose nonprofit control to ensure AGI would serve humanity, not shareholders.
Now, even as Altman says "AGI will probably get developed during this president’s term", control (and uncapped upside) is to be handed to investors, scrapping safeguards Altman told Congress were necessary in 2023.
The nonprofit would surrender its most powerful mission-achieving tool—control of the leading AGI lab—in exchange for an equity stake it already holds.
"Imagine a nonprofit with the mission of ensuring nuclear technology is developed safely and for the benefit of humanity selling its control over the Manhattan Project in 1943 to a for-profit entity so the nonprofit could pursue other charitable initiatives."
People are saying you shouldn't use ChatGPT due to statistics like:
* A ChatGPT search emits 10x as much as a Google search
* ChatGPT uses 200 Olympic swimming pools' worth of water per day
* Training an AI model emits as much as 200 plane flights from NY to SF
These are bad reasons to not use GPT...🧵
1/ First, we need to compare ChatGPT to other online activities.
It turns out its energy & water consumption is tiny compared to things like streaming video (rough numbers in the sketch below).
Rather than quit GPT, you should quit Netflix & Zoom.
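Here's a minimal back-of-envelope sketch of that comparison. Every constant is an assumption (rough, commonly cited estimates, not figures from this thread), except the 10x ratio, which comes from the stat quoted above:

```python
# Back-of-envelope: energy of ChatGPT queries vs. streaming video.
# All constants are rough, commonly cited estimates (assumptions,
# not measurements), except the 10x ratio quoted above.

GOOGLE_SEARCH_WH = 0.3                    # assumed ~0.3 Wh per Google search
CHATGPT_QUERY_WH = 10 * GOOGLE_SEARCH_WH  # the 10x stat from above -> ~3 Wh
STREAMING_WH_PER_HOUR = 80.0              # assumed ~80 Wh per hour of video
                                          # (device + network + data centre)

queries_per_hour_of_video = STREAMING_WH_PER_HOUR / CHATGPT_QUERY_WH
print(f"1 hour of streaming ~= {queries_per_hour_of_video:.0f} ChatGPT queries")
# -> ~27 queries: a daily video habit dwarfs typical ChatGPT use
```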
2/ Second, our online activities use a relatively tiny amount of energy – the virtual world is far more energy efficient than the real one.
If you want to cut your individual emissions, focusing on flights, insulation, electric cars, buying fewer things, etc. will achieve 100x more (rough sketch below).
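To make that 100x concrete, here's a similar hedged sketch comparing a year of heavy ChatGPT use with a single round-trip flight. All constants are assumed rough estimates, not figures from this thread:

```python
# Hedged sketch: a year of heavy ChatGPT use vs. one NY-SF round trip.
# Every constant is an assumed, commonly cited rough estimate.

CHATGPT_QUERY_WH = 3.0      # assumed ~3 Wh per query (10x a Google search)
GRID_G_CO2_PER_KWH = 400.0  # assumed global-average grid carbon intensity
QUERIES_PER_DAY = 10        # assumed heavy personal use

annual_kwh = CHATGPT_QUERY_WH / 1000 * QUERIES_PER_DAY * 365
annual_kg_co2 = annual_kwh * GRID_G_CO2_PER_KWH / 1000  # ~4.4 kg/year

FLIGHT_KG_CO2 = 1000.0      # assumed ~1 tCO2e for a NY-SF round trip,
                            # including non-CO2 warming effects

print(f"ChatGPT: ~{annual_kg_co2:.1f} kg CO2e/year")
print(f"One round trip ~= {FLIGHT_KG_CO2 / annual_kg_co2:.0f}x a year of queries")
# -> skipping a single flight beats quitting ChatGPT by ~200x
```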
The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn’t seem to have kept pace.
What's more, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed in a recent grantmaking round...
1/ Most (>50%) philanthropic AI safety funding (as opposed to government or industry money) comes from one source: Good Ventures.
But they’ve recently stopped funding several categories of work:
a. Republican think tanks
b. Post-alignment work like digital sentience
c. The rationality community
d. High school outreach
2/ They're also not fully funding:
e. Technical safety non-profits
f. Many non-US think tanks
g. Political campaigns (foundations can't donate to these)
h. Nuclear security
i. Other organisations they've decided are below their funding bar
Well, maybe we all die. Then all you can do is try to enjoy your remaining years.
But let’s suppose we don’t. How can you maximise your chances of surviving and flourishing in whatever happens after?
The best ideas I've heard so far: 🧵
1/ Seek out people who have some clue what's going on.
Imagine we're about to enter a period like COVID – life is upended, and every week there are confusing new developments. Except it lasts a decade. And things never return to normal.
In COVID, it was really helpful to follow people who were ahead of the curve and could reason under uncertainty. Find the same but for AI.
2/ Save as much money as you can.
AGI probably causes wages to increase initially, but eventually they collapse. Once AI models can deploy energy and other capital more efficiently to do useful things, there’s no reason to employ most humans any more.
You'll then need to live off whatever you've saved for the rest of your life.
The good news is you have one last chance to make bank in the upcoming boom.
Just returned to China after 8 years away (after visiting a lot from 2008 to 2016). Here are some changes I saw in tier 1/2 cities 🇨🇳
1/ Much more politeness: people actually queue, there's less spitting, and I was only barged into once or twice.
But Beijing still has doorless public bathrooms without soap.
2/ Many street vendors have been cleared out. Of the 30 clubs that used to exist in a tower block in Chengdu, only 1 survives. Overall, it's become more similar to other rich countries.