Basically the debate comes down to whether the trend on this chart will continue:
Other meaningful arguments against:
• GPT-5/6 disappoint due to diminishing data quality
• The gap between answering questions and generating novel insights could be huge
• Persistent perceptual limitations hamper computer use (Moravec's paradox)
• Benchmarks mislead due to data contamination & difficulty capturing real-world tasks
• Economic crisis, Taiwan conflict, or regulatory crackdowns delay progress
• Unknown bottlenecks (planning fallacy)
My take: It's remarkably difficult to rule out AGI before 2030.
Not saying it's certain—just that it could happen with only an extension of current trends.
People are saying you shouldn't use ChatGPT, citing statistics like:
* A ChatGPT search emits 10x as much CO2 as a Google search
* It uses 200 Olympic swimming pools' worth of water per day
* Training an AI model emits as much CO2 as 200 plane flights from NY to SF
These are bad reasons not to use ChatGPT... 🧵
1/ First, we need to compare ChatGPT to other online activities.
It turns out its energy & water consumption is tiny compared to things like streaming video.
Rather than quit ChatGPT, you should quit Netflix & Zoom.
2/ Second, our online activities use a relatively tiny amount of energy – the virtual world is far more energy efficient than the real one.
If you want to cut your individual emissions, focusing on flights, insulation, electric cars, buying fewer things, etc. will achieve 100x more (rough numbers sketched below).
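To make the comparison concrete, here's a back-of-envelope sketch. Every figure in it is a rough, commonly cited estimate I'm assuming for illustration – not from the original thread, and real numbers vary a lot by model, data centre, and grid:

```python
# Back-of-envelope energy/emissions comparison.
# Every figure below is an illustrative, commonly cited estimate,
# NOT a measurement - swap in numbers you trust.

WH_PER_CHATGPT_QUERY = 3.0      # assumed ~3 Wh per query
WH_PER_GOOGLE_SEARCH = 0.3      # assumed ~0.3 Wh (hence the "10x" stat)
WH_PER_HOUR_STREAMING = 80.0    # assumed ~0.08 kWh per hour of video
KG_CO2_PER_KWH = 0.4            # assumed grid intensity, kg CO2e/kWh
KG_CO2_NY_SF_FLIGHT = 600.0     # assumed one-way per-passenger footprint

queries_per_day = 20
streaming_hours_per_day = 3.0

chatgpt_wh = queries_per_day * WH_PER_CHATGPT_QUERY             # 60 Wh/day
streaming_wh = streaming_hours_per_day * WH_PER_HOUR_STREAMING  # 240 Wh/day
print(f"Streaming vs ChatGPT: {streaming_wh / chatgpt_wh:.1f}x more energy/day")

annual_chatgpt_kg = chatgpt_wh * 365 / 1000 * KG_CO2_PER_KWH    # ~9 kg CO2e/yr
print(f"A year of ChatGPT: {annual_chatgpt_kg:.0f} kg CO2e")
print(f"One NY-SF flight: {KG_CO2_NY_SF_FLIGHT / annual_chatgpt_kg:.0f}x a year of ChatGPT")
```

Even with generous error bars on each assumed figure, the ordering survives: a single flight swamps a year of chatbot queries.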
The AI safety community has grown rapidly since the ChatGPT wake-up call, but available funding doesn't seem to have kept pace.
What's more, a more recent dynamic has created even better funding opportunities – one I witnessed first-hand in a recent grantmaking round…
1/ Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures.
But they’ve recently stopped funding several categories of work:
a. Republican think tanks
b. Post-alignment work like digital sentience
c. The rationality community
d. High school outreach
2/ They're also not fully funding:
e. Technical safety non-profits
f. Many non-US think tanks
g. Political campaigns (which foundations legally can't donate to)
h. Nuclear security
i. Other organisations they've decided are below their funding bar
Well maybe we all die. Then all you can do is try to enjoy your remaining years.
But let’s suppose we don’t. How can you maximise your chances of surviving and flourishing in whatever happens after?
The best ideas I've heard so far: 🧵
1/ Seek out people who have some clue what's going on.
Imagine we're about to enter a period like COVID – life is upended, and every week there are confusing new developments. Except it lasts a decade. And things never return to normal.
During COVID, it was really helpful to follow people who were ahead of the curve and could reason under uncertainty. Find the same kind of people, but for AI.
2/ Save as much money as you can.
AGI probably causes wages to increase initially, but eventually they collapse. Once AI models can deploy energy and other capital more efficiently to do useful things, there’s no reason to employ most humans any more.
You'll then need to live off whatever you've saved for the rest of your life (toy runway math below).
The good news is you have one last chance to make bank in the upcoming boom.
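How long would savings actually last? A toy sketch, using a standard annuity-depletion formula with hypothetical inputs (the savings pot, spending level, and real return are all placeholders, not figures from the thread):

```python
import math

# Toy runway calculation: if wages eventually go to ~zero, how long do
# savings last? All inputs are hypothetical placeholders.

def years_of_runway(savings: float, annual_spend: float,
                    real_return: float = 0.03) -> float:
    """Years until savings deplete, assuming constant real spending and
    a constant real return (standard annuity-depletion formula)."""
    r = real_return
    if r == 0:
        return savings / annual_spend
    if savings * r >= annual_spend:
        return math.inf  # investment returns alone cover spending forever
    # Solve savings = annual_spend * (1 - (1 + r) ** -n) / r for n
    return -math.log(1 - savings * r / annual_spend) / math.log(1 + r)

# e.g. $500k saved, $30k/yr real spending, 3% real return -> ~23 years
print(f"{years_of_runway(500_000, 30_000):.1f} years of runway")
```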
Just returned to China after 8 years away (after visiting a lot 2008-2016). Here are some changes I saw in tier 1/2 cities 🇨🇳
1/ Much more politeness: people actually queue, there's less spitting, and I was only barged into once or twice.
But Beijing still has doorless public bathrooms without soap.
2/ Many street vendors have been cleared out. Of the 30 clubs that used to exist in a tower block in Chengdu, only 1 survives. Cities now feel more similar to other rich countries.
10 points about AI in China (from my recent 2-week visit) 🇨🇳
And why calls for a Manhattan Project for AI could be self-defeating.
1/ China's AI bottleneck isn't compute – it's government funding. Despite export controls, labs can access both legal NVIDIA A800s and black-market H100s. Cloud costs are similar to the West.