Benjamin Todd
Trying to understand AGI and what to do about it. Founder @80000Hours 🦑
Apr 4 11 tweets 4 min read
AGI by 2027?

I spent weeks writing this new in-depth primer on the best arguments for and against.

Starting with the case for...🧵

1. Company leaders think AGI is 2-5 years away.

They’re probably too optimistic, but shouldn't be totally ignored – they have the most visibility into the next generation of models.
Feb 3 15 tweets 3 min read
1/ Most AI risk discussion focuses on sudden takeover by super capable systems.

But when I imagine the future, I see a gradual erosion of human influence in an economy of trillions of AIs.

So I'm glad to see a new paper about those risks🧵

2/ We could soon be in a world with millions of AI agents, growing 10x per year. After 10 years, there are 1,000 AIs per person thinking 100x faster.

In that world, competitive pressure means firms are run more & more by AI.
Jan 21 7 tweets 3 min read
People are saying you shouldn't use ChatGPT due to statistics like:

* A ChatGPT search emits 10x a Google search
* It uses 200 olympic swimming pools of water per day
* Training AI emits as much as 200 plane flights from NY to SF

These are bad reasons to not use GPT...🧵

1/ First, we need to compare ChatGPT to other online activities.

It turns out its energy & water consumption is tiny compared to things like streaming video.

Rather than quit GPT, you should quit Netflix & Zoom.
Dec 22, 2024 7 tweets 3 min read
The AI safety community has grown rapidly since the ChatGPT wake-up, but available funding doesn’t seem to have kept pace.

What's more, there’s a more recent dynamic that’s created even better funding opportunities, which I witnessed in a recent grantmaking round..

1/ Most philanthropic (vs. government or industry) AI safety funding (>50%) comes from one source: Good Ventures.

But they’ve recently stopped funding several categories of work:

a. Republican think tanks
b. Post-alignment work like digital sentience
c. The rationality community
d. High school outreach
Dec 22, 2024 9 tweets 3 min read
How can you personally prepare for AGI?

Well maybe we all die. Then all you can do is try to enjoy your remaining years.

But let’s suppose we don’t. How can you maximise your chances of surviving and flourishing in whatever happens after?

The best ideas I've heard so far: 🧵

1/ Seek out people who have some clue what's going on.

Imagine we're about to enter a period like COVID – life is upended, and every week there are confusing new developments. Except it lasts a decade. And things never return to normal.

In COVID, it was really helpful to follow people who were ahead of the curve and could reason under uncertainty. Find the same but for AI.
Dec 1, 2024 16 tweets 4 min read
Just returned to China after 8 years away (after visiting a lot 2008-2016). Here are some changes I saw in tier 1/2 cities 🇨🇳

1/ Much more politeness: people actually queue, there's less spitting, and I was only barged once or twice.

But Beijing still has doorless public bathrooms without soap.
Nov 29, 2024 13 tweets 2 min read
10 points about AI in China (from my recent 2-week visit) 🇨🇳

And why calls for a Manhattan project for AI could be self-defeating.

1/ China's AI bottleneck isn't compute – it's government funding. Despite export controls, labs can access both legal NVIDIA A800s and black-market H100s. Cloud costs are similar to the West.

ft.com/content/10aacf…
May 11, 2023 11 tweets 3 min read
How much should you do what seems right to you, even if it seems extreme or controversial, vs. moderate your views based on other perspectives?

Moderate too much & you'll never do anything novel or ambitious. And "common sense" has often supported evil things in the past..

A 🧵

But acting on a narrow view of what's right can easily be dangerous - since you're most likely wrong - and has led to some of the worst atrocities in history.

So how much should you moderate vs. go with what seems right to you?
May 11, 2023 4 tweets 2 min read
It's been a wee bit frustrating to watch so many people start to take AI risk seriously after GPT-4..

..when it was clear these capabilities would arrive *eventually*, and before 2020 you could see deep learning had a good shot at producing them soon

arxiv.org/abs/1712.00409

Don't make the same mistake again. Imagine a far more advanced GPT-10 is here now. How worried *then* would you be about AI risk?

And how likely are these capabilities to arrive eventually?
Jul 18, 2022 15 tweets 3 min read
How anyone can practice effective altruism:

Effective altruism often gets simplified to specific actions (donate 10%; take one of these jobs), which can be alienating if you can't do these actions.

But it's actually a way of thinking that anyone can apply.

/1
Here's a series of steps for applying effective altruism to your life, which I've been toying with, and keep at the back of my mind when advising people.

It's designed to be applicable no matter which causes you focus on, your background, or degree of altruism.

/2
Jun 27, 2022 8 tweets 3 min read
The Metaculus forecasting community seems to think unaligned superintelligence is coming soon...

...but everything will be fine.

First, they forecast a 50% chance general AI arrives by 2038:
metaculus.com/questions/5121…

/1
Then after general AI arrives, they forecast a 75% chance of superintelligence within 7 years (2045):

metaculus.com/questions/4123…

/2
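Chaining those two forecasts gives a rough combined estimate. A quick sketch in Python (it simply multiplies the two probabilities, treating the conditional forecast as independent of when general AI arrives):

```python
# Combining the two Metaculus forecasts cited above (a rough sketch:
# multiplies the probabilities as if they were independent).
p_agi_by_2038 = 0.50    # forecast: 50% chance general AI arrives by 2038
p_si_within_7y = 0.75   # forecast: 75% chance of superintelligence within 7 years of AGI

p_si_by_2045 = p_agi_by_2038 * p_si_within_7y
print(f"{p_si_by_2045:.1%}")  # 37.5%
```

So on these numbers, the community is implying very roughly a 1-in-3 chance of superintelligence by around 2045.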
Jun 27, 2022 9 tweets 3 min read
Is AI coming sooner than we thought?

A thread..

(Tldr: a bit)

/1
There have been some amazing recent AI advances:

PaLM exhibited superhuman ability to explain jokes:


DALL-E can understand artistic styles:


Socratic models can combine common sense across domains:


/2
Jun 27, 2022 4 tweets 2 min read
People joke that fusion has been 30 years away for 30 years.

Here's an actual projection from 1976.

It says fusion was possible within 30 years, *if* funding was increased several-fold.

Actually funding decreased – a scenario they called "fusion never"

Despite this..

1/3

Despite little funding, progress towards fusion 1970-2000 was good.

(The 'triple product' is a measure of the conditions that produce fusion. If you get it high enough, you start producing enough energy for a commercial reactor - shown by the dashed lines.)

2/3
Mar 11, 2022 7 tweets 2 min read
The Fabian Society's 10-step playbook for creating democratic socialism ➡️ the welfare state:

1. Be a group of intellectuals
2. Host fun freewheeling fortnightly debates
3. Get really good at debating
4. Give a ton of talks
5. Recruit one of your generation's best writers

/1
6. Produce a ton of pamphlets explaining new ideas in a plainspoken way (with👌design)
7. Have your members enter local government
8. Wait several decades to work their way up
9. Have major influence on British politics.
10. From there influence India, Singapore, Nigeria etc.

/2
Feb 25, 2022 7 tweets 2 min read
If you live in a NATO city, is it time to leave town?

Suppose your life expectancy is 50 years, you're 10x more likely to survive a nuclear war in the countryside

and there's a 0.04% chance of a large nuclear war this year, then...



/1
You'd stand to gain a week of life expectancy by skipping town for a year.

This would only be worth it if your life isn't made 2% worse by relocating.

/2
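The back-of-envelope behind that "week of life expectancy", as a sketch. The 50-year, 10x, and 0.04% figures are the thread's; the absolute survival probabilities are illustrative, since the thread only gives the ratio:

```python
# Expected life-years gained by spending this year outside a NATO city.
life_expectancy = 50.0    # remaining life expectancy in years
p_war = 0.0004            # 0.04% chance of a large nuclear war this year
p_survive_city = 0.1      # illustrative: 10% survival in a city
p_survive_country = 1.0   # 10x more likely to survive in the countryside

gain_years = p_war * (p_survive_country - p_survive_city) * life_expectancy
print(f"{gain_years * 52:.2f} weeks")  # ~0.94, i.e. about one week

# Break-even: a year away is only worth it if it costs you less than
# this fraction of a year of quality of life.
print(f"{gain_years:.1%}")  # ~1.8%, roughly the thread's 2% threshold
```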
Feb 25, 2022 14 tweets 6 min read
Recent events are a good reminder that:

just as a major pandemic was a realistic possibility in our lifetimes..

...so is the chance of a great power conflict

and that could lead to nuclear war..

/1
What's the risk of a US-Russia nuclear war? Some estimates:

My colleague did a survey of surveys: 0.4% per year
rethinkpriorities.org/publications/h…

135 forecasters: 10% chance of large nuclear war by 2050.
metaculus.com/questions/3517…

8% chance of WW3 by 2100.
forum.effectivealtruism.org/posts/aSzxoj7i…

/2
Feb 18, 2022 14 tweets 5 min read
What's the most effective way to tackle climate change?

My take on research in effective altruism to date.

It seems that many popular approaches (eg recycling, planting trees) don't obviously work or are expensive ($100+ per tonne CO2), but...

/1
By focusing on more neglected, high-leverage solutions, you can probably reduce CO2 for under $1/tonne, and have 100-times the impact.

How?

/2
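The arithmetic behind that "100-times" claim, using the thread's own cost figures (the $1,000 budget is hypothetical):

```python
# Tonnes of CO2 averted per dollar at the thread's two cost estimates.
popular_cost = 100.0    # $/tonne for popular approaches (e.g. tree planting)
neglected_cost = 1.0    # $/tonne for neglected, high-leverage solutions

budget = 1000.0  # hypothetical donation in dollars
tonnes_popular = budget / popular_cost      # 10 tonnes
tonnes_neglected = budget / neglected_cost  # 1,000 tonnes

print(tonnes_neglected / tonnes_popular)  # 100.0
```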
Feb 8, 2022 14 tweets 6 min read
1) If you're a software engineer into effective altruism, please seriously consider quitting your regular tech company job and doing something with direct impact.

There are a lot more opportunities these days...

2) How to have a big impact as a software engineer:

1. Join an AI safety team
2. Work in biosecurity
3. Work at an impactful non-profit on tech or data
4. Specialise in information security
5. Data roles in politics
6. Expand into adjacent skills eg research, ops

More info:
Nov 25, 2021 11 tweets 3 min read
1) Utilitarians and deontologists actually agree about most things.

It's fun to debate philosophical puzzles like the trolley problem, but these are unrealistic edge cases.

In real life..

2) Deontologists agree that it's good to help more people.

John Rawls (most influential 20th century non-utilitarian?) said:

> All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy
Nov 23, 2021 8 tweets 2 min read
Now that there's $50bn committed to effective altruism, lots of small donors feel they can't do much good compared to billionaire mega-donors.

I think this is wrong for a couple of reasons:

1. Although the amount of funding aligned with effective altruism has grown dramatically, the cost-effectiveness of extra donations has only fallen a little.

Within global health, the bar has declined maybe 30%.

You can still do the equivalent of saving a life for ~$4,500.
Nov 23, 2021 4 tweets 1 min read
Four reasons why do-gooders should take bigger risks:

1. Personal goals (eg money, friends) have diminishing value, but impact doesn't.

So when it comes to doing good, low-probability, high-upside bets are more attractive.

2. In the world of doing good, impact is often driven by outliers.

So, it's more important to increase your chances of being an outlier than to make sure you succeed with confidence.

(As long as you've capped the downsides first.)