Encore post today: Want to win the AGI race? Solve alignment.
Look, I really don't want Xi Jinping Thought to rule the world.
But practically, society cares a lot about safety. To deploy your AGI systems, people will demand confidence that they're safe.
Don't underestimate the endogenous societal response. Things will get crazy, and people will pay attention.
AI risk/AI safety is already going mainstream. People have been primed by sci-fi; all the CEOs have secretly believed in it for years.
Mar 29, 2023 • 7 tweets • 4 min read
New post: Nobody's on the ball on AGI alignment
With all the talk about AI risk, you'd think there's a crack team on it. There's not.
- There are far fewer people working on it than you might think
- The research is very much not on track
(But it's a solvable problem, if we tried!)
There are ~300 alignment researchers in the world (counting generously).
By comparison, there were 30,000 attendees at ICML alone (a single ML research conference).
OpenAI has ~7 people on its scalable alignment team.
There just aren't many great researchers out there focused on this.
Mar 29, 2023 • 5 tweets • 2 min read
Fwiw, I think this is a bad idea.
Models aren’t actually dangerous yet. It risks “crying wolf.” Keep the powder dry for if/when we face real x-risk.