So after all these hours talking about AI, in these last five minutes I am going to talk about:
Horses.
Engines, steam engines, were invented around 1700.
And what followed was 200 years of steady improvement, with engines getting 20% better a decade.
For the first 120 years of that steady improvement, horses didn't notice at all.
Then, between 1930 and 1950, 90% of the horses in the US disappeared.
Progress in engines was steady. Equivalence to horses was sudden.
But enough about horses. Let's talk about chess!
Folks started tracking computer chess in 1985.
And for the next 40 years, computer chess would improve by 50 Elo per year.
That meant in 2000, a human grandmaster could expect to win 90% of their games against a computer.
But ten years later, the same human grandmaster would lose 90% of their games against a computer.
Progress in chess was steady. Equivalence to humans was sudden.
Enough about chess! Let's talk about AI.
Capital expenditure on AI has been pretty steady.
Right now we're - globally - spending the equivalent of 2% of US GDP on AI datacenters each year.
That number seems to have steadily been doubling over the past few years.
And it seems - according to the deals signed - likely to carry on doubling for the next few years.
But from my perspective, from equivalence to me, it hasn't been steady at all.
I was one of the first researchers hired at Anthropic.
This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
Then in December, Claude finally got good enough to answer some of those questions for us.
In December, it was some of those questions. Six months later, 80% of the questions I used to be asked had disappeared.
Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as my fellow old-timers and I ever did.
Now. Answering those questions was only part of my job.
But while it took horses decades to be overcome, and chess masters years, it took me all of six months to be surpassed.
Surpassed by a system that costs one thousand times less than I do.
A system that costs less, per word thought or written, than it'd cost to hire the cheapest human labor on the face of the planet.
And so I find myself thinking a lot about horses, nowadays.
In 1920, there were 25 million horses in the United States, 25 million horses totally oblivious to two hundred years of progress in mechanical engines.
And not very long after, 93 percent of those horses had disappeared.
I very much hope we'll get the two decades that horses did.
But looking at how fast Claude is automating my job, I think we're getting a lot less.
This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.
All opinions are my own and not those of my employer.
Principal result is that by studying a sequence of small problems in ML, I could predict the outcome of experiments on orders-of-magnitude larger problems 🤯
I worked on Hex. Hex is a board game with all the strategic depth of Go, but a much simpler rule set. Crucially, Hex on small boards is easy, and Hex on big boards is hard!
I wrote a fast, all-GPU version of AlphaZero, and used it to train ~200 different neural nets across a bunch of board sizes. Plotted together, the best-performing nets at each level of compute form a steady trend: the *compute frontier*.
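A rough sketch of what I mean by a compute frontier - the run names and numbers below are made up, they're just there to show the recipe: pool every (compute, Elo) checkpoint from every run, sort by compute, and take the running-best Elo.

```python
# Hypothetical checkpoints: (training compute, Elo) pairs from a few runs.
# The real experiments trained ~200 nets; three runs are enough to show the idea.
runs = {
    "3x3-tiny":  [(1e13, -800), (1e14, -400), (1e15, -150)],
    "5x5-small": [(1e14, -700), (1e15, -250), (1e16,  100)],
    "7x7-big":   [(1e15, -600), (1e16,  -50), (1e17,  350)],
}

# Pool all checkpoints, sort by compute, and take the running-best Elo.
points = sorted(pt for checkpoints in runs.values() for pt in checkpoints)

frontier, best = [], float("-inf")
for compute, elo in points:
    best = max(best, elo)
    frontier.append((compute, best))

for compute, elo in frontier:
    print(f"{compute:.0e} FLOP -> best Elo so far: {elo:+.0f}")
```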
I can't recall any _techniques_ that knocked me off my chair, but there have been a couple of papers on training phenomena which have had a serious impact on how I think about RL:
'Meta learner's dynamics are unlike learners': you ask a regular NN to learn a transformation and it'll learn the component with the largest eigenvalue first, then the second largest, etc etc. A meta-learner will learn all the components simultaneously! (There's a toy sketch of the regular-NN half of this below.) arxiv.org/abs/1905.01320
'Ray Interference': whenever an agent can choose between something it's good at and something it's bad at, it'll focus on the thing it's good at and so make no progress on the thing it's bad at. Obvious when it's said like that, but was a revelation to me! (Also sketched below.) arxiv.org/abs/1904.11455
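On the first of those, here's a toy illustration - my own sketch, not the paper's setup: plain gradient descent on a linear regression problem whose inputs have very different variances picks up the large-eigenvalue direction almost immediately, and the small-eigenvalue one only much later.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs whose covariance eigenvalues are roughly 9, 1 and 0.04 along the axes.
X = rng.normal(size=(2000, 3)) * np.array([3.0, 1.0, 0.2])
w_true = np.array([1.0, 1.0, 1.0])
y = X @ w_true

w = np.zeros(3)
lr = 0.05
for step in range(2001):
    if step % 400 == 0:
        print(step, np.round(w, 3))        # w[0] converges first, w[2] last
    grad = X.T @ (X @ w - y) / len(X)      # gradient of the (halved) mean squared error
    w -= lr * grad
```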
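And a cartoon of the ray-interference dynamic - again my own toy, not the paper's experiments: if progress on each skill scales with how competent you already are at it (because on-policy data only comes from things you already do) times the headroom left, then skills get learned one at a time, with long plateaus in between.

```python
import numpy as np

# Competence on two skills; the agent starts slightly better at skill A.
p = np.array([0.2, 0.001])
lr = 0.3
for step in range(41):
    if step % 5 == 0:
        print(step, np.round(p, 3))   # A saturates while B sits on a plateau
    p += lr * p * (1 - p)             # progress ~ competence * headroom
```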