Our 6th installment is one of the most exciting years I can remember. The #stateofai report covers everything you *need* to know across research, industry, safety and politics.
There’s lots in there, so here’s my director’s cut 🧵
2023 was of course the year of the LLM, with the world being stunned by @OpenAI’s GPT-4.
GPT-4 succeeded in beating every other LLM - not only on classic AI benchmarks, but also on exams designed for humans.
We’re also seeing a move away from openness, amid safety and competition concerns.
@OpenAI published a very limited technical report for GPT-4, @Google published little on PaLM2, @AnthropicAI simply didn’t bother for Claude…or Claude 2.
However, @AIatMeta and others are keeping the open source flame burning by producing and releasing competitive open LLMs that match many of GPT-3.5’s capabilities.
Judging by the leaderboards over at @HuggingFace, open source is more vibrant than ever, with downloads and model submissions rocketing to record highs.
Remarkably, in the last 30 days Llama models have been downloaded more than 32M times on Hugging Face 🚀
While we have many different benchmarks (largely academic) to assess the performance of LLM systems, it often feels like the eval to rule all evals is one with the utmost scientific and engineering grounding: “vibes”
Beyond the excitement of the LLM vibesphere, researchers, including teams at @Microsoft, have been exploring the potential of small language models, finding that models trained on highly specialized datasets can rival competitors 50x their size.
This work might become all the more urgent if the team over at @EpochAIResearch are correct.
They’ve predicted that we risk exhausting the stock of high-quality language data in the next *two years* - prompting labs to explore alternative sources of training data.
All of this work means it’s a good time to be in the hardware business, especially if you’re @nvidia.
GPU demand drove them into the $1T market cap club and their chips are used 19x more in AI research than *all the alternatives combined*.
While @nvidia continues to ship new chips, their older GPUs exhibit remarkable lifetime value.
The V100, released in 2017, was still the most popular GPU in AI research papers in 2022. If it falls out of use in 5 years’ time, it will have served a full decade.
In perhaps the least surprising news at this point, ChatGPT is one of the fastest growing internet products ever.
But data from @sequoia shows there is reason to doubt the staying power of GenAI for the moment - with shaky retention rates across everything from image generation to AI companions.
Outside the world of consumer software, there are signs that GenAI could accelerate progress in the world of embodied AI.
@wayve_ai’s GAIA-1 displays impressive generalization and could act as a powerful tool for training and validating autonomous driving models.
The market for AI-first defense is roaring to life as militaries rush to modernize their capabilities in response to the asymmetric warfare we see in Ukraine.
However, the clash between new technology and old incumbents is making it hard for new entrants to get their foot in the door.
These successes aside, the weight of the venture industry is resting on the shoulders of GenAI, which is holding up the sky of the tech private markets like Atlas.
Without the GenAI boom, AI investments would’ve crashed by 40% versus last year.
The authors of the landmark paper that introduced transformer-based neural nets are living proof of this - the transformer mafia have collectively raised billions of dollars in 2023 alone.
We’ve updated our popular slides from last year :-)
The same is true of the DeepSpeech2 team at @Baidu_Inc's Silicon Valley AI Lab.
Their work on deep learning for speech recognition showed us the scaling laws that now underpin large-scale AI.
Much of the team went on to be founders or senior execs at leading ML companies.
Many of the most high-profile blockbuster fundraises weren’t led by traditional VC firms at all.
2023 was the year of corporate venture, with Big Tech putting its war chest to effective use.
Unsurprisingly, billions of dollars of investment and huge leaps forward in capabilities have placed AI at the top of policymakers’ agendas.
The world is clustering around a handful of regulatory approaches - ranging from the light-touch through to the highly restrictive.
Potential proposals for global governance have been floated, with an alphabet soup of institutional acronyms being invoked as precedent.
The UK’s AI Safety Summit, being organized by @matthewclifford and others, may help start to crystallize some of this thinking.
Past #stateofai reports warned that safety was being neglected by the big labs.
2023 was the year of the x-risk debate, with the open vs. closed debate intensifying among researchers and the extinction risk making headlines.
…needless to say, not everyone agrees - with @ylecun and @pmarca emerging as the skeptics-in-chief.
Unsurprisingly, policymakers are alarmed and have been trying to build out their knowledge of potential risks directly.
The UK has moved first to set up a dedicated Frontier AI Taskforce led by @soundboy, and the US launched congressional investigations.
As ever, in the spirit of transparency, we graded last year’s predictions - we scored 5/9
✅ on LLM training, GenAI/audio, Big Tech going all in on AGI, alignment investment, and training data
❌ for multi-modal research, biosafety lab regulation, and doom for semis start-ups
Here are our 10 predictions for the next 12 months! Covering:
- GenAI/film-making
- AI and elections
- Self-improving agents
- The return of IPOs
- $1 billion+ models
- Competition investigations
- Global governance
- Banks + GPUs
- Music
- Chip acquisitions
The report is a team effort, and we were one member short, with @soundboy stepping back to focus on the UK’s Frontier AI taskforce.
Many thanks to @osebbouh for his 3rd year, along w/@corina_gurau and @chalmermagne for their debut appearances.
New on @airstreetpress: @percyliang of @stanford and @togethercompute, who joined our @stateofaireport launch in SF a few weeks ago, answers a few questions on truly open AI.
We talk about why it matters, where the field’s going wrong and some solutions.
First up, the term ‘open source’ is often a bit of a misnomer.
If we apply the bar for open source that we use for most software, LLMs fail it.
At the moment, it’s hard to interpret or compare models and claimed capabilities fairly.
It’s already proving tough to replicate many frontier labs’ advertised performance.
Our seventh installment is our biggest and most comprehensive yet, covering everything you *need* to know about research, industry, safety and politics.
As ever, here's my director’s cut (+ video tutorial!) 🧵
For a while, it looked like @OpenAI’s competitors had succeeded in closing the gap, with frontier lab performance converging significantly as the year went on…
…but it was not to last, as inference-time compute and chain-of-thought drove stunning early results from o1.
Open source is one of the biggest drivers of progress in software - AI would be unrecognizable without it.
However, it is under existential threat from both regulation and well-funded lobby groups.
The community needs to defend it vigorously. 🧵
While open source may win a partial stay-of-execution in the EU AI Act, a large number of well-funded lobbying organizations are pushing to ban existing open source models outright.
And publication and disclosure norms are often being undermined on, frankly, flimsy safety grounds.
Summer is my cue to start pulling together narratives for @stateofaireport.
By '20, it was clear to me that biology was experiencing its "AI moment": a flurry of AI+bio papers and AlphaFold 2.
In summer '21, I dove deeper and crossed paths with Ali's work at @SFResearch...
In a preprint entitled "Deep neural language modeling enables functional protein generation across families" Ali's team showed that AI can learn the language of biology to create artificial proteins that are both functional and unseen in nature.
Without an efficient engine to transform our R&D spend into real-world companies and products, how are we to see British inventions improve lives, deliver value to our society and strengthen our economy?
<5% of the £24B raised by UK startups in 2022 went to spinouts.
I’ve collected data from more than 200 founders via spinout.fyi, a website I set up to monitor spinout performance.
Too many founders spend months, even years, stuck in opaque negotiations with their university, wielding no bargaining power.