Est. 2018, @soundboy and I compile the most important work in AI research, industry, talent, and politics to inform conversation about the #stateofai. Our report is open-access to all.
This year, we have seen AI become increasingly pivotal to breakthroughs in everything from drug discovery (ref: @exscientiaAI and @RecursionPharma -- our 2020 Report IPO predictions!) to mission-critical infrastructure like electricity grids and logistics warehouses.
Working with @OpenClimateFix, the UK's @NationalGridESO managed to halve the error of its electricity demand forecasts using a transformer-based prediction system. This could lead to lower carbon emissions and costs.
In industrial facilities across 30 cities in 15 countries, @intenseye's real-time computer vision software protects employees from >35 types of health and safety incidents that would otherwise go unseen.
Computer vision is spreading across ever more visual tasks, from KYC checks on new customers joining trading platforms en masse during the pandemic to the interpretation of 3D medical scans. Model-in-the-loop training for high-quality data comes to the fore @V7Labs
And as the world moved online almost overnight, putting our logistics infrastructure to the test, deep learning systems helped automate 98% of stock replenishment decisions for online grocers every day @OcadoTechnology
This year’s report looks particularly at the emergence of transformer models, a technique that focuses machine learning algorithms on the most important relationships between data points, extracting meaning more comprehensively for better predictions. Starting in NLP, they're now everywhere.
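For the curious: the core mechanism behind transformers is attention, where every data point is scored against every other and the outputs are relevance-weighted mixes. A minimal, purely illustrative NumPy sketch (not from the report; the function name is my own):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weigh each value by how relevant its key is to each query."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance scores
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted mix of values

# Three tokens with 4-dim embeddings; self-attention lets each
# token attend to every other token in the sequence.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

Stacking many of these attention layers (plus feed-forward layers) is, at a high level, what lets the same architecture work on text, proteins, and images alike.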
From powering breakthroughs in protein structure prediction @DeepMind
To multimodal self-supervision, zero-shot learning, and image generation @OpenAI
And while AI’s growing impact on society and the economy is now evident, our report highlights that research into AI safety and the impact of AI still lags behind its rapid commercial, civil, and military deployment.
Notably, <100 people work on AI Alignment across 7 leading AI orgs.
AI researchers have traditionally seen the AI arms race as a figurative one -- simulated dogfights between competing AI systems carried out in labs -- but that is changing with reports of recent use of autonomous weapons by various militaries.
Governments are revving up not only the rhetoric, but matching it with real money.
Meanwhile, new governance experiments are taking shape in the AI ecosystem: @AnthropicAI as a public benefit corporation, @huggingface as an open source private company, or even EleutherAI as an open source Discord server-based community with no company attached.
🦄 In industry, there are more AI unicorns than ever before -- 182 by our latest count -- that total $1.3T of combined enterprise value🔥. This would have been unfathomable back in 2018 when we first created this report @dealroomco
Importantly, we saw a huge volume of exits in the last 12 months -- €750B across M&A, secondaries, IPOs, SPACs -- whether it's Nuance/MSFT or IPOs for @SentinelOne, @Darktrace, @RecursionPharma, @exscientiaAI and more...@dealroomco
But before we take a peek into our predictions for 2021, let's review those from 2020!🚦 5/8 = YES! 🤓 2/8 = NOPE :-(
1/8= sorta, kinda...
OK, feeling confident now, let's look at 2021...
Here are @stateofaireport predictions for 2021: a mix across technical breakthroughs, politics, and industry news! What do you think?
The @stateofaireport is always a collaborative project designed as a public good and we’re incredibly grateful to @osebbouh - our star Research Assistant - as well as our Contributors and Reviewers, all of whom played a part in making the report what it is.
OK, now head over to stateof.ai to read all 188 slides at your leisure and let us know what you think! Thank you 🙏
new on @airstreetpress: @percyliang of @stanford and @togethercompute, who joined our @stateofaireport launch in SF a few weeks ago, answers a few questions on truly open AI.
We talk about why it matters, where the field’s going wrong and some solutions.
First up, the term ‘open source’ is often a bit of a misnomer.
If we apply to LLMs the open-source bar we use for most software, they fail.
At the moment, it’s hard to interpret or compare models and claimed capabilities fairly.
It’s already proving tough to replicate many frontier labs’ advertised performance.
Our seventh installment is our biggest and most comprehensive yet, covering everything you *need* to know about research, industry, safety and politics.
As ever, here's my director’s cut (+ video tutorial!) 🧵
For a while, it looked like @OpenAI’s competitors had succeeded in closing the gap, with frontier lab performance converging significantly as the year went on…
…but it was not to last, as inference-time compute and chain-of-thought drove stunning early results from o1.
Open source is one of the biggest drivers of progress in software - AI would be unrecognizable without it.
However, it is under existential threat from both regulation and well-funded lobby groups.
The community needs to defend it vigorously. 🧵
While open source may win a partial stay-of-execution in the EU AI Act, a large number of well-funded lobbying organizations are trying to ban already existing open source models.
And publication and disclosure norms are often being undermined on, frankly, flimsy safety grounds.
Our 6th installment covers one of the most exciting years I can remember. The #stateofai report has everything you *need* to know across research, industry, safety and politics.
There’s lots in there, so here’s my director’s cut 🧵
2023 was of course the year of the LLM, with the world being stunned by @OpenAI’s GPT-4.
GPT-4 succeeded in beating every other LLM -- both on classic AI benchmarks and on exams designed for humans.
We’re also seeing a move away from openness, amid safety and competition concerns.
@OpenAI published a very limited technical report for GPT-4, @Google published little on PaLM2, @AnthropicAI simply didn’t bother for Claude…or Claude 2.
Summer is my cue to start pulling together narratives for @stateofaireport.
By '20, it was clear to me that biology was experiencing its "AI moment": a flurry of AI+bio papers and AlphaFold 2.
In summer '21, I dove deeper and crossed paths with Ali's work at @SFResearch...
In a preprint entitled "Deep neural language modeling enables functional protein generation across families", Ali's team showed that AI can learn the language of biology to create artificial proteins that are both functional and unseen in nature.