We completed the most comprehensive study of how economists and AI experts think AI will affect the U.S. economy.
They predict major AI progress—but no dramatic break from economic trends: GDP growth rates similar to today's and a moderate decline in labor force participation.
However, when asked to consider what would happen in a world with extremely rapid progress in AI capabilities by 2030, they predict significant economic impacts by 2050:
• Annualized GDP growth of 3.5% (compared to 2.4% in 2025)
• A labor force participation rate of 55% (roughly 10 million fewer jobs)
• 80% of wealth held by the top 10% (highest since 1939)
🧵 Here's what we found:
Economists expect substantial AI progress by 2030.
When asked to assign probabilities to slow, moderate, and rapid AI progress scenarios, they predict a:
• 39% chance of a slow progress scenario
• 47% chance of a moderate progress scenario
• 14% chance of a rapid progress scenario
Economists’ 61% chance of moderate or rapid AI progress by 2030 is roughly in line with other respondent groups (AI experts, superforecasters, and the general public).
As you can see below, even moderate progress is a significant advance from present-day AI capabilities.
Despite expecting significant AI progress, economists' overall (unconditional) forecasts for the economy stay close to today's trends.
Median economist forecasts:
• Annual GDP growth: 2.5% in 2030 and 2050 (compared to 2.4% in 2025)
• Labor force participation rate: 61% in 2030, 58% in 2050 (compared to 62.6% in 2025)
But, in a ‘rapid’ AI progress world, economists expect larger shifts (see red lines in figures below).
Why do economists assign a high chance to substantial AI progress, yet expect economic outcomes to shift only modestly?
In their written rationales, economists cited the following reasons:
• slow and uneven diffusion of AI across sectors
• infrastructure bottlenecks (energy, chips, data centers)
• demographic and geopolitical headwinds
• long lags between the discovery of general-purpose technologies and measured productivity gains
The median view is closer to "AI will take a long time to show up in macroeconomic statistics and will offset demographic headwinds" than "AI won't matter at all."
Economists' median forecast of 2.5% annualized GDP growth by 2030 is still higher than most comparable forecasts used by government agencies and the private sector, which tend to be closer to 2%.
In a rapid AI progress scenario—which economists assigned a 14% probability to—economists expect much larger effects:
• Annual GDP growth: 3.3% in 2030, 3.5% in 2050 (roughly comparable to 1992–2001)
• Total factor productivity growth: 2% in 2030, 2.5% in 2050 (close to post-WWII levels)
• Labor force participation rate: 59% in 2030, 55% in 2050 (lower than in the 1950s)
That’s a richer economy, but also one where many fewer people work.
Impacts on wealth inequality show similar patterns.
In the rapid scenario, economists expect the top 10% of households to hold:
• 75% of U.S. wealth in 2030
• 80% in 2050
Compounded over decades, small differences in GDP growth can produce large differences in prosperity.
Economists’ forecast of a 3.5% annual growth rate in the rapid scenario implies U.S. real GDP of $54.7 trillion in 2050, 25% larger than the $43.7 trillion implied by the 2.5% growth rate of the unconditional forecast.
That gap is roughly equivalent to the difference in U.S. GDP between 2016 and today.
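The compounding arithmetic can be sketched with constant growth rates. The 2025 baseline of $23.6 trillion is an assumption chosen to roughly match the survey's 2050 figures, not a number from the survey; and because the survey's rapid-scenario growth ramps from 3.3% in 2030 to 3.5% in 2050, a flat 3.5% slightly overstates the gap (≈27% here vs. the survey's 25%):

```python
# Illustrative compounding of two growth paths.
BASE_GDP_2025 = 23.6  # trillions of dollars; an assumed baseline, not a survey figure
YEARS = 25            # 2025 -> 2050

def compound(base: float, rate: float, years: int) -> float:
    """Real GDP after `years` of constant annual growth at `rate`."""
    return base * (1 + rate) ** years

unconditional = compound(BASE_GDP_2025, 0.025, YEARS)  # ≈ $43.8T
rapid = compound(BASE_GDP_2025, 0.035, YEARS)          # ≈ $55.8T

print(f"unconditional: ${unconditional:.1f}T, rapid: ${rapid:.1f}T")
print(f"rapid-scenario economy is {rapid / unconditional - 1:.0%} larger")
```

The point of the exercise: a one-percentage-point difference in annual growth, compounded for 25 years, changes the size of the economy by more than a quarter.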
Most expert disagreement isn't about whether we get powerful AI systems—it's about what powerful AI systems would do to the economy.
For economists' 2030 GDP growth forecasts:
• 78.7% of the total variance comes from each economist's own uncertainty about what will happen within a given scenario;
• 16.1% comes from disagreement between economists about economic outcomes conditional on a given level of AI progress;
• Only 5.2% is between scenarios, attributable to disagreement about AI capabilities themselves, as described by our (imperfect) scenarios.
We see similar patterns for all other outcomes, and across expert groups.
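This decomposition is an instance of the law of total variance applied twice. A minimal NumPy sketch with simulated data (the numbers of scenarios, economists, and the distributions are all hypothetical, not the survey data) shows how the three components are computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 scenarios x 50 economists. Each cell is one
# economist's forecast distribution under one scenario, summarized by
# a mean and a standard deviation.
n_scen, n_econ = 3, 50
cell_means = rng.normal(loc=[[2.0], [2.5], [3.3]], scale=0.4, size=(n_scen, n_econ))
cell_sds = rng.uniform(0.3, 0.8, size=(n_scen, n_econ))

# Law of total variance (uniform weights, balanced design):
within = (cell_sds ** 2).mean()               # each economist's own uncertainty per scenario
between_econ = cell_means.var(axis=1).mean()  # disagreement among economists within a scenario
between_scen = cell_means.mean(axis=1).var()  # disagreement across AI-progress scenarios

total = within + between_econ + between_scen
shares = np.array([within, between_econ, between_scen]) / total
print(shares.round(3))  # fractions of total forecast variance
```

With a balanced grid and uniform weights, the three components sum exactly to the total variance of the mixture, which is what lets the survey report them as shares.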
We also asked economists what they thought of six policies that might address the impact of rapid AI progress.
Economists strongly favor targeted measures such as worker retraining, whereas the general public supports both targeted programs and broader interventions, including a job guarantee and universal basic income.
The most favored policy across all respondent groups was retraining support, which economists estimated would raise annual GDP growth by 0.2 percentage points and the labor force participation rate by 1 percentage point in a world with rapid AI progress.
Thank you to all of our coauthors: @EzraKarger, Otto Kuusela, @Jabaluck, @Afinetheorem, @BasilHalperin, @toddrjones, @connacher_, @pawtrammell, @mattsreynolds1, @danmayland, Ria Viswanathan, Ananaya Mittal, Rebecca Ceppas de Castro, Josh Rosenberg, and @PTetlock
We hope this can be a useful tool for informing discourse on the economic impacts of AI. When you submit your forecasts, you will get a shareable image that compares your predictions to economists’ predictions. See example image below.
• • •
What do experts and superforecasters think about the future of AI research and development?
In Wave 4 of the Longitudinal Expert AI Panel (LEAP), we asked top AI experts to forecast progress in AI R&D, hiring, company valuations, data center buildout, and more.
Here’s what you need to know 🧵
📈 AI benchmark progress is advancing faster than experts expect
AI performance on hard coding tasks is a useful indicator of potential capability gains from self-improving AI R&D, in which models take an active role in improving AI itself.
The median forecaster significantly underestimated progress on LiveCodeBench Pro—a benchmark that tracks performance on tough programming tasks.
The median expert in our sample predicted state-of-the-art performance on LiveCodeBench Pro (Hard) of 14% in 2026 and 33% by the end of 2030. Since the survey closed, GPT-5.2 has already hit 33% on the benchmark.
A quarter of experts and superforecasters expect major progress on this coding benchmark, providing 50th percentile forecasts of at least 60% accuracy by 2030.
We plan to identify which forecasters were most accurate on questions like this to see what they believe about other topics.
🧑💻 Entry-level tech hiring may stay low for decades
Experts predict that the share of new hires with ≤1 year of experience at top tech firms will remain around 7% through 2040. Some respondents point to AI tools reducing the need for junior roles.
This is well below the 15% share of entry-level hires reached in 2019 and 2023.
🏆 In October, we invited external teams to submit to ForecastBench, our AI forecasting benchmark.
The challenge? Beat superforecasters—using any tools available (scaffolding, ensembling, etc.).
The result? External submissions are now the most accurate models on our leaderboard—though superforecasters still hold #1.
@xai's model (grok-4-fast) is the leading external submission, at #2.
One of Cassi's entries takes the #3 spot.
Here's what changed. 🧵
In October, we opened up ForecastBench’s tournament leaderboard to external submissions. Teams are free to use any tools they choose.
Several teams responded, including @xai, Cassi, @fractalai, @lightningrodai, and @_Mantic_AI. Thanks to all of them for participating on this challenging benchmark.
Models from @xai and Cassi outperformed all our baseline LLM configurations.
Here are the headline scores (lower is better, Brier):
• Superforecasters: 0.083
• grok-4-fast (external submission from @xai): 0.098
• ensemble_2_crowdadj (external submission from Cassi): 0.099
• @OpenAI’s GPT-5 (our own baseline run): 0.100
• @GoogleDeepMind’s Gemini-2.5-Pro (our own baseline run): 0.102
• @AnthropicAI’s Claude-Sonnet-4-5 (our own baseline run): 0.103
External submissions hold #2 and #3, ahead of all our baseline runs. However, all LLMs still lag behind superforecasters.
Today, we are launching the most rigorous ongoing source of expert forecasts on the future of AI: the Longitudinal Expert AI Panel (LEAP).
We’ve assembled a panel of 339 top experts across computer science, AI industry, economics, and AI policy.
Roughly every month—for the next three years—they’ll provide precise, falsifiable forecasts on the trajectory of AI capabilities, adoption, and impact.
Our results cover where experts predict major effects of AI, where they expect less progress than AI industry leaders, and where they disagree.
LEAP experts forecast major effects of AI by 2030, including:
⚡ 7x increase in AI’s share of U.S. electricity use (1% -> 7%)
🖥️ 9x increase in AI-assisted work hours (2% -> 18%)
By 2040, experts predict:
👥 30% of adults will use AI for companionship daily
🏆 60% chance that AI will solve or substantially assist in solving a Millennium Prize Problem
🚂 32% chance that AI will have been at least as impactful as a "technology of the millennium," like the printing press or the Industrial Revolution.
🧵Read on for more insights and results
Our LEAP panel is made up of the following experts:
🧑🔬 76 Top computer scientists (e.g., professors from top-20 universities)
🤖 76 AI industry experts (from frontier model and other leading AI companies)
💲 68 Leading economists (including many studying economic growth or technology at top universities)
🧠 119 Policy and think tank experts
🏆 12 Honorees from TIME’s 100 most influential people in AI, in 2023 and 2024
(Plus 60 highly accurate superforecasters and 1,400 members of the U.S. public)
For more details on our sample, see the full reports linked below.
We ask questions designed to elicit high-quality, specific forecasts about the future of AI and its effects. Example questions include:
⚡ What % of U.S. electricity will go towards training and deploying AI systems in 2027, 2030, and 2040?
🏢 What % of work hours will be assisted by generative AI in 2025, 2027, and 2030?
📊 At the end of 2040, how will people assess the impact of AI in comparison to past technological events?
In the first 3 waves of LEAP, we elicited forecasts on 18 questions regarding the future of AI. Wave 1 focused on the speed of AI progress and broad social impacts, Wave 2 on AI’s effect on science, and Wave 3 on AI adoption. For the full set of questions and results, see the reports linked below.
👇Superforecasters top the Tournament ForecastBench leaderboard, with a difficulty-adjusted Brier score of 0.081 (lower scores indicate higher accuracy).
🤖The best-performing LLM in our dataset is @OpenAI’s GPT-4.5 with a score of 0.101—a gap of 0.02.
A baseline model that always predicts 50% would yield a score of 0.25. Relative to this baseline, superforecasters are 68% and the best-performing LLM is 60% more accurate—a gap of 8 percentage points.
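The arithmetic behind those accuracy comparisons is straightforward to verify. A minimal sketch of a plain (not difficulty-adjusted, which is what the leaderboard actually reports) Brier score and the "X% more accurate than baseline" calculation:

```python
# Minimal Brier-score computation (illustrative; not ForecastBench's exact pipeline).
def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 0, 1, 1, 0]            # hypothetical resolved questions
always_half = [0.5] * len(outcomes)
assert brier(always_half, outcomes) == 0.25  # the 50% baseline, regardless of outcomes

def skill_vs_baseline(score, baseline=0.25):
    """Fractional improvement over the always-50% baseline."""
    return 1 - score / baseline

print(round(skill_vs_baseline(0.081), 2))  # superforecasters: 0.68
print(round(skill_vs_baseline(0.101), 2))  # best LLM: 0.6
```

Squaring makes the penalty grow steeply with miscalibration, which is why confident wrong forecasts are so costly under this score.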
(Wondering about newer AI models? More on them later!)
We now have the first accuracy results from the largest-ever existential risk forecasting tournament.
In 2022, we convened 80 experts and 89 superforecasters for the Existential Risk Persuasion Tournament (XPT), which collected thousands of forecasts on 172 questions across short-, medium-, and long-term time horizons.
We now have answers for 38 short-run questions covering AI progress, climate technology, bioweapons, nuclear weapons and more.
Here’s what we found out: 🧵
Respondents—especially superforecasters—underestimated AI progress.
Participants predicted the state-of-the-art accuracy of ML models on the MATH, MMLU, and QuaLITY benchmarks by June 2025.
Domain experts assigned probabilities of 21.4%, 25%, and 43.5% to the achieved outcomes.
Superforecasters assigned even lower probabilities: just 9.3%, 7.2%, and 20.1% respectively.
The International Mathematical Olympiad results were even more surprising.
AI systems achieved gold-level performance at the IMO in July 2025.
Superforecasters assigned this outcome just a 2.3% probability. Domain experts put it at 8.6%.
Today, we're excited to announce ForecastBench: a new benchmark for evaluating AI and human forecasting capabilities. Our research indicates that AI remains worse at forecasting than expert forecasters. 🧵
Evaluating LLM forecasting ability is tricky! Prior work asks models about events that already have (or have not) occurred, risking contamination of training data.
Our solution is to use questions about future events, the outcomes of which are unknowable when forecasts are made.
ForecastBench continuously generates new questions about future events, testing the ability of AI models and humans to make accurate probabilistic predictions across diverse domains.