🚨The White House just launched the Genesis Mission — a Manhattan Project for AI
The Department of Energy will build a national AI platform on top of U.S. supercomputers and federal science data, train scientific foundation models, and run AI agents + robotic labs to automate experiments in biotech, critical materials, nuclear fission/fusion, space, quantum, and semiconductors.
Let’s unpack what this order actually builds, and how it could rewire the AI, energy, and science landscape over the next decade:
1/ At the core is a new American Science and Security Platform.
DOE is ordered to turn the national lab system into an integrated stack that provides:
• HPC for large-scale model training, simulation, inference
• Domain foundation models across physics, materials, bio, energy
• AI agents to explore design spaces, evaluate experiments, automate workflows
• Robotic/automated labs + production tools for AI-directed experiments and manufacturing
National-scale AI scientist + AI lab tech as infrastructure.
2/ The targets are very explicit and very strategic.
Within 60 days, DOE has to propose at least 20 “national challenges” across the domains above: biotech, critical materials, nuclear fission and fusion, space, quantum, and semiconductors.
This is about energy dominance, supply chains, and defense.
3/ The timelines are aggressive enough to matter:
• 60 days → list of challenges
• 90 days → full inventory of federal compute/network/storage for Genesis
• 120 days → initial model + data assets, plus a plan to ingest more datasets (other agencies, academia, private sector)
• 240 days → map all robotic labs and automated facilities across national labs
• 270 days → demonstrate initial operating capability on at least one challenge
Goal: a functioning AI-for-science loop online in <9 months.
4/ This also formalizes a federal AI stack parallel to the commercial one.
The order tells DOE and the White House science office to:
• align agency AI programs and datasets onto this platform
• run joint funding calls and prize programs
• build partnership frameworks with external players (co-dev agreements, user facilities, data/model sharing, IP rules)
Nvidia, OpenAI, Anthropic, xAI, Google, the clouds, and biotech & chip companies are now potential suppliers and co-developers for a DOE AI system.
5/ Genesis marks a clear shift
Until now, frontier AI has mostly been driven by private labs. With Genesis, the U.S. is explicitly building a state-run AI backbone for science, energy, and security:
• DOE coordinates a national AI-for-science platform
• National labs and supercomputers become part of a unified AI stack
• Models, agents, and robotic labs are treated as strategic infrastructure, not just tools
The questions now: who supplies the compute and models, how IP and data are shared, and how fast other countries launch their own Genesis-style efforts.
On ProofBench-Advanced—where models prove formal mathematical theorems—GPT-5 scores 20%. Gemini Deep Think IMO Gold hits 65.7%. DeepSeek Math V2 (Heavy) scores 61.9%.
That's second place—but Gemini isn't open source.
This is the best open math model in the world. And DeepSeek released the weights. Apache 2.0.
Here's what they discovered:
1/ Why Normal LLMs Break on Real Math
Most large language models are great at sounding smart, but:
- They’re rewarded for the final answer, not the reasoning.
- If they accidentally land on the right number with bad logic, they still get full credit.
- Over time they become “confident liars”: fluent, persuasive, and sometimes wrong.
That’s fatal for real math, where the proof is the product.
To fix this, DeepSeek Math V2 changes what the model gets rewarded for: not just being right, but being rigorously right.
2/ The Core Idea: Generator + Verifier
Instead of one model doing everything, DeepSeek splits the job:
1. Generator – the “mathematician”
- Produces a full, step-by-step proof.
2. Verifier – the “internal auditor”
- Checks the proof for logical soundness.
- Ignores the final answer. It only cares about the reasoning.
This creates an internal feedback loop:
One model proposes, the other critiques.
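Here’s a minimal sketch of that loop in Python. This is not DeepSeek’s code or API; the function bodies are stubs, and the names and scoring scale are placeholders. It just shows the shape: the generator drafts, the verifier grades the reasoning, and the critique feeds the next draft.

```python
# Hypothetical generator–verifier loop (illustrative skeleton, not DeepSeek's code).
# `generate_proof` and `verify_proof` stand in for two separately trained models.

def generate_proof(problem: str, feedback: str = "") -> str:
    """Generator ("mathematician"): drafts a full step-by-step proof,
    optionally revising based on the verifier's last critique."""
    ...

def verify_proof(problem: str, proof: str) -> tuple[float, str]:
    """Verifier ("auditor"): returns a rigor score in [0, 1] plus a critique.
    It judges the reasoning, not whether the final answer looks right."""
    ...

def prove(problem: str, max_rounds: int = 4, threshold: float = 0.9) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        proof = generate_proof(problem, feedback)
        score, critique = verify_proof(problem, proof)
        if score >= threshold:     # accept only rigor-approved proofs
            return proof
        feedback = critique        # one model proposes, the other critiques
    return None                    # no proof survived the audit
```

In training, that same verifier signal is what the generator gets rewarded on, which is how “rigorously right” replaces “right answer” as the objective.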
Battery storage is already scaling—159 GW deployed globally, 926 GW projected by 2033.
Renewables needed it first. Now AI needs it too.
Tesla is deploying Megapacks at data centers. China is deploying 30 GW this year, integrating storage directly into AI buildout.
Why? Data centers can’t scale without solving three problems:
- 7-year interconnection queues
- power quality GPUs demand
- backup without diesel permits
Batteries solve all three ↓
Why AI Data Centers Need Batteries
Interconnection is broken. Utility connection takes 7+ years. Batteries bypass it. Skip the queue.
GPUs break traditional power. Training loads swing 90% at 30 Hz. Batteries smooth it in 30 milliseconds.
Diesel doesn’t scale. Permitting is hard. For 20-hour backup, batteries are cost-competitive.
The math: the battery system comes in at roughly 1% of total data center capex.
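Here’s a rough version of that math with illustrative assumptions of my own (the campus size, capex per MW, battery duration, and installed battery cost below are guesses, not figures from this thread):

```python
# Back-of-envelope only; all inputs are assumed, not sourced.
dc_power_mw     = 100          # assumed IT load of an AI campus
dc_capex_per_mw = 12_000_000   # assumed ~$12M/MW all-in data center capex
battery_hours   = 0.5          # short-duration buffer for smoothing / ride-through
battery_per_kwh = 250          # assumed installed battery system cost, $/kWh

dc_capex      = dc_power_mw * dc_capex_per_mw            # $1.2B
battery_kwh   = dc_power_mw * 1_000 * battery_hours      # 50,000 kWh
battery_capex = battery_kwh * battery_per_kwh            # $12.5M

print(f"battery share of capex: {battery_capex / dc_capex:.1%}")  # ~1.0%
```

The exact share obviously moves with duration and installed cost, but a short-duration buffer lands in the low single digits of percent under these assumptions.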
The Scale
Global capacity: 159 GW by end-2024. Up 85% from 86 GW in 2023. Projected: 926 GW by 2033.
Cost curve: $115/kWh in 2024, down 84% from $723/kWh in 2013. Still falling.
Economics flipped. Solar plus 4-hour storage runs ~$76/MWh. New gas peakers cost $80-120/MWh.
The universe isn’t just expanding — it’s speeding up
13.8 billion years after the Big Bang, astronomers expected gravity to be gradually slowing cosmic expansion. Instead, when they looked deep into space, they found the opposite: the universe is accelerating.
Whatever drives that acceleration makes up ~70% of the cosmos.
We call it dark energy.
We can measure it. We can see its effects. So what is it, really?
How we figured this out
Cepheid stars: the distance trick
Henrietta Leavitt discovered that certain stars (Cepheid variables) get brighter and dimmer with a regular period, and that the period tells you their true brightness. Compare that with how bright they appear, and you can measure the distance to faraway galaxies.
Redshift: galaxies on the move
Vesto Slipher used spectra of galaxies to show many had their light stretched to longer, redder wavelengths.
Redder → moving away faster.
Hubble & the expanding universe
Edwin Hubble and Milton Humason combined Cepheid distances with redshift and found a pattern:
> The farther a galaxy is, the faster it’s receding.
That’s the Hubble–Lemaître law: clear evidence that the universe is expanding.
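A quick sense of the numbers, using an assumed round value of H₀ (the measured value is roughly 67–73 km/s per megaparsec and still debated):

```python
# Hubble–Lemaître law: recession velocity is proportional to distance.
H0 = 70          # km/s per megaparsec -- assumed round value for illustration
d_mpc = 100      # a galaxy 100 megaparsecs away

v = H0 * d_mpc   # = 7,000 km/s
print(f"at {d_mpc} Mpc, recession velocity ≈ {v:,} km/s")
```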
The shock: expansion is accelerating
In the 1990s, two teams studied Type Ia supernovae, stellar explosions so consistent in brightness that they act like “standard candles.”
By comparing how bright they should be to how bright they look, you can get distance.
By measuring redshift, you get how fast they’re moving away.
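Both measurements boil down to simple relations. A sketch with illustrative numbers of my own (chosen to be roughly consistent with the H₀ ≈ 70 example above); the inverse-square law and v ≈ c·z for small redshifts are the standard pieces:

```python
import math

# Standard candle: known luminosity L vs. measured flux F gives distance,
# via F = L / (4*pi*d^2)  =>  d = sqrt(L / (4*pi*F)).
L = 1.2e36                              # W, illustrative supernova-scale luminosity
F = 1.0e-14                             # W/m^2, illustrative measured flux
d_m = math.sqrt(L / (4 * math.pi * F))  # distance in meters
d_mpc = d_m / 3.086e22                  # ~100 Mpc

# Redshift: how much the light is stretched gives velocity (for small z).
lam_emit, lam_obs = 656.3, 671.6        # nm, illustrative H-alpha line
z = (lam_obs - lam_emit) / lam_emit     # ~0.023
v = z * 299_792                         # km/s, v ≈ c*z for small z

print(f"d ≈ {d_mpc:.0f} Mpc, z ≈ {z:.3f}, v ≈ {v:,.0f} km/s")
```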
The surprise:
• The supernovae were dimmer and farther away than expected.
• That only made sense if, over billions of years, the universe’s expansion had sped up instead of slowing down.
This cosmic acceleration is what we now attribute to dark energy.
Nvidia pulls in nearly $60B per quarter, almost all of it from a handful of hyperscalers who plan their AI roadmaps around Jensen's release cycle.
But three shifts are happening at once:
• Google is committing up to one million TPUs to Anthropic starting 2026 — the first credible alternative at frontier scale.
• Racks are already pushing hundreds of kilowatts, with megawatt systems on the horizon.
• Nvidia has $26B in commitments to rent back its own GPUs from cloud partners — up from $12.6B last quarter.
The real constraint isn't chips anymore — it's power and memory.
Over the next 3–5 years, this creates a fractured landscape: Nvidia GPUs as the default utility, Google TPUs as a real second ecosystem, and hyperscalers racing to escape the Nvidia tax.
Let’s walk through how that actually plays out:
1/ Nvidia now: dominant, concentrated, and structurally exposed
Nvidia's latest quarter (fiscal Q3 2026) is extreme:
• $57B in revenue, +62% YoY
• $51.2B from data center alone
But it’s dangerously concentrated:
• 4 customers = 61% of sales (up from 56% last quarter).
And Nvidia is renting back its own chips:
• $26B in off-balance-sheet commitments to pay hyperscalers for GPUs they can’t fully rent out, up from $12.6B the prior quarter.
That creates a circular-demand loop:
• sell chips to clouds → invest in AI customers → rent those same chips back when there’s slack.
Not a crisis. But a structural dependency that didn’t exist two years ago.
2/ TPUs: no longer just for Google
Google's 7th-gen TPU (Ironwood) is the first built for inference over training.
Why that matters: the bottleneck is shifting. Training a frontier model is a one-time cost. Serving it to billions of users is the recurring expense that actually scales.
The specs reflect this:
• Pods scale to 9,216 accelerators
• 1.77 PB of HBM3E memory per pod
• 9.6 Tb/s optical circuit-switching fabric
That memory pool and interconnect matter more than peak FLOPs. Large inference workloads are memory-bandwidth bound. Ironwood is designed around that reality.
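Quick arithmetic on those pod figures (the per-chip number below is just the division of the quoted totals, not a separately sourced spec):

```python
# Back-of-envelope on the Ironwood pod figures quoted above.
chips_per_pod = 9_216
pod_hbm_pb    = 1.77                       # petabytes of HBM3E per pod

pod_hbm_gb   = pod_hbm_pb * 1_000_000      # PB -> GB (decimal units)
hbm_per_chip = pod_hbm_gb / chips_per_pod  # ≈ 192 GB of HBM per accelerator

print(f"~{hbm_per_chip:.0f} GB HBM per chip")
# Enough to keep large KV caches and model shards resident -- which is the
# point when inference is memory-bandwidth bound rather than FLOP bound.
```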
Google's framing: "The hardest part is now serving AI to billions of users."