🚨 The White House just launched the Genesis Mission: a Manhattan Project for AI
The Department of Energy will build a national AI platform on top of U.S. supercomputers and federal science data, train scientific foundation models, and run AI agents + robotic labs to automate experiments in biotech, critical materials, nuclear fission/fusion, space, quantum, and semiconductors.
Let’s unpack what this order actually builds, and how it could rewire the AI, energy, and science landscape over the next decade:
1/ At the core is a new American Science and Security Platform.
DOE is ordered to turn the national lab system into an integrated stack that provides:
• HPC for large-scale model training, simulation, inference
• Domain foundation models across physics, materials, bio, energy
• AI agents to explore design spaces, evaluate experiments, automate workflows
• Robotic/automated labs + production tools for AI-directed experiments and manufacturing
National-scale AI scientist + AI lab tech as infrastructure.
2/ The targets are very explicit and very strategic.
Within 60 days, DOE has to propose at least 20 “national challenges” spanning the domains above: biotech, critical materials, nuclear fission and fusion, space, quantum, and semiconductors.
This is about energy dominance, supply chains, and defense.
3/ The timelines are aggressive enough to matter:
• 60 days → list of challenges
• 90 days → full inventory of federal compute/network/storage for Genesis
• 120 days → initial model + data assets, plus a plan to ingest more datasets (other agencies, academia, private sector)
• 240 days → map all robotic labs and automated facilities across national labs
• 270 days → demonstrate initial operating capability on at least one challenge
Goal: a functioning AI-for-science loop online in under 9 months.
4/ This also formalizes a federal AI stack parallel to the commercial one.
The order tells DOE and the White House science office to:
• align agency AI programs and datasets onto this platform
• run joint funding calls and prize programs
• build partnership frameworks with external players (co-dev agreements, user facilities, data/model sharing, IP rules)
Nvidia, OpenAI, Anthropic, xAI, Google, the clouds, and biotech & chip companies are now potential suppliers and co-developers for a DOE AI system.
5/ Genesis marks a clear shift.
Until now, frontier AI has mostly been driven by private labs. With Genesis, the U.S. is explicitly building a state-run AI backbone for science, energy, and security:
• DOE coordinates a national AI-for-science platform
• National labs and supercomputers become part of a unified AI stack
• Models, agents, and robotic labs are treated as strategic infrastructure, not just tools
The open questions now: who supplies the compute and models, how IP and data get shared, and how fast other countries launch their own Genesis-style efforts.
The universe isn’t just expanding — it’s speeding up
13.8 billion years after the Big Bang, astronomers expected gravity to be gradually slowing cosmic expansion. Instead, when they looked deep into space, they found the opposite: the expansion is accelerating.
Whatever drives that acceleration makes up ~70% of the cosmos.
We call it dark energy.
We can measure it. We can see its effects. So what is it, really?
How we figured this out
Cepheid stars: the distance trick
Henrietta Leavitt discovered that certain stars (Cepheid variables) brighten and dim with a regular period, and that the period tells you their true brightness. Compare that true brightness to how bright they appear, and you get the distance to faraway galaxies.
Redshift: galaxies on the move
Vesto Slipher used spectra of galaxies to show many had their light stretched to longer, redder wavelengths.
Redder → moving away faster.
Hubble & the expanding universe
Edwin Hubble and Milton Humason combined Cepheid distances with redshift and found a pattern:
> The farther a galaxy is, the faster it’s receding.
That’s the Hubble–Lemaître law: clear evidence that the universe is expanding.
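In code, the law is just a straight line. A minimal sketch, assuming H0 ≈ 70 km/s/Mpc (a round illustrative value; modern measurements cluster roughly between 67 and 73):

```python
# Hubble–Lemaître law: recession velocity grows linearly with distance.
# H0 = 70 km/s/Mpc is an assumed round value for illustration.

H0 = 70.0  # Hubble constant, km/s per megaparsec

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity in km/s for a galaxy distance_mpc megaparsecs away."""
    return H0 * distance_mpc

for d_mpc in (10, 100, 1000):
    print(f"{d_mpc:>5} Mpc -> {recession_velocity(d_mpc):>8,.0f} km/s")
```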
The shock: expansion is accelerating
In the 1990s, two teams studied Type Ia supernovae, stellar explosions so consistent in brightness that they act like “standard candles.”
By comparing how bright they should be to how bright they look, you can get distance.
By measuring redshift, you get how fast they’re moving away.
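Both steps fit in a few lines. A minimal sketch using the standard distance-modulus relation m - M = 5 log10(d / 10 pc) and the small-z approximation v ≈ cz; the inputs are illustrative, not a real supernova (Type Ia explosions peak near absolute magnitude -19.3):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def distance_pc(m_apparent: float, M_absolute: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5*log10(d/10pc)."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

def recession_kms(z: float) -> float:
    """Recession velocity in km/s; v = c*z only holds well for small z."""
    return C_KM_S * z

m, M, z = 24.0, -19.3, 0.5  # illustrative apparent magnitude, Type Ia peak, redshift
print(f"distance ~ {distance_pc(m, M) / 1e6:,.0f} Mpc")   # ~4,571 Mpc
print(f"velocity ~ {recession_kms(z):,.0f} km/s (crude at z = 0.5)")
```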
The surprise:
• The supernovae were dimmer and farther away than expected.
• That only made sense if, over billions of years, the universe’s expansion had sped up instead of slowing down.
This cosmic acceleration is what we now attribute to dark energy.
Nvidia pulls in nearly $60B per quarter, almost all of it from a handful of hyperscalers who plan their AI roadmaps around Jensen's release cycle.
But three shifts are happening at once:
• Google is committing up to one million TPUs to Anthropic starting in 2026, the first credible alternative at frontier scale.
• Racks are already pushing hundreds of kilowatts, with megawatt systems on the horizon.
• Nvidia has $26B in commitments to rent back its own GPUs from cloud partners — up from $12.6B last quarter.
The real constraint isn't chips anymore — it's power and memory.
Over the next 3–5 years, this creates a fractured landscape: Nvidia GPUs as the default utility, Google TPUs as a real second ecosystem, and hyperscalers racing to escape the Nvidia tax.
Let’s walk through how that actually plays out:
1/ Nvidia now: dominant, concentrated, and structurally exposed
Nvidia's latest quarter (fiscal Q3 2026) is extreme:
• $57B in revenue, +62% YoY
• $51.2B from data center alone
But it’s dangerously concentrated:
• 4 customers = 61% of sales (up from 56% last quarter).
And Nvidia is renting back its own chips:
• $26B in off-balance-sheet commitments to pay hyperscalers for GPUs they can’t fully rent out, up from $12.6B the prior quarter.
That creates a circular-demand loop:
• sell chips to clouds → invest in AI customers → rent those same chips back when there’s slack.
Not a crisis. But a structural dependency that didn’t exist two years ago.
2/ TPUs: no longer just for Google
Google's 7th-gen TPU (Ironwood) is the first built for inference over training.
Why that matters: the bottleneck is shifting. Training a frontier model is a one-time cost. Serving it to billions of users is the recurring expense that actually scales.
The specs reflect this:
• Pods scale to 9,216 accelerators
• 1.77 PB of HBM3E memory per pod
• 9.6 Tb/s optical circuit-switching fabric
That memory pool and interconnect matter more than peak FLOPs. Large inference workloads are memory-bandwidth bound. Ironwood is designed around that reality.
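Quick arithmetic on those specs: 1.77 PB across 9,216 chips works out to roughly 192 GB of HBM3E per chip. And here is a minimal sketch of why decoding is bandwidth-bound rather than FLOPs-bound, using assumed illustrative numbers, not published Ironwood specs:

```python
# Autoregressive decoding streams the model's weights out of memory for
# every generated token, so HBM bandwidth (not FLOPs) caps tokens/sec.
# All numbers below are illustrative assumptions.

hbm_gb_per_s = 7_000   # assumed per-chip HBM bandwidth, GB/s
params_billion = 70    # a 70B-parameter model
bytes_per_param = 2    # bf16/fp16 weights

weights_gb = params_billion * bytes_per_param  # 140 GB read per token
ceiling_tok_s = hbm_gb_per_s / weights_gb      # ~50 tokens/s

print(f"weights read per token: {weights_gb} GB")
print(f"bandwidth-bound ceiling: ~{ceiling_tok_s:.0f} tokens/s per model copy")
# Batching amortizes weight reads across requests, which is why big pooled
# memory and fast interconnect matter more than peak FLOPs.
```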
Google's framing: "The hardest part is now serving AI to billions of users."
The U.S. Power Crisis: How AI Data Centers Are Breaking the Grid
AI data centers are on track to become one of the biggest single loads on the U.S. grid. Data center electricity use is projected to jump from 176 TWh in 2023 to 450–580 TWh by 2028—up to 12% of all U.S. electricity.
That surge is slamming into a grid already strained by aging infrastructure, generator retirements, transformer shortages, and a collapse in transmission build-out.
By 2028, the U.S. faces a 13–73 GW shortfall of firm capacity—enough to power 3–18 million homes. This isn’t a distant 2040 climate scenario; it’s a 2025–2028 crunch already showing up in higher bills and growing reliability risks.
What does the next decade look like? Who pays for it? Here's the full breakdown:
The Demand Shock: A Collision with Reality
For two decades, U.S. electricity demand was flat. That era is over.
• The AI Factor: Traditional data centers consume 5-10 kW per rack. AI clusters require 60+ kW per rack, a 6-12x increase.
• Scale: A single cluster of 100,000 NVIDIA H100 GPUs consumes roughly 150 MW, enough to power a small city (sanity-checked in the sketch below).
• The Timeline Mismatch: You can build a data center in 2-3 years. A power plant takes 5-15 years; transmission lines take 7-20 years.
Demand is simply outrunning the physical ability to build infrastructure.
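That 150 MW figure is easy to sanity-check. A rough sketch, with assumed overhead and PUE values; real facilities vary:

```python
# Sanity check on "100,000 H100s ~ 150 MW". Overhead multiplier and PUE
# are assumptions for illustration.

gpus = 100_000
gpu_watts = 700        # H100 SXM board power
server_overhead = 1.8  # assumed multiplier for CPUs, NICs, fans, storage
pue = 1.25             # assumed power usage effectiveness (cooling, losses)

it_load_mw = gpus * gpu_watts * server_overhead / 1e6  # ~126 MW
facility_mw = it_load_mw * pue                         # ~158 MW

print(f"IT load:       ~{it_load_mw:.0f} MW")
print(f"facility draw: ~{facility_mw:.0f} MW")
```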
Generation: Building Too Little, Too Late
We need to double the pace of generation additions, but every path forward is blocked.
• Natural Gas (The Only Near-Term Fix): Gas is the only baseload power deployable by 2028. However, turbines face 7-year wait times, and new plants face intense environmental opposition.
• Nuclear (The 2030s Solution): Despite hype from Amazon, Google, and Microsoft regarding Small Modular Reactors (SMRs), zero commercial SMRs will be online by 2028. The earliest optimistic timelines are 2030-2035.
• Renewables & Storage: Solar and batteries are growing fast, but they lack the "capacity factor" needed for 24/7 AI operations. Batteries are great for peak shaving, not multi-day backup.
• Coal: In a desperate move, utilities in Nebraska and Maryland are delaying coal plant retirements just to keep the lights on.
Yann LeCun has been saying this for years. Now he's leaving Meta to prove it.
LeCun pioneered convolutional neural networks, the tech behind every smartphone camera and self-driving car today. He won the Turing Award in 2018, computing's Nobel Prize.
At 65, the leader of Meta's FAIR research lab is walking away from $600 billion in AI infrastructure, betting against the entire industry: Meta, OpenAI, Anthropic, xAI, Google.
Who is @ylecun? Why is he leaving, and why does his next move matter? Here's the story:
Who is Yann LeCun?
- Created convolutional neural networks (CNNs) in the 1980s — now foundational to computer vision
- Built LeNet at Bell Labs → first large-scale application of deep learning (bank check reading)
- Won Turing Award (2018) with Hinton & Bengio
- Joined Meta 2013, founded FAIR (Fundamental AI Research)
- Built a culture of open research: publishing freely, releasing open models
He's one of the "godfathers of deep learning."
LeCun's Core Technical Position
LeCun has been consistent since 2022: Large language models have fundamental limitations. They predict text patterns but lack:
- Understanding of physical world dynamics
- Persistent memory
- Causal reasoning
- Goal-directed planning
His famous analogy: "We can't even reproduce cat intelligence or rat intelligence, let alone dog intelligence."
He advocates for “world models”: AI systems that learn by observing the physical world, not just reading text. That’s not a rejection of LLMs as useless, but a belief that they’re insufficient as a path to general intelligence.
Two years ago, everyone was hiring.
One year ago, layoffs started.
Today?
According to Challenger, Gray & Christmas layoff tracking and recent Federal Reserve Bank of New York data:
- 33,281 tech layoffs in October 2025—highest monthly total in 20 years
- Over 141,000 tech workers laid off in 2025 (through October)
- Computer Science graduates: 6.1% unemployment
- Philosophy majors: 3.2% unemployment
- CS majors now face nearly twice the unemployment rate of philosophy majors
Everyone thinks AI is replacing jobs.
But that's not what's happening.
Junior hiring has collapsed, while senior engineers continue to see strong demand.
If AI makes coding more efficient, why this split? Let's dive in:
AI Isn't Taking Your Job: What's Really Happening in Tech Hiring
Young professionals aged 22-25 face the most challenging entry-level job market in decades across multiple knowledge-work industries.
Entry-level position declines from 2022 peaks:
- Tech jobs at Big Tech firms: Down ~50%
- Management consulting analyst roles: Down 35%
- Investment banking analysts: Down 30%
- Marketing coordinator positions: Down 28%
New graduate hiring has collapsed:
- 2023: New graduates represented 25% of tech hires
- 2024: Dropped to approximately 7%
This represents a 72% year-over-year decline in new graduate hiring rates.
Why Companies Stopped Hiring Juniors
When Google CEO Sundar Pichai announced that AI generates over 25% of their code—with senior engineers reviewing every line—companies made a calculation:
"Why hire three junior developers to write boilerplate when AI can generate it and one senior can review it?"
This logic has three problems—but companies adopted it anyway.
First, it assumes AI productivity gains materialize as advertised.
Second, it ignores the long-term talent pipeline.
Third, it overlooks that AI isn't actually the primary driver of these cuts.
To understand what's really happening, we need to examine whether AI delivers on its promises.