🚨 Did You Know: 10 years ago, Infosys was one of the earliest backers of OpenAI. They invested alongside Elon Musk, Peter Thiel, AWS, and others ($1B → ~$45B today).
Instead of doubling down, they pushed out their CEO, Vishal Sikka, and now their stake is worth nothing.
How could this possibly happen? Who is Vishal? More below:
1/ December 2015: When Infosys Bet on OpenAI
While most tech executives were still googling "machine learning," one CEO saw the AI revolution coming.
Vishal Sikka, CEO of Infosys, committed the company to back OpenAI alongside tech's biggest names.
But he wasn't your typical IT services CEO.
He understood something most executives missed: AI was about to eat software.
2/ Meet the Visionary: Vishal Sikka
- First non-founder CEO of Infosys
- PhD in AI from Stanford
- Studied under John McCarthy (who coined the term "Artificial Intelligence")
- Mentored by Marvin Minsky (one of AI's founding fathers)
He didn't join Infosys to run an IT services company.
He came to transform it.
3/ Sikka’s 2015 Prediction: AI Will Reshape Infosys
"Most of our work is in building and maintaining software systems, and AI will increasingly shape the construction and evolution of intelligent software systems, in all kinds of domains and industries."
"As a large services company, many parts of our work can transform fundamentally with AI."
His thesis was simple:
- Infosys had 150,000 engineers doing repetitive work
- AI would automate that work
He saw what other IT leaders missed.
4/ OpenAI, the Nonprofit (2015)
OpenAI was structured as a nonprofit research lab dedicated to ensuring artificial general intelligence would benefit all of humanity.
This seemed noble at the time. So Infosys structured their commitment as a charitable donation, not an equity investment.
5/ The War Inside Infosys (Why Things Blew Up)
Inside Infosys, there was a fundamental cultural clash between CEO Vishal Sikka and co-founder N.R. Narayana Murthy:
Murthy's Ethos: Conservative financial management, modest compensation, proven business models. The values that built Infosys.
Sikka's Vision: Aggressive AI investment, Silicon Valley talent acquisition, fundamental business model transformation. What was needed to survive disruption.
By 2017, their public feud forced the board to choose.
Murthy won. Sikka resigned.
6/ The Year Everything Changed: 2019
The critical inflection point came when OpenAI restructured from nonprofit to "capped-profit" model.
This was Infosys's last chance to convert their donor relationship into a strategic partnership.
But Infosys did nothing. They were consumed by the Sikka-Murthy fallout, and the new leadership had zero interest in AI partnerships.
Meanwhile, Microsoft turned Sikka’s thesis into action and secured the partnership of the century.
7/ How Microsoft Won Enterprise AI
Microsoft invested $1B in 2019 (roughly $13B cumulatively since) and negotiated exclusive partnership terms:
- OpenAI’s sole compute provider
- 49% profit share
- OpenAI IP rights for use in Microsoft products
- First access to new models
Result: Microsoft emerged as the enterprise-AI leader, with an AI annual revenue run-rate of ~$13B, and a (rumored) ~30% stake in OpenAI—about $150B at a $500B valuation.
If Infosys had doubled down in 2019, a $1B bet could be worth $45B+ today.
The nonprofit they donated to in 2015 is now worth about 4.3x their entire company.
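The arithmetic behind those figures, using only the numbers the thread itself quotes (a rough back-of-envelope sketch, not a valuation):

```python
# Back-of-envelope using only the figures quoted in this thread; nothing here is audited.
openai_valuation_b = 500               # rumored OpenAI valuation, in $B
msft_stake = 0.30                      # rumored Microsoft stake
print(msft_stake * openai_valuation_b)         # 150.0 -> "about $150B"

infosys_bet_b = 1                      # hypothetical 2019 bet, in $B
multiple = 45                          # the ~45x appreciation the thread implies
print(infosys_bet_b * multiple)                # 45 -> "$45B+"

implied_infosys_mcap_b = openai_valuation_b / 4.3
print(round(implied_infosys_mcap_b))           # ~116 -> the market cap implied by "4.3x"
```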
Let that sink in.
9/ Conclusion: The Price of Moving Too Slow
Vishal Sikka’s tenure at Infosys is one of corporate history’s great what-ifs.
He arrived with a comprehensive plan to ready Infosys for the AI era: shift from labor arbitrage to knowledge automation, from projects to platforms, from cost to value. And he began rewiring the company to make that pivot real.
His 2017 departure did not just end a CEO’s term. It interrupted a transformation that could have positioned Infosys, and by extension Indian IT, to own the AI economy rather than rent it.
Today, Indian IT’s mass layoffs, skills gaps, and creeping commoditization are exactly the shocks his strategy was built to absorb.
In the end, Sikka drew the blueprint, Microsoft built it, and Infosys pays the rent.
On ProofBench-Advanced, where models must write out full mathematical proofs, GPT-5 scores 20%. Gemini Deep Think IMO Gold hits 65.7%. DeepSeek Math V2 (Heavy) scores 61.9%.
That's second place—but Gemini isn't open source.
This is the best open math model in the world. And DeepSeek released the weights. Apache 2.0.
Here's what they discovered:
1/ Why Normal LLMs Break on Real Math
Most large language models are great at sounding smart, but:
- They’re rewarded for the final answer, not the reasoning.
- If they accidentally land on the right number with bad logic, they still get full credit.
- Over time they become “confident liars”: fluent, persuasive, and sometimes wrong.
That’s fatal for real math, where the proof is the product.
To fix this, DeepSeek Math V2 changes what the model gets rewarded for: not just being right, but being rigorously right.
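A minimal sketch of that shift in reward design (toy Python; the function names and grading scheme are illustrative assumptions, not DeepSeek’s actual training code):

```python
# Toy illustration of the reward shift: answer-only reward vs. proof-graded reward.
# Everything here is a simplified assumption for exposition, not DeepSeek's real pipeline.

def answer_only_reward(model_answer: str, reference_answer: str) -> float:
    # Classic setup: full credit for the right final answer, even if the reasoning was bad.
    return 1.0 if model_answer.strip() == reference_answer.strip() else 0.0

def proof_graded_reward(proof_text: str, verifier_score: float) -> float:
    # DeepSeek-style setup (conceptually): the reward tracks how sound the verifier
    # judges the reasoning to be, not whether a final number matches.
    # verifier_score is assumed to lie in [0, 1].
    return verifier_score

# A lucky guess gets full credit under the old scheme...
print(answer_only_reward("42", "42"))                               # 1.0
# ...but earns little if the verifier finds the argument unsound.
print(proof_graded_reward("hand-wavy proof", verifier_score=0.1))   # 0.1
```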
2/ The Core Idea: Generator + Verifier
Instead of one model doing everything, DeepSeek splits the job:
1. Generator – the “mathematician”
- Produces a full, step-by-step proof.
2. Verifier – the “internal auditor”
- Checks the proof for logical soundness.
- Ignores the final answer. It only cares about the reasoning.
This creates an internal feedback loop:
One model proposes, the other critiques.
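A minimal sketch of that propose-and-critique loop (the function names, scoring scale, and stopping rule are illustrative assumptions, not DeepSeek’s actual interfaces):

```python
# Hypothetical generator/verifier loop, sketched for intuition only.
# generate_proof() and verify_proof() are stand-ins for the two models described above.

def generate_proof(problem: str, feedback: str | None = None) -> str:
    # Generator: drafts a step-by-step proof, optionally revising against a critique.
    base = f"Step-by-step proof sketch for: {problem}"
    return base + (f" [revised after critique: {feedback}]" if feedback else "")

def verify_proof(problem: str, proof: str) -> tuple[float, str]:
    # Verifier: scores the reasoning's soundness in [0, 1] and returns a critique.
    # It never looks at the final answer, only the argument.
    return 0.95, "steps look sound"   # dummy values for the sketch

def solve(problem: str, max_rounds: int = 4, threshold: float = 0.9) -> str:
    proof = generate_proof(problem)
    for _ in range(max_rounds):
        score, critique = verify_proof(problem, proof)
        if score >= threshold:                              # the auditor is satisfied
            return proof
        proof = generate_proof(problem, feedback=critique)  # propose again, informed by the critique
    return proof                                            # best effort after max_rounds

print(solve("Show that the sum of two even integers is even."))
```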
Battery storage is already scaling—159 GW deployed globally, 926 GW projected by 2033.
Renewables needed it first. Now AI needs it too.
Tesla is deploying Megapacks at data centers. China is deploying 30 GW this year, integrating storage directly into AI buildout.
Why? Data centers can’t scale without solving three problems:
- 7-year interconnection queues
- the power quality that GPUs demand
- backup without diesel permits
Batteries solve all three ↓
Why AI Data Centers Need Batteries
Interconnection is broken. Utility connection takes 7+ years. Batteries bypass it. Skip the queue.
GPUs break traditional power. Training loads swing 90% at 30 Hz. Batteries smooth it in 30 milliseconds.
Diesel doesn’t scale. Permitting is hard. For 20-hour backup, batteries are cost-competitive.
The math: ~1% of data center capex.
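A rough way to sanity-check that “~1%” figure; every input below is an assumption for illustration, since the thread does not publish its own:

```python
# Back-of-envelope for "batteries ~ 1% of data-center capex".
# All inputs are illustrative assumptions, not figures from the thread.
dc_capex_per_mw = 40e6          # assumed all-in AI data-center capex, $/MW (GPUs dominate)
battery_hours = 1.0             # assumed buffer for smoothing / short ride-through, hours
battery_cost_per_kwh = 400      # assumed installed system cost, $/kWh (cells alone are cheaper)

battery_capex_per_mw = battery_hours * 1000 * battery_cost_per_kwh   # $ per MW of load
share = battery_capex_per_mw / dc_capex_per_mw
print(f"Battery capex per MW: ${battery_capex_per_mw:,.0f} (~{share:.1%} of data-center capex)")
# -> Battery capex per MW: $400,000 (~1.0% of data-center capex)
```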
The Scale
Global capacity: 159 GW by end-2024. Up 85% from 86 GW in 2023. Projected: 926 GW by 2033.
Cost curve: $115/kWh in 2024, down 84% from $723/kWh in 2013. Still falling.
Economics flipped. Solar plus 4-hour storage runs ~$76/MWh. New gas peakers cost $80-120/MWh.
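A quick check that those cost-curve numbers hang together, and the annual decline they imply:

```python
# Quick check of the cost-curve figures quoted above (2013 -> 2024).
start, end, years = 723, 115, 2024 - 2013        # $/kWh, $/kWh, 11 years
total_decline = 1 - end / start
annual_decline = 1 - (end / start) ** (1 / years)
print(f"Total decline: {total_decline:.0%}")            # ~84%
print(f"Implied annual decline: {annual_decline:.0%}")  # ~15% per year
```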
The universe isn’t just expanding — it’s speeding up
13.8 billion years after the Big Bang, astronomers expected gravity to be gradually slowing cosmic expansion. Instead, when they looked deep into space, they found the opposite: the expansion is accelerating.
Whatever drives that acceleration makes up ~70% of the cosmos.
We call it dark energy.
We can measure it. We can see its effects. So what is it, really?
How we figured this out
Cepheid stars: the distance trick
Henrietta Leavitt discovered that certain stars (Cepheid variables) brighten and dim with a regular period, and that the period tells you their true brightness, which lets us measure distances to faraway galaxies.
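In symbols, the trick is a period-to-luminosity relation plus the standard distance modulus (the Leavitt-law coefficients below are approximate modern values):

```latex
% Leavitt law (approximate): the pulsation period P gives the true brightness M_V.
M_V \approx -2.43\,\bigl(\log_{10} P_{\mathrm{days}} - 1\bigr) - 4.05
% Distance modulus: comparing true brightness M with apparent brightness m gives the distance d.
m - M = 5\log_{10}\!\frac{d}{10\,\mathrm{pc}}
\quad\Rightarrow\quad
d = 10^{(m - M + 5)/5}\ \mathrm{pc}
```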
Redshift: galaxies on the move
Vesto Slipher used spectra of galaxies to show many had their light stretched to longer, redder wavelengths.
Redder → moving away faster.
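Quantitatively, redshift is just the fractional stretch of the wavelength, and for nearby galaxies it maps to a recession velocity:

```latex
% Redshift z and the low-z velocity approximation (c is the speed of light).
z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}},
\qquad
v \approx c\,z \quad \text{for } z \ll 1
```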
Hubble & the expanding universe
Edwin Hubble and Milton Humason combined Cepheid distances with redshift and found a pattern:
>The farther a galaxy is, the faster it’s receding.
That’s the Hubble–Lemaître law: clear evidence that the universe is expanding.
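Written as a formula (the value of H0 is approximate; modern measurements cluster around 67-73 km/s/Mpc):

```latex
% Hubble–Lemaître law: recession velocity grows linearly with distance.
v = H_0\, d,
\qquad
H_0 \approx 70\ \mathrm{km\ s^{-1}\ Mpc^{-1}}
```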
The shock: expansion is accelerating
In the 1990s, two teams studied Type Ia supernovae, stellar explosions so consistent in brightness that they act like “standard candles.”
By comparing how bright they should be to how bright they look, you can get distance.
By measuring redshift, you get how fast they’re moving away.
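That comparison is just the inverse-square law plus the same distance-modulus bookkeeping used for Cepheids:

```latex
% Standard candles: known luminosity L plus measured flux F gives the luminosity distance d_L.
F = \frac{L}{4\pi d_L^{2}}
\;\Rightarrow\;
d_L = \sqrt{\frac{L}{4\pi F}},
\qquad
m - M = 5\log_{10}\!\frac{d_L}{10\,\mathrm{pc}}
```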
The surprise:
• The supernovae were dimmer and farther away than expected.
• That only made sense if, over billions of years, the universe’s expansion had sped up instead of slowing down.
This cosmic acceleration is what we now attribute to dark energy.
🚨The White House just launched the Genesis Mission — a Manhattan Project for AI
The Department of Energy will build a national AI platform on top of U.S. supercomputers and federal science data, train scientific foundation models, and run AI agents + robotic labs to automate experiments in biotech, critical materials, nuclear fission/fusion, space, quantum, and semiconductors.
Let’s unpack what this order actually builds, and how it could rewire the AI, energy, and science landscape over the next decade:
1/ At the core is a new American Science and Security Platform.
DOE is ordered to turn the national lab system into an integrated stack that provides:
• HPC for large-scale model training, simulation, inference
• Domain foundation models across physics, materials, bio, energy
• AI agents to explore design spaces, evaluate experiments, automate workflows
• Robotic/automated labs + production tools for AI-directed experiments and manufacturing
In short: a national-scale AI scientist and AI lab, delivered as infrastructure.
2/ The targets are very explicit and very strategic.
Within 60 days, DOE has to propose at least 20 “national challenges” in: