SandemanStocks
Feb 9 · 15 tweets · 2 min read
$NBIS People are still thinking about AI infrastructure like it’s a normal cloud cycle. It’s not. We’re watching the early buildout of the physical backbone of the AI economy. And by 2030, that backbone could be worth trillions. For $NBIS, a share price of $800 to $1,900 in 2030 is a distinct possibility.

Thread 👇
Compute demand is compounding faster than almost any infrastructure buildout in modern history. Training clusters are getting larger. Inference demand is exploding. Physical AI is emerging. Robotics is coming online.
All of this runs on compute.
By 2030, the winning AI infrastructure companies won’t just be “cloud providers.” They’ll be industrial-scale compute utilities powering intelligence across the global economy.
That’s the category Nebius is building toward.
If Nebius executes, the story between now and 2030 is straightforward:
• Massive GPU deployment
• Power capacity expansion
• AI cloud revenue scaling
• Margin expansion from utilization
• Software and platform leverage
Infrastructure first. Profitability later.
Think about the scale shift.
Traditional cloud grew alongside the internet.
AI cloud grows alongside machine intelligence.
That’s a much steeper curve.
Let’s talk numbers.
If Nebius reaches something like $25B–$40B in annual revenue by 2030, that would place it firmly in the top tier of AI-native infrastructure providers.
Not impossible in a world where AI compute demand keeps doubling every few years.
Infrastructure companies at scale often trade between 8–12x revenue during strong growth phases.
That would imply something like:
$200B to $480B market cap potential.
Using about 250M shares outstanding, that translates to a 2030 stock price range roughly between:
$800 and $1,900 per share.
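The back-of-envelope math above can be sketched in a few lines. All inputs (revenue range, multiple range, share count) are the thread’s speculative assumptions, not forecasts:

```python
def price_range(rev_low, rev_high, mult_low, mult_high, shares):
    """Implied per-share price range: revenue x multiple / shares outstanding."""
    cap_low = rev_low * mult_low      # bear case: low revenue, low multiple
    cap_high = rev_high * mult_high   # bull case: high revenue, high multiple
    return cap_low / shares, cap_high / shares

# Thread's assumed 2030 inputs: $25B-$40B revenue, 8-12x multiple, ~250M shares
low, high = price_range(25e9, 40e9, 8, 12, 250e6)
print(low, high)  # 800.0 1920.0
```

Note the bull case works out to $1,920 exactly; the thread rounds it down to $1,900. Every number here compounds: halve the revenue assumption or the multiple and the range halves with it.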
That sounds extreme...until you zoom out.
Seven years ago, hyperscale AI clusters barely existed.
Today, single deployments can involve tens of thousands of GPUs.
By 2030, million-GPU environments are realistic.
Infrastructure follows demand.
The biggest mistake investors make is assuming AI infrastructure growth will slow down.
But every new model, agent, robot, and enterprise AI system increases compute demand again.
AI builds on itself.
This is why AI infrastructure behaves differently than SaaS.
SaaS scales with users.
AI infrastructure scales with intelligence, and intelligence is becoming an industrial input.
$NBIS doesn’t need to dominate the market to win.
Even a small share of global AI compute could justify a valuation many multiples higher than today.
Between now and 2030, the key variables to watch are simple:
• GPU deployment
• Power capacity
• Revenue growth
• Utilization rates
• Operating margins
Everything else is noise.
If AI adoption continues accelerating, the limiting factor won’t be demand.
It will be infrastructure.
That’s where Nebius lives.
2030 sounds far away, but in infrastructure cycles, it’s right around the corner.
The companies laying physical AI foundations today may define the next decade of markets.