$NBIS is my largest position — and it’s finally starting to catch the attention of more investors.
I’ve been talking about it for months, so it's time to condense everything in a detailed thread.
Here’s why I believe $NBIS is one of the best opportunities in the market: 👇🏻🧵
1. Origins: From Yandex to Nebius Group
The story of $NBIS begins inside one of the most iconic tech companies to emerge from Eastern Europe: Yandex. Often dubbed the “Google of Russia”, Yandex was a digital powerhouse, dominating search, maps, ride-hailing, e-commerce, and AI in Russia and surrounding markets. At its peak, it was a $30B company and one of the most successful tech stories in the region.
Then, everything changed.
In early 2022, Russia’s invasion of Ukraine set off a geopolitical and moral reckoning for thousands of Yandex employees. Around 2,000 engineers, product leaders, and researchers — along with key members of Yandex’s founding team — made a bold decision: they would not be complicit. They chose to walk away from the Russian business, even if it meant leaving behind their homes, careers, and in some cases, their families.
This moral stand set in motion one of the most complex corporate restructurings in recent memory. Over the following two years, Yandex navigated sanctions, shareholder pressure, and mounting political scrutiny to ultimately divest all of its Russia-based assets. By mid-2024, a formal separation was completed.
And that’s how $NBIS was born.
Today, $NBIS is a completely independent company — composed of Nebius (AI cloud), Avride (autonomous driving), Toloka (data labeling), TripleTen (edtech), and a 28% stake in ClickHouse, the fast-growing open-source database platform. The group is legally and operationally severed from Yandex’s Russian operations. Its leadership and board have acquired Dutch or Israeli citizenship, ensuring full compliance with international sanctions and signaling a clean break from its origins.
While $NBIS inherits Yandex’s deep engineering DNA — particularly in AI infrastructure, distributed systems, and cloud computing — it is now pursuing a fundamentally different mission: to become a global leader in AI cloud services.
But with AI dominating headlines and investor attention, how is $NBIS still flying under the radar?
The answer lies in its unconventional path to the public markets. Due to its roots within Yandex, $NBIS bypassed the traditional IPO process entirely. There was no roadshow, no investor marketing, and virtually no institutional coverage.
In fact, according to the company’s Founder and CEO, they were caught off guard by the listing. On a Friday, they received a call from Nasdaq informing them that Yandex’s legacy listing would transition to Nebius on Monday — just three days later. The team had to scramble to meet compliance requirements and prepare investor materials with almost no advance notice.
As a result, the stock debuted with minimal visibility. Analyst coverage is still limited, and a large portion of the float remains in the hands of retail investors.
This disconnect — between the quality of the business and its lack of market visibility — is precisely what makes $NBIS one of the most compelling and asymmetric opportunities in tech today.
As the company continues to execute on its ambitious expansion plans, it's likely only a matter of time before the broader market takes notice.
2. Core Business Explained
The AI revolution is accelerating, pushing the boundaries of what’s technologically possible — but also exposing the severe limitations of today’s compute infrastructure. As demand for AI capabilities explodes, the need for purpose-built, scalable, and efficient compute infrastructure has become one of the most urgent bottlenecks in the tech industry.
$NBIS exists to help solve that.
Positioned at the cutting edge of the global AI infrastructure market, $NBIS is building the backbone of tomorrow’s AI economy. Its mission is to deliver the infrastructure, tools, and services required to support AI innovation at scale. With ambitions to scale its operations to thousands of megawatts of GPU compute capacity, $NBIS is enabling startups, enterprises, and researchers alike to build, train, and deploy cutting-edge AI models — all on a single integrated platform.
At its core, $NBIS is a next-generation AI infrastructure company, often referred to as a “neocloud.” Unlike traditional cloud providers that retrofitted their platforms for AI, Nebius was purpose-built from the ground up for AI workloads. It combines deep expertise in hardware and software development with large-scale GPU deployments to deliver a full-stack solution designed specifically for the demands of modern AI development.
A Three-Layered Architecture: Infrastructure, Platform, and Applications
$NBIS operates across three primary layers — infrastructure, platform, and applications — creating an end-to-end ecosystem that addresses every step of the AI development lifecycle.
1. Comprehensive AI Infrastructure
Unlike many cloud providers that rely on off-the-shelf hardware and outsourced services, Nebius controls the entire value chain of its infrastructure. This full-stack control translates into both performance gains and economic efficiency:
• Custom Data Centers: Engineered for energy efficiency and high-density compute, Nebius’ data centers allow for better unit economics and scalability.
• In-House Server Design: Servers are custom-built to optimize everything beyond the GPU itself — utilization and deployment speed included — giving Nebius a competitive edge in cost and performance.
• End-to-End Stack: From hardware manufacturing to cloud orchestration, Nebius owns every layer, enabling tight integration, faster innovation cycles, and better cost controls.
• Managed Services: Tools like Apache Spark, MLflow, and others are seamlessly integrated, letting users focus on development rather than infrastructure management.
2. AI-Centric Cloud Platform
At the platform level, $NBIS has developed an AI-native cloud computing environment tailored to the needs of ML and DL practitioners. The platform integrates large-scale GPU clusters, scalable object storage, and managed tools into a cohesive offering:
• AI-Optimized Compute: Support for training, fine-tuning, and inference on some of the most advanced GPUs available, including NVIDIA H100s, H200s, and the upcoming Blackwells.
• Elastic Scalability: Whether it’s a single experiment or a massive training run, users can scale their compute resources up or down with ease.
• Low Latency & High Reliability: Proprietary cloud software and in-house hardware design ensure minimal downtime and consistent performance under load.
This cloud environment gives AI developers everything they need to build and deploy models in one place — with significantly less friction compared to general-purpose cloud services.
3. AI Studio: Inference-as-a-Service
On top of its platform and infrastructure stack, $NBIS offers AI Studio, a SaaS environment that simplifies access to powerful open-source AI models via APIs. Designed for both researchers and commercial users, the AI Studio enables fast, cost-efficient deployment of foundational models across a range of use cases (a minimal API-call sketch follows the list below):
• Plug-and-Play AI: Integration with popular models like ChatGPT, Gemini, Llama, Mistral, Qwen, DeepSeek, and others.
• Wide Model Coverage: From text generation and image synthesis to embedding models for Retrieval-Augmented Generation (RAG) systems, AI Studio supports a broad spectrum of AI applications.
• Fast Model Onboarding: Nebius can integrate trending models within days or even hours — exemplified by its rapid deployment of DeepSeek models.
• Market-Leading Cost Efficiency: One of the lowest price-per-token (if not the lowest) offerings for inference currently available.
This platform is particularly well-suited for customers who don’t want to manage complex infrastructure but still want to experiment, prototype, or deploy AI applications at scale.
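To make the plug-and-play idea concrete, here is a minimal sketch of what consuming an inference service like this typically looks like from the developer's side. It assumes an OpenAI-compatible endpoint (a convention many inference providers follow); the base URL, model ID, and environment variable are illustrative placeholders, not confirmed Nebius AI Studio values.

```python
# Minimal sketch: calling an OpenAI-compatible inference endpoint.
# The base URL, model name, and environment variable are illustrative
# placeholders, not confirmed Nebius AI Studio values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.com/v1",  # placeholder endpoint
    api_key=os.environ["INFERENCE_API_KEY"],      # placeholder credential
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",     # any hosted open-source model
    messages=[{"role": "user", "content": "Explain what a neocloud is in one sentence."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```

The specific SDK matters less than the workflow: swap in one base URL and API key, pick whichever open-source model fits the job, and pay per token, with no clusters to provision or manage.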
Today, $NBIS is one of the largest non-U.S. providers of AI infrastructure. The global market for GPUs is intensely competitive: while the majority of AI chips are used internally by model developers like OpenAI, Google, Microsoft, and Meta, only a fraction — around 10–20% — are made available to public cloud users via hyperscalers such as AWS or Oracle.
The remaining 30–40% of GPU supply is fragmented across dozens of alternative players — but only a handful, including CoreWeave, Lambda Labs, Together(.)ai, Deep Infra, and Nebius, possess the technical and financial capabilities to deploy large-scale infrastructure and serve global demand. Among these, Nebius stands out for its ability to serve a wide spectrum of clients, from startups and research labs to enterprise customers and AI product builders.
As its Founder & CEO, Arkady Volozh, puts it:
“We are one of the few alternatives capable of serving the core needs of major players, supplying GPUs to startups and corporate clients looking to purchase infrastructure, and supporting smaller clients who use models deployed by us. We are giving customers the freedom to choose.”
3. Competitive Advantages
One of the most common questions customers ask $NBIS is: “How are you different from other cloud providers?” The answer lies in a combination of deep engineering talent, reliability, customization, and an obsessive focus on customer experience.
From day one, $NBIS set out to rebuild the cloud infrastructure stack from the ground up — and did so in just 12 months. This rapid execution was made possible by its world-class engineering team, many of whom previously worked at Yandex and followed the founding team to help realize a bold new vision: build an AI-native cloud platform that goes far beyond traditional compute reselling.
Here’s what sets it apart in the booming AI infrastructure market.
1. Deep Engineering Talent and Full-Stack Control
$NBIS is built by engineers, for engineers. Its nearly 400 AI/ML and cloud infrastructure engineers bring over a decade of experience in the field (within a broader tech team of ~850 professionals), giving the company a level of technical depth rarely found outside the top-tier hyperscalers. This allows $NBIS to innovate rapidly, adapt to new hardware generations, and maintain full control over its stack — from server design to orchestration software.
While many cloud providers depend on third-party OEMs like Dell or Supermicro for servers, Nebius designs its own racks and servers in collaboration with NVIDIA. These custom designs are forward-compatible with next-generation GPUs, reducing hardware obsolescence risk and improving performance per watt. Sourcing hardware from Taiwanese ODMs, Nebius bypasses traditional markups and optimizes its data centers for both speed and efficiency.
The result:
• Better thermal management and liquid cooling
• Faster deployments of new GPU models
• Lower downtime and failure rates
• Structural cost advantages — some of which are passed on to customers
This technical advantage is not just theoretical. It translates directly into a more stable, responsive cloud environment that supports high-stakes workloads like large-scale model training and real-time inference.
2. Reliability at Scale
In AI development, reliability is everything. A single node failure in a cluster training a large model can delay progress by hours — or worse, corrupt the run entirely. $NBIS has made reliability a central design principle.
To mitigate failure at every layer, $NBIS builds "auto-healing" infrastructure with fault tolerance baked into every component:
• Hardware: Custom-built servers tested in-house
• Monitoring: Real-time anomaly detection and predictive failure alerts
• Compute management: Fine-grained resource orchestration
• Kubernetes orchestration: Automated recovery at the container level
• Inference services: Built-in failover to ensure continuity
This robustness enables customers to train and deploy models with minimal interruptions — a critical factor for workloads that can run for days or weeks at a time.
The deeper customers go into the $NBIS stack, the more seamless and “auto-magical” the experience becomes. This reliability is a key differentiator, particularly as inference workloads begin to scale rapidly across industries.
3. A Customer-Obsessed Cloud
Another factor that truly sets Nebius apart is its customer-first mindset.
Rather than building a general-purpose cloud for generic workloads, $NBIS has always focused on the needs of AI developers. The company works closely with customers to understand their goals — whether training a foundation model, scaling multi-modal inference, or optimizing latency-sensitive apps — and delivers infrastructure that fits those use cases.
Crucially, this level of tailored service is a major differentiator. It's one of the key reasons $NBIS is so attractive — not only to small, AI-native startups that need deep customization from day one, but also to medium and large companies seeking flexible, high-performance infrastructure.
This philosophy extends into product design. Nebius doesn't just sell GPU time; it builds features customers didn't know they needed — like its own LLM, which continuously tests the platform and reports back optimization opportunities. This in-house AI ensures that infrastructure keeps evolving in line with user needs.
“We’re a very customer-centric cloud. The cloud experience matters to us and is a top priority almost every day. We want to build a whole AI stack. We try to determine what products our customers will need next — and deliver them first.”
This mindset drives Nebius’ roadmap and underpins new offerings like its AI Studio, which simplifies access to open-source models and offers one of the lowest cost-per-token inference platforms on the market.
4. Industry-Leading Cost and Energy Efficiency
Despite offering high-performance infrastructure, $NBIS maintains pricing that’s often 20–25% lower than the average GPU provider — a result of its vertical integration and optimized operations. Key drivers of this efficiency include:
• Full-stack ownership: Controlling everything from server design to deployment
• ODM partnerships: Avoiding OEM markups and customizing hardware for AI workloads
• Data center efficiency: Operating with a Power Usage Effectiveness (PUE) of ~1.13 — in the same class as Microsoft and Google, and better than Oracle, Alibaba, IBM, and others (a brief note on what PUE means follows this list)
• Top 5% supercomputing efficiency globally, based on energy use per unit of compute
• In-house software: Avoiding costly third-party licenses and improving automation
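For readers less familiar with the metric, Power Usage Effectiveness (PUE) is a standard industry ratio, so the following is general background rather than company-specific data:

$$\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}}, \qquad \mathrm{PUE} \approx 1.13 \;\Rightarrow\; \text{overhead} \approx 0.13 \times E_{\text{IT}}$$

In other words, for every 100 W delivered to the servers, only about 13 W goes to cooling, power conversion, and other overhead; industry-wide averages are commonly cited around 1.5.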
These cost savings don’t just benefit $NBIS — they’re passed on to customers, making high-performance compute accessible to more startups, labs, and enterprises globally.
Importantly, all of this positions the company among the most energy-efficient AI cloud providers in the world.
How Are Nebius' Data Centers So Efficient?
$NBIS' data centers combine optimized cooling with heat recovery to enhance both cost-effectiveness and environmental sustainability.
Optimized Cooling
Its Finland-based data center employs free cooling, eliminating the need for traditional chillers, water, and refrigerants. This approach not only reduces costs but also minimizes the environmental footprint.
• Higher Operating Temperatures: The data center operates at a maximum temperature of approximately 40°C, which is about 10°C higher than the typical limit set by standard hardware. This higher threshold eliminates the need for inlet air subcooling and enables slower, less energy-intensive airflows compared to conventional designs.
• Broader Workload Range: The center functions effectively under 100% workload within a temperature range of 18°C to 40°C, in contrast to most data centers that aim to stay below 27°C due to server architecture constraints. This capability eliminates the energy demands of subcooling, significantly improving energy efficiency.
• Energy Savings: By operating without subcooling requirements, the data center achieves substantial energy savings, aligning with $NBIS' focus on cost efficiency and sustainability.
Heat Recovery
In addition to optimized cooling, $NBIS has implemented an advanced heat recovery system that repurposes waste heat for municipal heating, creating additional value for the surrounding community.
• Regional Innovation: The Finland data center is a pioneer in the region, using server-generated heat to meet local heating needs.
• Energy Reuse: Between 2020 and 2023, the center reused over 80,000 MWh of server heat for municipal heating, equivalent to the energy consumed by around 2,500 Finnish households for heating over four years (a quick sanity check of this figure follows the list).
• Meeting Heating Needs: More than 50% of the annual heating requirements of the nearby town were covered by this heat recovery system.
• Cost Savings for Households: The system contributed to household heating cost reductions of up to 12%, while offsetting approximately 30% of the data center's electricity costs.
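A quick sanity check of that household equivalence, using only the figures cited above:

$$\frac{80{,}000\ \text{MWh}}{4\ \text{years} \times 2{,}500\ \text{households}} = 8\ \text{MWh per household per year,}$$

which is in the right ballpark for a Finnish home's annual heating demand, so the stated equivalence is internally consistent.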
The company plans to replicate these best practices as it expands its data center capacity, positioning it as a leader in sustainable technology infrastructure. These initiatives not only generate cost savings but also create long-term value for both $NBIS and the communities it serves.
In a market that’s moving fast and expanding faster, $NBIS isn’t just keeping up — it’s setting the standard for what AI-native infrastructure should look like. This blend of vertical integration, performance, reliability, cost-efficiency, and long-term vision makes $NBIS one of the most compelling players in the global AI infrastructure race.
4. Competitive Advantages – Customer Stories
The most practical way to understand $NBIS' competitive edge is by listening directly to its customers. Their experiences offer firsthand insight into what makes the platform stand out — not just on paper, but in real-world deployment.
Interview with Higgsfield AI
Higgsfield AI, one of the top AI video startups, chose Nebius after evaluating several GPU cloud providers. According to its Founder:
• $NBIS is extremely start-up friendly. Developers often express frustration with larger cloud providers’ lack of clarity around pricing and usage terms. Nebius, on the other hand, offers full transparency and fair, predictable pricing — especially valuable for early-stage companies with dynamic workloads.
• Flexible consumption, no rigid quotas. Unlike hyperscalers that demand yearly GPU commitments, $NBIS offers spend-based discounts. This enables start-ups to scale without overcommitting.
• Enterprise-grade infrastructure. Despite being a newer player, Nebius delivers infrastructure and uptime reliability on par with Google Cloud and AWS. Importantly, hyperscalers typically aren't interested in start-ups — they prefer higher-volume customers.
• Token-based pricing and a powerful AI Studio. Developers can access and run a wide range of models — including ChatGPT and DeepSeek — all within a streamlined platform.
• Exceptional support. “The Nebius team is always available. Compared to other providers, their responsiveness is just on another level.”
• Simple, short contracts. While many providers require dense 200-page legal documents, $NBIS keeps contracts brief and transparent — sometimes with just 3-5 pages.
$NBIS provides a level of customization and flexibility that no other cloud provider offers.
Interview with Dylan Patel, Founder of SemiAnalysis
As an industry analyst, Dylan Patel offers a high-level view of where $NBIS fits in the cloud ecosystem:
• The cloud market is fragmenting. Today’s landscape includes hyperscalers, neoclouds, and specialized compute providers. Each has its own stack and engagement model — bare metal, Slurm, Kubernetes, APIs, inference services, and more. However, Nebius supports the full spectrum. While many providers focus on just one layer of the stack, $NBIS is built to serve across use cases — from researchers to production teams — making it uniquely adaptable.
• Engineering talent is a core advantage. $NBIS designs its own racks, partners directly with NVIDIA, and builds systems with next-gen hardware in mind — avoiding obsolescence even as GPU architectures evolve rapidly.
• Custom infrastructure = better economics. By sourcing from Taiwanese ODMs and optimizing cooling systems in-house, $NBIS achieves higher performance at lower cost — savings it can pass to customers.
Interview with Lynx Analytics
Lynx Analytics is a global AI firm working across life sciences. With customers like Genentech and AstraZeneca, their workloads demand both precision and compliance.
• $NBIS enables fast and scalable GPU clusters. This was essential for accelerating matrix-heavy computations used in clinical and R&D workflows.
• Better reliability. Lynx suffered multiple data losses with a prior provider — an issue eliminated with $NBIS.
“We lost our data twice in seven months with our previous provider. We needed something more stable, scalable, and GPU-compatible. The ability to run Spark on GPU clusters and connect CPUs and GPUs in the same environment was crucial. Nebius checked all those boxes.”
• GDPR-compliant infrastructure. Nebius' European clusters ensured data privacy and compliance for sensitive experiments.
• Better user experience: “In my experience, AWS works — but I dread opening the UI. You always have to edit some JSON file just to get basic things going. Nebius is different. It was designed from the ground up for AI workloads. It’s simpler, more intuitive, and overall just more enjoyable to use.”
Interview with CentML
CentML is focused on cost- and performance-optimized infrastructure for large-scale AI and computer vision workloads.
• Bare-metal access. CentML runs its own stack directly on $NBIS hardware, unlocking maximum efficiency.
• Consistent performance. Many newer providers fail to deliver reliable GPU uptime. $NBIS stood out for its stability and responsiveness.
• Early access to NVIDIA hardware. $NBIS customers gain access to cutting-edge GPUs before they’re widely available — allowing teams like CentML to stay ahead of the curve.
• Flexible contracts. No need to lock into year-long reservations. CentML appreciated Nebius’ short-term and on-demand pricing options that didn’t sacrifice affordability.
“Nebius is the best performance-per-dollar solution currently available in the market.”
Interview with KissanAI
KissanAI is reshaping agriculture through AI, helping agribusinesses better serve farmers with language-specific and literacy-aware models.
• Reliable, accessible compute. As a bootstrapped startup, KissanAI needed an infrastructure partner that could offer enterprise-grade resources without high upfront costs. $NBIS delivered.
• Training and inference on one platform. This simplified operations and removed the need to split workloads across providers, saving the team both time and resources.
• Intuitive and user-friendly experience. Even non-technical users could deploy and manage resources easily.
• Best-in-class pricing, along with added services and features that typically come at a premium with other providers. Combined with fast, responsive support (often slow or unresponsive elsewhere), $NBIS became a critical enabler of KissanAI’s mission.
Interview with Captions
Captions develops AI tools for creative video storytelling. Its flagship foundation model, Mirage, required stable, large-scale GPU infrastructure to train effectively.
• Significantly improved reliability. $NBIS enabled smoother model training, removing the debugging headaches that plagued earlier runs.
• Proactive support. The team’s responsiveness was crucial — especially during scaling and testing.
• Global reliability. With a team based in New York, having around-the-clock support ensured uptime and peace of mind.
“We know we can trust them. Having a partner we can truly rely on has simplified our lives. We know that if something crashes in the middle of the night, their team is there to quickly recover it.”
Other Customer Stories
Beyond these core interviews, many more customers are benefiting from Nebius’ AI-first infrastructure:
Chatfuel: Leverages Nebius’ infrastructure and Llama-405B models to improve chatbot performance, optimizing training costs and real-time efficiency while enabling quick deployment.
London Institute for Mathematical Sciences (LIMS): Uses Nebius’ scalable, reliable infrastructure for advanced research on LLMs, enhancing model training and data processing capabilities.
Positronic Robotics: Uses Nebius' virtual machines with NVIDIA H100 GPUs to train AI models for robotic control systems, aiding in the development of intelligent cleaning robots.
SynthLabs: Partners with TractoAI (owned by Nebius) to simplify training infrastructure, accelerating the release of the Big Math dataset and improving model training using high-end GPUs.
Krisp: Reduced model training time by 50–80% by switching to Nebius’ NVIDIA H100 GPUs, enhancing AI models for noise cancellation and speech recognition.
Dubformer: Relies on Nebius for AI dubbing and localization, handling vast audio datasets and ensuring continuous 24/7 model training for improved efficiency.
Unum: Streamlined the training of multimodal models and successfully open-sourced several models, advancing research in compact AI models.
TheStage AI: Optimizes inference performance with Nebius’ GPU instances, ensuring scalability and reliability for model evaluation and deployment.
Recraft: Uses Nebius to train a generative AI model with 20 billion parameters, overcoming challenges and achieving benchmark-breaking performance in AI design.
Converge Bio: Uses Nebius to accelerate AI-driven drug discovery by training LLMs on proprietary biomedical datasets. What used to take weeks is now done in days.
$NBIS is clearly a standout choice for developers, and it’s easy to understand why. Their focus on flexibility, reliability, cost-efficiency, customer experience, and top-notch customer support sets them apart in a crowded field. By building their platform from the ground up specifically for AI, they avoid the pitfalls of retrofitting older cloud technologies. This approach allows them to deliver exactly what AI developers need, whether that’s startups looking for fair terms and transparency or enterprises seeking cutting-edge infrastructure.
Their early adoption of NVIDIA’s latest innovations, combined with their commitment to engineering excellence, ensures developers always have access to the best tools available. Plus, their customer-centric mindset means they’re not just selling compute power — they’re partnering with developers to help them succeed.
$NBIS proves time and again that they’re one of the go-to choices for anyone serious about AI development.
5. NVIDIA: Client, Partner, and Shareholder
Few companies in the AI infrastructure space can claim NVIDIA as a client, partner — and shareholder.
$NBIS is one of them.
According to NVIDIA’s most recent 13F filing, the company holds ~1.2M shares of $NBIS, underscoring a strategic relationship that extends far beyond a typical vendor-customer dynamic.
A Longstanding Strategic Partnership
The roots of the relationship go back over a decade. Yandex — Nebius’ predecessor — was historically NVIDIA’s largest customer outside the U.S. and China, laying the foundation for deep technical and operational integration. That collaboration has only strengthened post-spinout.
Today, NVIDIA and $NBIS are strategically aligned across three fronts:
1. Product Collaboration
$NBIS is one of the first cloud providers globally — and the first in Europe — to gain early access to NVIDIA’s most advanced hardware, including the Blackwell and Blackwell Ultra GPU platforms. This access allows Nebius to build systems tailored for the next generation of agentic and reasoning-based AI.
2. Platform Integration
The Nebius platform is built from the ground up using NVIDIA’s accelerated computing stack — from A100 and H100 to H200 and B200 — giving customers full-stack reliability for both training and inference.
3. Joint Go-to-Market and Ecosystem Participation
In June 2025, $NBIS was named a Reference Platform NVIDIA Cloud Partner, joining an elite group of infrastructure providers offering validated, regionally impactful AI cloud services. This designation brings tighter integration across NVIDIA’s hardware, software, and deployment playbooks, with benefits including:
• Access to pre-validated reference architectures,
• Seamless deployment of NVIDIA-optimized clusters,
• Enterprise-grade reliability and consistency across global data centers.
Next-Gen Hardware Access
Thanks to its NVIDIA partnership, $NBIS is able to move faster than most competitors when it comes to integrating cutting-edge hardware:
• Blackwell Platform (2025): $NBIS will be the first European cloud provider to offer NVIDIA’s energy-efficient Blackwell GPUs to customers, enabling higher performance per watt and lower total cost of ownership for large-scale AI workloads.
• Blackwell Ultra & GB300 NVL72: $NBIS will also be among the early adopters of the NVIDIA GB300 NVL72 — a rack-scale system of 72 GPUs announced by Jensen Huang during his keynote at the most recent GTC conference. These instances are purpose-built for AI agents, multi-modal reasoning, and physical simulation workloads, and will be available to Nebius customers by Q4 2025.
To support the rollout of these next-gen systems, $NBIS is making significant infrastructure investments:
• New Jersey Data Center: The company’s largest data center (under construction), with capacity up to 300 MW and fully dedicated to Blackwell’s architecture.
• Kansas City Cluster Expansion: Originally built with thousands of Hopper GPUs, this flagship U.S. cluster will be upgraded with NVIDIA HGX B200 systems to support high-throughput AI workloads with minimal latency.
Once complete, these two sites will position $NBIS as a major AI infrastructure provider in North America.
NVIDIA Dynamo & the Software Stack
Beyond hardware, $NBIS is also working with NVIDIA on the software layer. The company recently joined the ecosystem of partners for NVIDIA Dynamo, an open-source inference framework built to streamline and scale GenAI deployment across distributed clusters.
This move allows Nebius to:
• Offer low-latency inference with dynamic model scaling
• Simplify model deployment pipelines for customers
• Drive better performance at the orchestration and runtime level
These types of collaborations are key to differentiating $NBIS not just as a hardware provider, but as a full-stack AI platform.
All in all, the relationship with NVIDIA gives $NBIS:
• Hardware priority (including pre-release access to chips),
• Engineering collaboration (rack design, thermals, reliability),
• Ecosystem inclusion (Dynamo, Partner Network),
• And most importantly: credibility in a hyper-competitive space.
➡️ This article is over 11,000 words, so reading it as a thread might be a bit overwhelming.
Make sure to check the link in my bio for a cleaner version.
And don’t forget to subscribe to my newsletter — I share articles like this every week!
6. Nebius vs. CoreWeave
In recent months, anticipation around CoreWeave’s IPO initially served as a potential catalyst for $NBIS, given the companies’ shared focus on AI-focused cloud infrastructure. That narrative, however, took a sharp turn as market conditions changed.
$CRWV had initially aimed to capitalize on the AI frenzy by targeting a $35B valuation. But as macro sentiment weakened, the company was forced to revise its valuation down to $23B — and even then, the IPO was undersubscribed by institutional investors. The lackluster reception raised broader concerns for comparable players, including $NBIS, and sparked investor questions about the near-term demand outlook for neoclouds.
Further compounding skepticism were emerging concerns about CoreWeave’s capital structure — specifically, its heavy debt load — and its significant revenue concentration, with Microsoft accounting for over 60% of 2024 revenue. While these concerns are legitimate, they do not apply to $NBIS.
Below is a breakdown of the key strategic differences between these two companies — differences that will likely define their respective long-term trajectories.
1. Engineering Talent and Hardware Design
As we discussed before, a core differentiator for $NBIS is its deep bench of engineering talent and full-stack control of its infrastructure. While both companies invest in GPU clusters, $NBIS designs its own hardware, including racks purpose-built in collaboration with NVIDIA for next-gen chips.
Engineering capability is often underestimated, but it is the cornerstone of durable infrastructure efficiency. The quality of Nebius’ hardware design will matter more over time as generational leaps in GPU performance compress replacement cycles and make poor hardware decisions more costly.
2. Infrastructure Model: Mostly Reseller vs. Vertically Integrated Operator
CoreWeave doesn’t own its infrastructure assets apart from the GPUs themselves; instead, it rents colocation space in third-party data centers to scale rapidly. While this accelerates go-to-market speed, it comes at a cost: greater reliance on third parties, limited hardware control, and increased exposure to power, cooling, and connectivity constraints in shared data centers.
$NBIS, by contrast, pursues vertical integration:
• It builds and owns most of its infrastructure.
• It designs hyperscale data centers from the ground up, with AI workloads in mind.
• It optimizes rack-to-chip performance and power utilization holistically.
This control over infrastructure allows $NBIS to scale incrementally and efficiently while mitigating external risks. By owning the stack, it also avoids the rent escalations and logistical bottlenecks that often plague GPU resellers.
The result: better uptime, lower latency, and reduced cost per unit of compute — benefits $NBIS can either retain as margin or pass to customers.
3. Growth Strategy: Disciplined Scaling vs. Debt-Fueled Expansion
By designing and building its own data centers, $NBIS can fully leverage its hardware optimizations. More importantly, its modular architecture allows it to scale capacity gradually, based on demand — a stark contrast to CoreWeave’s aggressive expansion.
CoreWeave rushed to build capacity as fast as possible to secure large customers like Microsoft. While this strategy enabled rapid growth, it also left the company burdened with over $8B in debt. This becomes even riskier when that debt is secured against its GPUs, fast-depreciating assets that carry high financing costs.
In contrast, $NBIS follows a pragmatic and financially disciplined approach, avoiding overbuilding infrastructure that could remain underutilized. With $2.4B in cash and no debt, the company is in a far stronger financial position. While $NBIS does plan to raise additional capital to support its expansion, the exact approach remains uncertain. The company could opt for share dilution, potentially through a private placement with strategic investors, as it did with NVIDIA in December. Alternatively, it could raise capital via debt financing or by selling a stake in one of its subsidiaries. Regardless of the method, any new funding is expected to be strategic and beneficial for long-term growth, especially as it would mean that demand continues to boom (otherwise, $NBIS wouldn’t need more than its current cash pile).
This approach reduces downside risk, especially in a capital-intensive sector where missteps can be existential. CoreWeave, on the other hand, is far more vulnerable to market disruptions.
4. Customer Base and Revenue Concentration
CoreWeave relies heavily on large-scale AI clients, with Microsoft accounting for 62% of its revenue in 2024. This dependence on a single customer introduces significant risks, especially as Microsoft ramps up its own infrastructure investments.
$NBIS, on the other hand, has a more diversified approach, targeting a broad range of customers, from small startups and developers to large enterprises. This gives the company a more stable revenue base, as it’s not reliant on one anchor client. For smaller businesses, $NBIS offers flexible, scalable AI cloud solutions, enabling them to access powerful compute resources without large upfront costs. At the same time, $NBIS also provides customized, high-performance infrastructure for larger enterprises that need tailored AI solutions.
Additionally, and as explained before, $NBIS focuses on developer experience, with tools like Nebius AI Studio simplifying AI workload deployment. This customer-centric approach makes it appealing to both emerging companies and established businesses, strengthening its long-term market position while reducing exposure to risks from any single client. In contrast, CoreWeave’s focus on GPU-centric infrastructure for high-end workloads limits its customer pool to a narrower segment, mainly large enterprises with substantial budgets and specific needs.
Both Nebius and CoreWeave are positioned to benefit from the AI infrastructure boom, but they are executing fundamentally different playbooks.
7. Current Numbers and Expansion Plans
As I said, $NBIS is entering its explosive growth phase with a rock-solid balance sheet: over $2B in cash and zero debt. This financial strength gives the company the flexibility to aggressively scale while maintaining operational stability. Still, with large-scale expansion on the horizon, additional capital raises are likely — though any such moves are expected to be strategic.
Explosive ARR Growth
$NBIS has delivered exceptional growth in Annual Recurring Revenue (ARR), skyrocketing from $21M at the end of 2023 to an expected $220M+ as of March (earnings next week) — a 10x increase in just fifteen months. This performance has been driven by:
• Customer Growth: Active clients surged from around 10 to over 40 (as of December), with continued momentum as the company expands into new verticals.
• Massive Capacity Ramp-Up: GPU capacity scaled from ~2,000 to over 35,000 units, unlocking significantly more workload throughput (and with plans to expand well beyond this).
• Customer Scaling: Existing customers are rapidly increasing their usage, indicating strong product-market fit and growing compute needs.
2025 Outlook
$NBIS aims to maintain its momentum with the following 2025 targets:
• ARR: $750M to $1B (I believe the company might even achieve $1B+, considering that its GPU capacity has been sold out and its expansion plans point to a capacity large enough to surpass that guidance)
• Revenue: $500–700M, with Adj. EBITDA turning positive
• CapEx: $600M to $1.5B, primarily allocated to NVIDIA GB200 GPUs and to building new data centers across owned, colocation, and greenfield sites in Europe and the U.S.
Medium-Term Vision
Beyond 2025, $NBIS is positioning to generate multibillion-dollar annual revenue. Key growth drivers include:
• AI Cloud & GPUaaS Market Expansion: Capturing more share as demand surges for scalable AI compute.
• Continued Infrastructure Investment: Ongoing data center builds and GPU procurement to support future workloads.
• Customer Base Diversification: Growing across enterprise, mid-market, and developer segments.
• Software Product Innovation: Expanding high-margin services like API access to open-source models and turnkey AI tools. So far, these value-added services have been given for free to its customers, helping to attract demand. However, as the company deepens its relationship with customers, its software solutions are expected to become a high-margin revenue source.
Global Expansion Strategy
In the U.S., $NBIS is shifting into high gear. According to its CEO Arkady Volozh:
"The majority of AI consumption is happening in the U.S., which is why we’re building our sales and marketing structure there. We’ll continue adding hundreds of megawatts in Europe, but we’ll be expanding much more aggressively in the U.S."
To that end:
• New Jersey Data Center: A custom-designed facility scalable to 300 MW, with the first phase online by summer 2025. This alone will exceed Nebius' initial 100 MW U.S. capacity target for the year.
• Kansas City: A second deployment phase will be delivered by Q2 2025.
In Europe, investments are equally ambitious:
• $1B+ Infrastructure Program: Includes the tripling of the flagship Finnish facility (to 75 MW, up to 60,000 GPUs), new greenfield developments, and colocation builds like the new site in Iceland and the Paris GPU cluster — one of the first in Europe to offer NVIDIA H200s and, in the future, the new Blackwell platform.
And in Southeast Asia, Singapore is emerging as a strategic hub, where $NBIS will seek to build a new data center as well.
“Singapore’s location, business-friendly climate, and AI-forward policies give us a strong base to help local ecosystems grow. We’re partnering with governments and enterprises to support AI adoption and unlock regional opportunities.”
$NBIS is executing one of the most aggressive and well-funded expansion plans in the AI infrastructure space — with disciplined capital allocation, strong demand signals, and global momentum.
Just a few months ago, its medium-term power capacity target was 240+ MW. Now, the company has already secured over 400 MW — and it’s not planning to stop there:
“With New Jersey, we now have secured expansion capacity to over 400 MW. And we are actively reviewing options to extend this pipeline further as we seek to grow aggressively to multiples of where we are today.”
Its ability to scale both compute and revenue, while entering new markets and maintaining high performance, suggests the company is on track to become a top-tier global hyperscaler in the AI era.
8. Market Opportunity: A Large And Rapidly Expanding TAM
The total addressable market for AI infrastructure is witnessing explosive growth, and $NBIS is well-positioned to capitalize on this trend. According to its internal estimates:
• The TAM is projected to grow from $33B in 2023 to over $260B by 2030, representing a CAGR of 35% (a quick check of this rate follows the list). This exponential growth is fueled by the increasing adoption of AI across industries and the rising demand for compute-intensive solutions tailored to AI workloads.
• A key driver of this demand will be inference workloads, which are expected to constitute 64% of AI server spending by 2027, up from 34% in 2023. As AI applications transition from development (training) to deployment (inference), $NBIS' comprehensive AI infrastructure is uniquely suited to meet these evolving needs.
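As a quick check, those two endpoints do imply roughly that growth rate (2023 to 2030 spans seven compounding years):

$$\left(\frac{\$260\text{B}}{\$33\text{B}}\right)^{1/7} - 1 \approx 0.34, \;\text{i.e. roughly a 34–35\% CAGR.}$$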
External factors contributing to this opportunity include the rapid expansion of GPU-as-a-Service and AI cloud markets, which are anticipated to grow eightfold over the next seven years. This creates an immense runway for growth for infrastructure providers like $NBIS.
Internally, $NBIS is leveraging its competitive advantages to capture this market opportunity. As the generative AI market grows, $NBIS is not just keeping pace but actively shaping the ecosystem with its focus on high-performance, cost-efficient, and sustainable AI infrastructure.
IMO, the company is primed to capture a significant share of this booming market.
9. Bear Cases Addressed
The DeepSeek Panic: Is Demand for AI Infrastructure Slowing?
When DeepSeek was released, $NBIS plunged nearly 40% in a single session. The market panicked, believing DeepSeek’s surprisingly low development costs signaled a drastic decline in future infrastructure needs. The fear? That building AI models had suddenly become cheap — and the world wouldn’t need nearly as much GPU compute or cloud infrastructure.
Every AI infrastructure stock took a hit. But less than three weeks later, $NBIS more than doubled, hitting an all-time high above $50/share. Why?
First, the initial concerns were overblown. DeepSeek’s claims about development costs were soon challenged. As Big Tech earnings rolled in, they reaffirmed multi-year CapEx plans for AI infrastructure, highlighting a long-term trend — not a short-term bubble.
Second, even if compute becomes more efficient, demand may rise because of that — not in spite of it. This is a classic example of Jevons’ Paradox: when a resource becomes cheaper or more accessible, consumption tends to increase. In other words:
Cheaper infrastructure = faster AI adoption
That’s bullish for $NBIS. Their platform caters especially well to smaller AI developers, and they’ve already integrated DeepSeek and every major open-source LLM released since. Developers can choose the model they want and run it seamlessly.
Bottom line? The DeepSeek panic was a massive overreaction.
At NVIDIA GTC 2025, Jensen Huang projected that global data center CapEx would exceed $1T by 2028, doubling from 2024 levels. He and Sam Altman both emphasized that new models require ~100x more GPU power than previous ones. That’s not a slowdown — that’s an inflection point.
“CapEx from hyperscalers might be slowing — will that hurt Nebius?”
This is a common concern. But it’s important to separate hardware spend from compute demand. Even if global CapEx moderates in the short term, the underlying demand for compute continues to grow. AI infrastructure isn’t like traditional IT — it’s not cyclical in the same way. $NBIS expands based on real, observable demand, giving it flexibility to pace growth without overextending.
Meanwhile, competitors like CoreWeave are much more vulnerable due to their high debt loads and customer concentration. Even small shifts in interest rates or AI spending could put serious pressure on their business model.
“But what about the competition coming from hyperscalers?”
AWS, Azure, and Google Cloud will always be giants — but they often serve different needs. Hyperscalers prioritize scale and broad applicability. Neoclouds like $NBIS serve the high-performance, high-compute, AI-native workloads that hyperscalers can’t always optimize for.
Developers often choose neoclouds for several reasons, such as the ones outlined in the section “Competitive Advantages – Customer Stories”.
And it’s not just startups. Even enterprise customers are being underserved by hyperscalers. During Amazon’s most recent earnings call, CEO Andy Jassy admitted that AWS' AI cloud business could be growing faster — if not for capacity constraints. That’s telling. If the world's largest cloud provider can’t meet demand, there's a massive opportunity for neoclouds to step in.
Importantly, most hyperscalers build infrastructure primarily for internal use. That leaves a wide-open market for companies like $NBIS.
So far, $NBIS serves mostly AI-native startups. But that’s about to change. Over the next 12–24 months, a wave of enterprise adoption is expected, and $NBIS is already adapting its platform to support those broader use cases.
“CapEx becomes obsolete as chips evolve — isn’t that risky?”
This is a common myth. GPUs don’t suddenly become obsolete when new versions are released. For instance, $NBIS still uses H100s for inference — and they were sold out as recently as March (and not at a 90% discounted price as many people claim).
More importantly, $NBIS' vertical integration helps future-proof its hardware. As I explained before, the company works directly with NVIDIA to design next-gen compatible racks, reducing the long-term risk of costly upgrades.
That level of engineering depth is a key advantage over other neoclouds.
“Nebius isn’t profitable — isn’t that a problem?”
The current payback period for Hopper GPUs is around 2.5 to 3 years — which is in line with industry norms (a rough, illustrative calculation is sketched below).
And keep in mind, $NBIS has been offering software and platform services for free to accelerate customer onboarding and usage. As monetization ramps up and software revenue kicks in, payback periods should shorten meaningfully.
The company expects to eventually generate EBIT margins of 30%.
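For intuition on where a 2.5–3 year payback comes from, here is a rough, illustrative calculation; the per-GPU capex and hourly rate are assumptions chosen for the example, not company-disclosed figures:

$$\text{revenue per GPU per year} \approx \$2.0/\text{GPU-hr} \times 8{,}760\ \text{hr} \times 70\%\ \text{utilization} \approx \$12{,}300$$

$$\text{simple payback} \approx \frac{\$30{,}000\ \text{assumed capex per deployed GPU}}{\$12{,}300\ \text{per year}} \approx 2.4\ \text{years}$$

Layering in power, colocation, and other operating costs pushes that figure toward the 2.5–3 year range cited above, while the software monetization mentioned just above would pull it back down.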
“Isn’t AI infrastructure just going to be commoditized?”
Many investors believe this, but I strongly disagree — at least for the foreseeable future. The industry is still dealing with a chronic supply-demand imbalance. Just ask OpenAI, NVIDIA, Amazon, or any other big tech company — demand continues to outpace supply, and that gap isn’t closing anytime soon.
This isn’t a commodity market. Not when:
• Supply chains are still constrained
• New models demand 100x more compute
• Enterprises are only just beginning to adopt AI at scale
The winners will be those who can deliver specialized, reliable, and high-performance infrastructure at scale — and that’s exactly what Nebius is building.
10. Subsidiaries: Avride
Avride is $NBIS' autonomous mobility subsidiary and one of the most capital-efficient players in the self-driving ecosystem. Spun out from Yandex’s self-driving division — originally established in 2016 — Avride represents nearly a decade of R&D in AI-powered transportation. The company is advancing both autonomous vehicles and sidewalk delivery robots using a shared tech stack, giving it a unique dual-focus advantage in passenger mobility and last-mile logistics.
Core Capabilities and Technology
Avride is a U.S.-based company that operates globally with R&D hubs in the U.S., Israel, Serbia, and South Korea, supported by a team of 200+ engineers and developers focused on autonomous systems. Its capabilities include:
• Self-Driving Vehicles: Avride has developed fully autonomous cars for ride-hailing, food delivery, and logistics, with testing conducted across the U.S., South Korea, and EMEA. Its robotaxi fleet has already completed 47,000+ rides, logging over 22 million autonomous kilometers with zero serious accidents.
• Delivery Robots: Purpose-built for urban sidewalks and indoor spaces, these autonomous rovers can travel at up to 8 km/h and cover 55 km per charge. To date, they have delivered 200,000+ client orders globally, proving their viability across food delivery, grocery, and small parcel logistics.
Note: This data is from a few months ago, so the updated numbers are even better.
Key Differentiators
• Efficiency & Safety: Avride boasts a better utilization rate than peers like Waymo or Cruise, with a superior safety record across varied terrain and weather conditions.
• Capital Efficiency: Avride has operated with remarkable discipline, spending just $310M to date — a fraction of its competitors — with an estimated $150M more needed to reach profitability.
• Scalable Manufacturing: Avride designs its robots in-house and partners with a manufacturer in Taiwan, allowing rapid, cost-efficient scaling as demand rises.
Strategic Partnerships and Rollout Plans
Avride is strategically expanding in controlled and high-demand environments, supported by notable partnerships:
• Uber: Collaboration for both ride-hailing and autonomous deliveries, including robotaxi launches in Dallas (using Hyundai IONIQ 5s) and delivery pilots already happening in multiple U.S. cities.
• Grubhub: Deployed 100+ robots at The Ohio State University, averaging over 1,000 deliveries/day. Expansion is planned across Grubhub’s network of 360 campuses, reaching 4.5M+ students.
• Hyundai: Joint plan to deploy up to 100 autonomous IONIQ 5s in 2025, with initial services launching in the U.S. later this year.
• Rakuten: Partnership for robot deliveries in Tokyo, with future plans to integrate Avride’s tech into Rakuten Ichiba, Japan’s major e-commerce platform.
Geographic Expansion
Avride has already made significant inroads across Asia, Europe, and the U.S.:
• In South Korea, it became the first company approved to test autonomous vehicles on all public roads nationwide.
• In Japan, robots have received regulatory approval and started deliveries in central Tokyo with Rakuten.
• In the U.S., Avride is now live in Ohio, New Jersey, and Kansas City, with more deployments expected in 2025.
Growth Outlook
Avride’s near- and mid-term roadmap includes:
• 2024: Deploy 10–20 autonomous vehicles and over 100 delivery robots for R&D and pilot programs.
• 2025: Scale to 100+ self-driving cars and 1,000+ robots, targeting contribution profit breakeven.
• 2026+: Begin unsupervised deployments in multiple cities, aiming for double-digit million-dollar revenues and a fleet of 200+ cars and 3,000+ robots.
Valuation Considerations
Valuing Avride is inherently complex due to its early monetization stage and dual focus on both autonomous vehicles and delivery robots. However, a few peer benchmarks help frame the opportunity. Waymo — the most mature player in the space — was last valued at $45B. While Avride is earlier in its commercialization journey, the technology gap is narrowing, especially considering Avride’s capital efficiency and growing real-world deployments.
A more relevant comparison is Motional, valued at $4.1B following Hyundai’s increased stake in 2023. Notably, Avride was founded four years earlier, has completed significantly more autonomous rides, and is diversified across both ride-hailing and sidewalk delivery — an area Motional has not entered. These factors suggest Avride could warrant a higher valuation.
That said, using Motional's $4.1B valuation as a baseline seems a reasonable approach. With strong partnerships, low capital burn, and a growing commercial footprint, Avride has the potential to reach a multi-billion-dollar valuation — possibly into the tens of billions — over the next 5–10 years.
According to $NBIS' recent 20-F filing, the company is “actively seeking third-party investment into Avride”.
11. Subsidiaries: Toloka
Toloka is $NBIS' AI data solutions subsidiary, specializing in high-quality training data for the development and scaling of AI — particularly in the era of Generative AI. In 2024, Toloka successfully pivoted its business model to focus more sharply on serving foundational model producers and GenAI companies, which has already yielded impressive results.
Revenue grew 140% YoY in FY2024, driven by both customer expansion and deeper wallet share across product lines. In Q4, Toloka added several of the world’s largest foundational model producers to its client portfolio. The company projects to more than double its revenue in 2025, reaching $50–70M, and continues to diversify across both its Classic and Evolved GenAI solutions.
Toloka’s customer base includes big tech firms like Microsoft and ServiceNow, along with top-tier AI startups. Its reputation is built on deep ML expertise, robust research partnerships, and an ability to consistently deliver the high-quality data required to train cutting-edge AI models.
Recently, Toloka secured a $72M strategic investment led by Bezos Expeditions (the investment arm of Jeff Bezos) and Mikhail Parakhin, CTO of Shopify. Although the valuation of this round wasn’t disclosed, the investment marked a significant shift in Toloka’s governance structure. As part of the deal, Nebius voluntarily gave up majority voting power, meaning Toloka will now be deconsolidated from $NBIS’s financial statements. However, Nebius remains a major shareholder and retains a strong economic interest in Toloka’s future growth.
Toloka’s CEO noted that another funding round is expected in the near future — a potential catalyst that could further elevate the company’s visibility and valuation.
Valuation Considerations
While private companies in this space, such as Scale AI, often command revenue multiples between 10–20x, I took a conservative approach and applied a 5x P/S multiple. Using the midpoint of Toloka’s 2025 revenue guidance ($60M), this would imply a valuation of approximately $300M.
Granted, using a P/S multiple has its limitations, especially when applied to early-stage companies with high growth potential. However, in the absence of disclosed deal terms, it provides a reasonable baseline.
If that seems high, it’s worth revisiting the scale and importance of the AI data infrastructure market — and Toloka’s growing role within it.
12. Subsidiaries: TripleTen
TripleTen is a leading EdTech platform based in the United States, recognized for its high job placement rates, strong student satisfaction, and consistently positive graduate outcomes. The platform delivers an AI-powered e-learning experience that combines affordability for learners with operational scalability for the business. As it continues to grow, TripleTen is expanding across both B2C and B2B markets — with a particular focus on the U.S. and Latin America — unlocking new revenue streams and long-term growth opportunities.
In Q4 2024, TripleTen reported a 100% YoY increase in student enrollment, driven by strong performance across both its U.S. and LATAM operations. The company’s tuition rates remain among the most affordable in the market for comparable bootcamp-style programs, giving it a compelling value proposition in the competitive online education landscape.
Key Differentiators
• Affordable Pricing: TripleTen leverages AI-driven automation and diversified trainer sourcing to keep training costs low, allowing it to offer courses at highly competitive price points.
• Top-Rated in the U.S.: 87% of graduates find employment within six months — well above industry averages. High levels of student satisfaction continue to fuel strong word-of-mouth growth.
• Proprietary Technology: The company has developed a robust tech stack that enables fast, low-cost course launches and localization, facilitating rapid market expansion.
• Personalized Learning Support: Students receive expert tutoring and hands-on projects that mirror real-world job challenges, boosting skill acquisition and job readiness.
Growth Levers
• B2C Expansion: TripleTen is launching new programs in high-demand areas such as Cybersecurity Analysis and UI/UX Design, targeting emerging job market needs.
• Geographic Scaling: LATAM is a major growth frontier. Deep localization and tailored course offerings are enabling TripleTen to rapidly scale in the region.
• B2B Product Development: The company is creating corporate bootcamps in both English and Spanish, along with role-specific assessments for data and development positions.
• Alumni Network Utilization: TripleTen is strengthening its alumni community to drive enrollment efficiency and lower customer acquisition costs.
• Financial Efficiency: The business operates with a highly efficient cost structure where CAC is fully recouped via initial student payments, ensuring strong unit economics.
Valuation Considerations
While TripleTen is likely the least significant of $NBIS' three main subsidiaries in terms of near-term valuation impact, it remains a valuable strategic asset — especially given its potential to feed skilled talent into $NBIS' ecosystem.
TripleTen is projected to more than double its revenue in 2025, with estimated top-line reaching $40–60M. Comparable education technology firms in the private market typically trade at 1–2x revenue multiples, though they often grow at much slower rates.
To stay conservative, we apply a 1.5x multiple to the midpoint of revenue guidance ($50M), arriving at a valuation estimate of $75M.
Beyond its financial contribution, TripleTen offers potential strategic synergies with $NBIS' broader AI and infrastructure ecosystem. As the company trains thousands of job-ready developers and data professionals each year, it could emerge as a talent pipeline — not just for $NBIS itself, but also potentially for its enterprise clients.
13. 28% Stake in ClickHouse
$NBIS holds a 28% equity stake in ClickHouse, one of the fastest-growing players in the database management systems (DBMS) space.
ClickHouse is an open-source, high-performance columnar database optimized for real-time analytics at scale. Its architecture allows users to query massive datasets at lightning speed, making it a powerful tool for big data workloads, business intelligence, log analytics, and AI/ML pipelines.
Engineered for efficiency, scalability, and cost-effectiveness, ClickHouse stands out in a crowded market dominated by legacy systems. Its ability to handle billions of rows per second while maintaining low latency has made it the go-to solution for modern data-intensive applications.
The platform is now widely used in AI, finance, observability, and cybersecurity, and has become a core analytics layer for many high-performance infrastructure stacks. Importantly, it has attracted a blue-chip client base that includes: Meta, Microsoft, Spotify, Sony, Cloudflare, HubSpot, Shopee, IBM, ServiceNow, and more.
This growing adoption reflects ClickHouse’s ability to deliver enterprise-grade performance and reliability while maintaining the agility of an open-source solution. Its flexibility and speed are particularly well-suited for emerging AI and real-time decision-making use cases, where traditional databases fall short.
Valuation Update and Implications
On May 9, reports surfaced that ClickHouse is in advanced talks to raise hundreds of millions in a new funding round, led by Khosla Ventures. The round is expected to triple the company’s valuation to $6B, a massive step up from its last valuation in 2021.
This development is highly significant for $NBIS:
• A $6B valuation implies that Nebius’ 28% stake is now worth $1.68B (pre-dilution).
• That figure represents over 20% of Nebius’ entire current market cap, highlighting the enormous hidden value in this single investment.
• Importantly, this new funding round is acting as a major re-rating catalyst for $NBIS, as investors reassess the value of its strategic assets.
ClickHouse is no longer just an interesting side bet — it’s becoming one of $NBIS' most valuable holdings. As the world’s data infrastructure shifts toward real-time analytics and AI-native architectures, ClickHouse is positioned to be a foundational layer in that transition.
Note: To account for the dilution associated with this new valuation round, I’ll apply an 80% weighting to the $1.68B figure in the Sum-of-the-Parts analysis.
14. Nebius Group Valuation: Sum-of-the-Parts Approach
When you invest in $NBIS, you’re not just buying a fast-growing AI cloud provider — you're getting exposure to three high-potential subsidiaries and a 28% stake in ClickHouse, one of the hottest companies in data infrastructure.
The best way to assess the company’s value is through a Sum-of-the-Parts framework that captures both the core business upside and the hidden value in its strategic holdings.
Core Business
My first valuation model was based on the company’s medium-term target of 240 MW of deployed capacity. But following management’s latest updates, it’s clear that the bar has been raised substantially.
CEO Arkady Volozh recently confirmed that $NBIS is now aiming for 1 GW of capacity by 2026 — a dramatic increase from the 60–100 MW initially projected for 2025.
For context, $NBIS' proprietary Finnish data center, once fully expanded to 75 MW, is expected to generate roughly $1B in ARR. Extrapolating that performance across 400 MW+ suggests multi-billion-dollar revenue potential within just a few years.
To remain conservative, I assume:
• 400 MW capacity by 2027
• 80% utilization
• $12.5M in ARR per MW (based on Finnish data center)
→ ~$4B in ARR by 2027
From a profitability standpoint, I applied a 30% normalized EBITDA margin — conservative relative to long-term guidance (a 30% EBIT margin implies an even higher EBITDA margin) and well below peers like CoreWeave (64% adj. EBITDA margin in 2024).
Applied to ~$4B of ARR, that works out to roughly $1.2B; rounding down for extra conservatism, I use $1B in EBITDA for 2027.
To value this EBITDA stream, I used a 20x EV/EBITDA multiple, which is very reasonable when compared to CoreWeave’s current implied multiple of ~40x (and lower than its 22.5x IPO multiple). That gives us a $20B valuation for the core business by the end of 2027.
Discounting that end-of-2027 value back two years at a 15% rate to account for execution risk implies a valuation of roughly $15.1B at the end of 2025 — excluding any value from the subsidiaries or strategic investments.
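To make these inputs easy to stress-test, here’s a minimal Python sketch of that core-business math. Every input — capacity, utilization, ARR per MW, margin, exit multiple, and the two-year discount — is my assumption from above, not company guidance.

```python
# Core-business valuation sketch using the assumptions above (my estimates, not company guidance)
capacity_mw = 400            # assumed deployed capacity by 2027
utilization = 0.80           # assumed utilization rate
arr_per_mw = 12.5e6          # ARR per MW, extrapolated from the Finnish data center
ebitda_margin = 0.30         # normalized EBITDA margin assumption
ebitda_used = 1.0e9          # rounded down from ~$1.2B for extra conservatism
ev_ebitda_multiple = 20      # assumed exit multiple (below CoreWeave's implied ~40x)
discount_rate = 0.15         # execution-risk discount
years = 2                    # end of 2027 discounted back to end of 2025

arr_2027 = capacity_mw * utilization * arr_per_mw        # ~$4.0B
ebitda_2027 = arr_2027 * ebitda_margin                   # ~$1.2B before rounding down
ev_2027 = ebitda_used * ev_ebitda_multiple               # ~$20B
value_end_2025 = ev_2027 / (1 + discount_rate) ** years  # ~$15.1B

print(f"ARR 2027: ${arr_2027/1e9:.1f}B | EBITDA 2027: ${ebitda_2027/1e9:.1f}B")
print(f"EV end-2027: ${ev_2027/1e9:.0f}B | Core value end-2025: ${value_end_2025/1e9:.1f}B")
```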
It’s also worth noting this comment from management:
“In March, we were fully sold out… We got additional GPUs and are selling the additional capacity well.”
Given the strong demand $NBIS is seeing, I wouldn’t be surprised if actual results significantly outperform these assumptions. If that happens, the upside could be enormous.
Sum-of-the-Parts Valuation
Now that we have estimates for each part of $NBIS, let’s sum everything up to arrive at a price target for the end of this year:
$15.1B (core business) + $4.1B (Avride) + $300M (Toloka) + $75M (TripleTen) + $1.34B (ClickHouse stake) = $20.915B
Again, since the company is expected to spend its $2.4B cash pile fairly quickly, I’m excluding it from this valuation. The company also plans to raise more capital, and since it’s unclear how dilutive that will be, I assume a 10% increase in shares outstanding from the end of 2024 to the end of 2025 (from 235.75M to 259.3M).
As such, our final valuation for $NBIS is:
$20.915B / 259.3M = $80.66/share
This represents an upside of over 120% from the share price as of this writing.
IMO, some of my assumptions were fairly conservative — but even if you tweak them, the upside potential remains substantial.
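For transparency, here’s the same sum-of-the-parts math as a small Python sketch, so the sensitivity to each part is easy to see. The part values and the 10% dilution assumption are the ones laid out above; labelling the $4.1B term as Avride reflects the earlier estimate in this thread.

```python
# Sum-of-the-parts sketch using the estimates above (all figures are my assumptions)
parts_usd = {
    "Core AI cloud (discounted to end-2025)": 15.1e9,
    "Avride": 4.1e9,
    "Toloka (5x P/S on ~$60M revenue)": 0.3e9,
    "TripleTen (1.5x P/S on ~$50M revenue)": 75e6,
    "ClickHouse (28% of $6B at 80% weighting)": 1.34e9,
}

total_value = sum(parts_usd.values())      # ~$20.9B; the $2.4B cash pile is excluded
shares_end_2025 = 259.3e6                  # 235.75M shares x 1.10 assumed dilution

price_target = total_value / shares_end_2025
print(f"Total value: ${total_value/1e9:.3f}B")
print(f"Year-end price target: ${price_target:.2f}/share")   # ~$80.66
```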
15. Catalysts That Could Trigger a Rerating
While $NBIS is already executing on a multi-billion-dollar opportunity in AI infrastructure, the stock remains under the radar for most investors — trading at a sharp discount to its intrinsic value. However, several catalysts are on the horizon that could drive a substantial revaluation in the next 6–18 months. Here's what to watch:
1) Initiation of Institutional Coverage
Due to the unconventional method by which $NBIS entered public markets, the company currently has virtually no sell-side coverage. This has led to limited institutional awareness and little visibility across financial media and research platforms.
As the company continues to report quarterly earnings, demonstrates execution against aggressive growth targets, and closes high-profile contracts, analyst coverage is inevitable. Once institutions begin modeling the ARR trajectory and appreciating the scale of $NBIS' infrastructure build-out, a rerating becomes almost guaranteed.
2) Avride Funding Round
Avride, $NBIS' autonomous vehicle subsidiary, could soon become a high-value funding lever for the core AI cloud business. Management has made it clear that the team is actively exploring strategic partnerships or capital injections to fund Avride’s future independently:
“We are one of the last independent AV companies... We aim to participate in the upside in the short to medium term. In the long term, we could see Avride as a source of capital for AI expansion.”
A deal with a large automaker, Tier 1 supplier, or tech company would instantly:
• Validate Avride’s technology
• Provide a benchmark valuation
• Unlock capital for Nebius’ expansion
• Remove uncertainty around funding needs
Considering that Avride’s true value is likely far higher than what the market is currently implying, this could serve as a major catalyst.
3) Large Contract Announcements
$NBIS is actively engaging with larger customers looking for long-term GPU cloud contracts. Management has already stated that the company feels very comfortable hitting its $750M–$1B ARR target by December 2025, and has more demand than it can currently supply.
“We had many promising discussions that may generate further revenues... We’re prepared to scale our New Jersey facility if a large customer comes on board.”
These longer-duration contracts with enterprise or hyperscaler clients could provide:
• Revenue visibility
• Utilization guarantees
• A signal of validation to the broader market
Any such announcement — especially with a blue-chip name — could drive a powerful sentiment shift and increased investor interest.
4) The European AI Infrastructure Boom
There’s a tectonic shift happening in Europe. After years of lagging behind the U.S., the EU is now heavily investing in AI infrastructure — and seeking to reduce dependence on U.S. tech giants like AWS, Azure, and Google Cloud.
Recent developments include:
• Dutch parliament motions to develop a national cloud platform
• Calls for “Buy European” mandates in public tenders
• €200B InvestAI initiative by the European Commission
• France’s €109B AI roadmap, spearheaded by President Macron
As Europe’s most energy-efficient and cost-effective AI cloud provider, headquartered in the Netherlands, $NBIS is perfectly positioned to benefit.
At the same time, $NBIS is expanding even more aggressively in the U.S., with GPU clusters in Kansas City and a new proprietary facility in New Jersey — showing that it can scale on both sides of the Atlantic.
5) Clarification of Some Perceived Risks
One of the biggest lingering overhangs for $NBIS is legacy perception risk tied to its origins. Despite being headquartered in the Netherlands and having zero operational or ownership exposure to Russia, many platforms still mistakenly associate $NBIS with its Yandex roots.
A clear example: Several outlets — including Bloomberg terminals — mistakenly linked $NBIS to Yandex's Q1 earnings date, creating confusion among investors.
This misinformation has likely kept risk-averse investors on the sidelines. But with every quarterly report, every customer win, and every expansion update, $NBIS is proving it’s a fully independent, Western-facing company — with a leadership team that has already built and scaled one of the most sophisticated cloud stacks in the world.
As sentiment catches up to reality, we expect this perception gap to close rapidly — removing a key barrier for institutional capital.
6) Operating Leverage
$NBIS is still in the heavy capex and growth investment phase, which is typical for a hyper-growth company in its early stages. As a result, some investors are hesitant due to negative near-term cash flows.
However, the business model is inherently high-margin once infrastructure is in place. Management has already guided to 30% EBIT margins long-term, putting $NBIS in line with top-tier cloud providers.
“Revenues from additional services will be growing... In the long term, we aim to achieve 30% EBIT margins like AWS.”
As new capacity comes online, the company will begin to demonstrate strong operating leverage — which is often the turning point when investors pile into the name.
Once that happens, the current valuation will look like a steal in hindsight.
All in all, the clock is ticking. Between a vastly underappreciated core business, three valuable subsidiaries, a stake in one of the fastest-growing data infrastructure companies (ClickHouse), and multiple near-term catalysts, $NBIS is set up for a major rerating.
Each catalyst on its own could drive material upside. Together, they paint a picture of a company that’s misunderstood, mispriced, and massively underfollowed — but not for much longer.
16. Arkady Volozh: Background and Alignment
The Founder and CEO of $NBIS, Arkady Volozh, is not your average tech executive. He’s a visionary with a decades-long track record of building and scaling successful technology ventures — and now, he’s bringing that expertise to one of the most ambitious AI infrastructure companies in the world.
Background
Arkady began his entrepreneurial journey long before most of today’s tech giants existed. Born in 1964 in Kazakhstan, he studied applied mathematics in Moscow and quickly gravitated toward computing and software. In the late 1980s, he co-founded a series of ventures — including Magister, CompTek, and eventually Arkadia Company — that gave him a front-row seat to the birth of the digital age in the post-Soviet world.
By the early 1990s, Arkady had turned his focus to search technologies, laying the groundwork for what would eventually become Yandex, which he co-founded in 1997. Under his leadership, Yandex evolved from a niche Russian-language search engine into a sprawling tech ecosystem worth over $30B at its peak. From AI and cloud to autonomous driving, ride-hailing, e-commerce, navigation, smart devices, and education platforms — Yandex became one of Europe’s most formidable tech companies.
Alignment
Today, Arkady owns ~15% of $NBIS and holds the majority of voting power, allowing him to guide the company with a long-term vision. What’s more telling is that ~90% of his personal net worth is tied up in $NBIS stock, signaling an extraordinary level of financial alignment with fellow shareholders.
He isn’t just invested — he’s essentially all in.
This alignment is critical. In a sector where short-term pressure often distorts long-term decision-making, having a founder who is both emotionally and financially committed ensures that strategy isn’t dictated by quarterly noise.
A Leader with Principles
Now a dual citizen of the Netherlands and Israel, Arkady has firmly distanced himself from Russia and the geopolitical baggage that might once have shadowed his legacy. He resigned from Yandex’s Russian operations in 2022 and played a key role in orchestrating the company’s restructuring and international pivot.
In 2023, he publicly condemned the Russian invasion of Ukraine, calling it “barbaric” and expressing deep empathy for its victims. This wasn’t a performative move — it came after 18 months of silence, which he used to quietly relocate 2,000 Yandex employees fleeing the war. This action speaks volumes about his values and character. It also reinforces his desire to build something ethical, global, and enduring.
More Than Just a Startup
What Arkady is building with $NBIS is unlike any other early-stage company. This isn’t some overfunded startup run by first-timers chasing hype. It’s a mission-driven organization led by an elite team of seasoned operators — many of whom followed Arkady from Yandex — now working without the limitations of their previous environment.
There are very few opportunities in public markets to invest alongside a founder of this caliber, especially one with this level of financial and emotional buy-in. Arkady has spent more than 30 years building cutting-edge technologies and scaling platforms that serve millions. Now, he’s doing it again — only this time, with full independence, fresh capital, and a global canvas.
If you’re looking for founder-led companies where leadership, mission, and shareholder interests are fully aligned, $NBIS checks every box.
17. Final Thoughts
At its core, this isn’t just another tech stock. $NBIS represents something rare in public markets — a misunderstood, undercovered company building critical infrastructure at the heart of the AI revolution, led by one of the most accomplished tech entrepreneurs of our time.
Despite operating in one of the most exciting sectors globally, $NBIS still flies under the radar. Most investors don't realize what they're looking at — a vertically integrated AI cloud provider with a path to multibillion-dollar revenue, three high-potential subsidiaries, a strategic stake in ClickHouse, and one of the best founder alignments you’ll find in public markets.
Yes, there are risks. Yes, the stock is volatile. But when you zoom out, it becomes clear: this is an asymmetric opportunity with the potential to yield outsized returns in the short, medium, and long term.
If the company keeps executing — and I believe it will — today’s share price will look like a rounding error in hindsight.
For all these reasons and more, $NBIS remains my largest position — and I can’t wait to see how this story unfolds.
18. That’s it! I hope you found this thread useful.
This Deep Dive into $NBIS took over 11,000 words to put together — but now you can understand exactly why it’s my largest position.
If you found this thread valuable, I’d genuinely appreciate a follow. 🙌🏻
• • •
Everyone’s talking about $HIMS now, but I’ve been covering it since it was trading in the low single digits.
I’ve analyzed every single quarterly report since late 2020.
Here’s a detailed breakdown of everything you need to know about yesterday’s Earnings Report: 👇🏻🧵
1. Let's start with the Financial Highlights.
• Revenue: $586M (+111% YoY) vs. $538.9M est. 🟢
This marks the strongest revenue growth ever for $HIMS, driven largely by explosive demand for compounded GLP-1s. While the company expects a meaningful deceleration in this category as commercial semaglutide comes off shortage, revenue is still projected to grow >60% YoY in 2025.
• Revenue excluding GLP-1s: “Growth of nearly 30% YoY”
A sharp deceleration from previous quarters, likely contributing to the post-earnings stock selloff. However, the slowdown stems from a strategic reallocation of marketing spend toward weight loss products in anticipation of the semaglutide shortage ending. With the transition complete, $HIMS can now refocus on its broader portfolio — suggesting core revenue growth could accelerate from here.
“Rotation takes time to do efficiently, so we chose to reduce overall spend as opposed to recalibrate weight-related spend to other categories after the end of the semaglutide shortage in February.”
At the same time:
“We're seeing more subscribers come to our platform through organic and other lower-cost channels.”
• Subscribers: 2.366M (+38% YoY)
• Monthly Online Revenue per Avg Subscriber: $84 (+53% YoY)
A substantial increase, again, primarily driven by GLP-1 offerings. However, $HIMS guided that this number will moderate going forward as users transition from its compounded GLP-1s.
• Q2 Revenue Guidance: $540M vs. $567M est. 🔴
Another factor contributing to the selloff is that this marks the first time $HIMS has issued guidance below consensus — and projected a sequential decline in revenue. This is clearly tied to the resolution of the semaglutide shortage, and it was unrealistic to expect the company to sustain triple-digit growth without the outsized contribution from compounded GLP-1s. As such, I wouldn’t draw overly negative conclusions from this guidance miss.
• Gross Margin: 73% vs. 77% est. 🔴 (down from 82% YoY)
While certain efficiencies continued to improve — particularly through economies of scale driven by increased volume at affiliated pharmacies and lower medical consultation costs as a percentage of revenue — gross margins declined due to a higher mix of revenue from compounded GLP-1s. With the semaglutide shortage now resolved, this trend is expected to reverse, and the CFO stated during the earnings call that gross margins should improve in Q2.
• Adj. EBITDA: $91.1M vs. $61.8M est. (+182% YoY) 🟢
• GAAP EPS: $0.20 vs. $0.12 est. 🟢
• Operating Cash Flow: $109M (+322% YoY)
• Free Cash Flow: $50.1M (+321% YoY)
• Reiterates FY2025 Revenue guidance of $2.3-2.4B vs. $2.323B est. (+56-63%) 🟢
• Raises FY2025 Adj. EBITDA guidance to $295-335M vs. $296.6M est. (+67-90%) 🟢
2. Every GAAP operating expense category continued to decline as a percentage of revenue, highlighting the strong efficiency of $HIMS' business model and its impressive ability to unlock operating leverage at scale.
On marketing efficiency, management noted:
“We benefited from efficiencies related to new product launches and improving organic customer acquisition trends, which more than offset higher spend driven in part by our first Super Bowl commercial.”
While some quarter-to-quarter volatility is expected, the company remains confident in its ability to drive 1 to 3 percentage points of marketing leverage per year.
As a result, $HIMS managed to double its net profit margin while simultaneously doubling its revenue YoY, leading to a fourfold increase in GAAP EPS over the past twelve months — an exceptional performance by any standard.
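As a quick illustration of that arithmetic (with hypothetical round numbers, not $HIMS' reported figures): if revenue doubles while the net margin also doubles, net income rises roughly fourfold — and so does GAAP EPS, provided the share count stays roughly flat.

```python
# Hypothetical illustration of the margin-times-revenue arithmetic described above
revenue_prior, net_margin_prior = 100.0, 0.05   # prior-year revenue (arbitrary units) and margin
revenue_now, net_margin_now = 200.0, 0.10       # both roughly doubled YoY

net_income_prior = revenue_prior * net_margin_prior
net_income_now = revenue_now * net_margin_now

print(net_income_now / net_income_prior)        # 4.0 -> ~4x EPS at a constant share count
```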
The same momentum is evident in its cash flow generation. While FCF wasn’t as high as in prior quarters, it still grew by 321% YoY, and operating cash flow reached a new all-time high. The only reason FCF didn’t follow suit was due to a deliberate increase in Capex aimed at strengthening infrastructure. These investments are strategically aligned with the company’s long-term vision and will reinforce $HIMS' competitive advantages and leadership in the sector.
Given the company’s strong balance sheet and highly efficient business model, allocating capital toward long-term infrastructure — even at the expense of short-term margins — appears to be a prudent and value-accretive decision.
Examples of recent Capex investments designed to enhance infrastructure and support the company’s long-term goal of serving tens of millions of subscribers:
• Expanded internal fulfillment footprint from 400K to nearly 700K sq. ft. in Arizona
• Upgraded automation equipment to enable personalized and scalable precision medicine
• Built out sterile fulfillment capacity to support new categories like low testosterone therapy and menopause support
• Investing in diagnostic lab capabilities to enhance personalization and lower consumer friction
In summary, these infrastructure investments reflect $HIMS' clear intention to build a durable, defensible platform capable of scaling efficiently over the long term. Rather than optimizing for short-term gains, the company is positioning itself to capitalize on massive future demand across multiple high-growth categories — a strategic approach that underscores both management’s discipline and the strength of the underlying business model.
My first Deep Dive on $TEM reached over 1M investors.
Today, I’m back with an update — and a brand new Valuation Model where I break down each segment individually to estimate the company's fair value.
Here’s everything you need to know about Tempus AI: 👇🏻🧵
1. Introduction
$TEM is a cutting-edge precision medicine company founded in 2015 by Eric Lefkofsky. The inspiration for Tempus arose from Lefkofsky’s personal life — his wife’s battle with breast cancer revealed how limited the role of technology was in shaping her care. Determined to change this, Lefkofsky set out to integrate advanced technology into healthcare, addressing a critical gap in the industry.
At its core, $TEM leverages AI to analyze vast amounts of clinical, imaging and molecular data. Its goal is ambitious yet clear: to revolutionize healthcare by enabling personalized treatment decisions, advancing drug discovery, and facilitating earlier and more accurate disease diagnoses.
Tempus AI initially focused only on oncology, enabling doctors to deliver tailored treatments for cancer patients. This “intelligent diagnostics” model proved so effective that the company expanded its efforts into other critical areas, such as neuropsychology and cardiology.
Today, $TEM's technology empowers thousands of physicians and life science companies, making a tangible difference in patients' lives.
2. The Booming Market for AI in Healthcare
The global AI in healthcare market is experiencing unprecedented growth, projected to expand from $15B in 2024 to a staggering $164B by 2030, representing a CAGR of 49.1%.
This explosive growth is driven by several factors, including:
• Increased Investments: Significant public and private sector funding is accelerating the adoption of AI technologies in healthcare.
• Rapid AI Proliferation: The integration of AI into healthcare systems is transforming diagnostics, treatment planning, and patient outcomes.
• Focus on Human-Aware AI Systems: Advances in AI technology are enabling more personalized and human-centered solutions, which are crucial in the healthcare domain.
Tempus AI is uniquely positioned to capitalize on some of these trends. With its AI-powered precision medicine platform, $TEM is not only a pioneer in embedding AI into healthcare workflows but also a leader in driving real-world impact.
Today brought several key updates on $NBIS that further reinforced my high conviction in the company.
Here’s a breakdown of the most important takeaways: 🧵👇🏻
1. First, it’s worth noting that the source is a Seeking Alpha article titled “Nebius: Minutes Of Our Call With The Company.”
I highly recommend reading the full piece.
The author had a brief call with $NBIS' IR team and shared a summary of the conversation.
2. $NBIS has more demand than it can supply.
“Our customer base is in strong demand. Those customers are utilizing our full stack, and we are providing them with significant additional value beyond the GPU.
In March, we were fully sold out, and we got additional GPUs and are selling the additional capacity well.
We feel very good about the demand for our services.”
So no, the market is not saturated by any means — and $NBIS has key differentiators that make it a top choice for customers.
Until last week, my portfolio consisted only of founder-led stocks, but I finally made an exception by opening a position in $DLO.
Here’s a thread breaking down my investment thesis and why I believe its CEO deserves my trust: 👇🏻🧵
1. Origins
$DLO was founded as a response to a pressing issue in Latin America: the difficulty of making online payments. The company’s origins trace back to Uruguay, where Sebastián Kanovich, one of the key founders, first encountered the problem firsthand. As a young economist with no prior background in technology, Sebastián stumbled into the fintech world by chance when he realized that making international online purchases was nearly impossible for consumers in his home country. His personal frustration — specifically, being unable to buy an NBA League Pass or shop online without borrowing a credit card — led him to recognize a larger systemic issue.
He joined forces with two partners who had already begun assembling an initial team to address these payment challenges. At the time, he was working at Santander Bank but was drawn to the opportunity to build something innovative. The founding team’s first venture into payments was a small-scale operation, focusing on a single solution for one customer. They initially operated with a kiosk model, solving local payment issues in Uruguay before expanding their scope.
The company’s first major breakthrough came with Brazil’s Boleto system, a widely used cash-based payment method. Traditionally, Brazilian consumers would generate a Boleto — a type of payment slip — and physically pay it at a bank or kiosk. $DLO developed a solution that digitized this process, allowing users to issue Boletos at checkout and complete transactions seamlessly. While the team initially believed they had solved a major problem, they soon realized that payment challenges extended far beyond Brazil and involved a wide array of localized payment methods across Latin America, Africa, and Asia.
$DLO's growth trajectory accelerated as global companies began seeking ways to expand into emerging markets. Initially, large U.S. firms like Facebook and Google were hesitant to invest in Latin American payment solutions, focusing instead on European expansion. However, as emerging markets gained importance in global business strategies, interest in $DLO's services grew. The company transitioned from offering just a single payment method to aggregating over 900 different payment solutions across various regions, all accessible through a single API. This comprehensive approach significantly increased $DLO's value proposition.
A pivotal moment came when GoDaddy became $DLO's first major U.S. client. Initially, $DLO attempted a direct-to-consumer (B2C) model, launching a prepaid card under its own brand. However, GoDaddy’s feedback was clear: customers didn’t care about the brand, they cared about seamless payment solutions. This insight pushed $DLO to pivot towards a B2B model, positioning itself as an infrastructure provider rather than a consumer-facing brand. This shift proved to be a game-changer, enabling the company to secure more enterprise clients and scale its operations globally.
2. Current Operations
$DLO's mission is to enable global merchants to connect seamlessly with billions of emerging market users.
The company provides payment solutions for some of the world’s largest enterprises, including Amazon, Uber, Microsoft, Shopify, Google, Spotify, Tencent, Shein, Salesforce, Nike, Booking, and Shopee, among others. By simplifying the complex payment landscapes of emerging markets, $DLO helps businesses expand into high-growth regions without the typical friction associated with cross-border transactions.
How Dlocal Makes Money
$DLO operates a high-margin, scalable business model built around direct integrations with global merchants. Once onboarded, companies can access $DLO's full suite of payment solutions through a single API and contract, eliminating the need for multiple legacy providers. This direct connection serves as both a competitive advantage and a barrier to entry, making incremental transaction volume highly accretive (I'll address these topics later).
The company generates revenue primarily through transaction fees on pay-in (consumer payments) and pay-out (merchant disbursements) services. These fees can be a percentage of the transaction value, a fixed fee per transaction, or a spread on foreign exchange conversions. $DLO also charges for services like chargeback management and installment payments, which further contribute to its revenue stream.
Revenue Breakdown:
• Processing fees – Charged as a percentage of transaction value or a fixed fee per approved transaction.
• Installment fees – Fees applied to transactions where consumers opt for installment payments.
• Foreign exchange fees – A spread on currency conversions in cross-border transactions.
• Other transactional fees – Includes chargeback and refund fees, as well as ancillary services.
• Other revenues – Setup fees, maintenance fees, and other small service charges.
Cost Structure
$DLO's cost of services primarily consists of fees paid to financial institutions, such as banks and local acquirers, for processing payments. These costs vary depending on settlement periods and payment methods. Additional expenses include infrastructure costs, salaries of operational staff, and amortization of internally developed software.
One of the key risks in $DLO's model is foreign exchange exposure, as transactions often involve multiple currencies. However, the company mitigates this risk through hedging strategies, using derivatives to offset currency fluctuations.
Apart from COGS, $DLO's main costs fall into two categories:
• Technology & Development: This includes salaries and wages for tech teams, infrastructure costs, information security expenses, software licenses, and other technology-related investments.
• SG&A: These are the regular operating expenses required to run the business.
Since the arrival of the new CEO, $DLO has increased spending on technology infrastructure and back-end capabilities to enhance its solutions and maintain its position as an innovator with a long-term mindset. While these investments initially pressured margins, they are strategically important for long-term value creation — I’ll revisit this when discussing the company’s future margin recovery.
Overall, $DLO's business model is highly scalable, with minimal incremental costs, positioning it to unlock significant operating leverage as it continues its impressive growth trajectory.
Key Performance Indicator: TPV Growth
Total Payment Volume (TPV) is probably the most important metric to gauge $DLO's relevance and execution over the past few years.
From 2016 to 2023, the company grew from just $136M in TPV to $17.7B — a CAGR of over 100%. 🤯
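For anyone who wants to sanity-check that growth rate, here’s the standard CAGR formula applied to those two TPV endpoints (seven compounding years):

```python
# CAGR check for the TPV figures cited above ($136M in 2016 -> $17.7B in 2023)
tpv_2016 = 136e6
tpv_2023 = 17.7e9
years = 2023 - 2016                          # 7 compounding periods

cagr = (tpv_2023 / tpv_2016) ** (1 / years) - 1
print(f"TPV CAGR 2016-2023: {cagr:.1%}")     # ~100.5%, i.e. "over 100%"
```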
In 2024, growth is expected to exceed 40%, highlighting $DLO's continued expansion in emerging markets and its ability to attract major global enterprises seeking seamless payment solutions.
With a massive untapped market ahead, the company still has significant room to scale.
Founder-led companies have historically outperformed the market by a wide margin — but choosing wisely is crucial.
Here are 10 interesting companies where the founder remains both the CEO and the largest shareholder: 👇🏻🧵
1) $NBIS
• Emerging leader in AI infrastructure, providing a full-stack AI cloud platform, high-performance computing, and advanced data center solutions
• Its story begins within Yandex, but it’s now fully independent, with no ties to Russia — its founder and executives relocated and acquired new citizenships, ensuring full compliance with international sanctions
• Positioned to benefit from the skyrocketing demand for AI computing power, offering cost-effective and energy-efficient solutions
• Strong competitive edge in cost efficiency, with total GPU costs up to 25% lower than industry averages due to vertical integration and strategic partnerships
• Industry-leading energy efficiency, with best-in-class Power Usage Effectiveness (PUE) of ~1.13
• Strategic partnership with NVIDIA, which recently invested in $NBIS — will be the first provider in Europe to offer NVIDIA’s new Blackwell GPUs in 2025
• Expanding aggressively, with plans to triple its Finland data center capacity and launch new facilities across Europe and the U.S., targeting 240,000 GPUs by 2027+
• Surging Annual Recurring Revenue (ARR), growing from $21M in 2023 to an estimated $170M–$190M in 2024, with projections of $750M–$1B ARR by 2025
• Strong balance sheet with over $2B in cash and no debt, but likely to raise more capital to accelerate expansion
• Several non-core divisions, including Avride (autonomous driving tech with Uber partnerships), Toloka (AI data solutions), and TripleTen (edtech), adding optionality and potential future value
• Trading at an attractive valuation compared to peers, with 7–8x forward EV/ARR despite expected 4–5x YoY growth in 2025 and 100%+ growth in 2026
• Potential for over 30% CAGR over the next few years IMO, with significant upside as institutional investors recognize its growth potential
2) $HIMS
• Cash-pay model that bypasses the need for insurance
• Provides high-quality, personalized, and affordable healthcare treatments (involved in the whole process)
• Positioned to benefit from many secular trends in the huge telehealth market
• Optionality to launch new categories and easily expand into new markets (several potential catalysts)
• Customer-centric approach that delivers a better experience than its peers
• Innovation stack combined with remarkable execution positions it for continued success
• Many years of customer data make some of its competitive advantages harder to replicate, particularly the personalization of dosages to improve outcomes and reduce side effects
• 2M+ subscribers growing 40%+ YoY
• Percentage of personalized subscribers increasing at an incredibly fast rate
• Improving retention rates, a critical factor in this sector
• Highly efficient distribution network, with thousands of affiliated pharmacies
• Investing in infrastructure to verticalize its supply chain
• Capex-light business model with impressive margins (75%+ gross margins, 15%+ FCF margin)
• Consistently surpassed analysts' estimates since inception
• Growing revenues by 65%+ this year, and likely to compound >20%/year over the next 5 years, with further operating leverage expected
• No debt and an increasing cash pile, even while executing buybacks and reinvesting in growth and optimization
3) $HITI
• The company was founded in 2009 and initially focused on selling cannabis consumption accessories
• After Canada announced the upcoming legalization of recreational cannabis, $HITI leveraged its existing customer base to expand into selling the plant itself
• Around 2018-2020, the company entered the equity markets and used its easier access to capital to expand its store footprint aggressively
• During the same period, $HITI acquired several e-commerce brands selling CBD products and consumption accessories, which had much higher margins than its core business
• In 2021, $HITI launched a discount club model for its retail stores — with consolidated margins higher than any competitor due to its acquisitions, $HITI could offer cannabis at remarkably low prices, attracting loyal members and rapidly gaining market share
• Its market share grew from less than 5% to over 11% in three years, and is expected to reach 15% over the next few years as dozens of stores — including those of large corporations — go bankrupt every month
• While the company sacrificed margins to win the price war, economies of scale and other initiatives enabled it to become both FCF and net income positive, with margins trending up
• After the success of its free discount model, which gathered over 1.5M members in under three years, $HITI launched ELITE, a paid membership with even better offers (members are growing 160%+ YoY)
• There's still significant market potential to capture in Canada, as well as international catalysts like the expansion into the U.S. and Germany
• While the previously mentioned e-commerce brands were important to sustain the initial launch of $HITI's discount model, they have recently become a hurdle — to deal with that, the company announced a global paid membership which aims to consolidate the fragmented CBD market
• White-label products will play a crucial role in improving $HITI's margins over time, with the company aiming to increase their share from 2.5-3% of SKUs to 20-25% of all store offerings in the long term
• The average $HITI store generates ~$2.6M in annual revenue, compared to $1.0M for peers, not only due to its proven business model but also because the company focuses on selecting the best locations
• $HITI recently acquired Purecan, a German wholesaler of medical cannabis. This move is highly strategic given that most medical cannabis in Germany is imported from Canada, and as the largest cannabis retailer in Canada, $HITI has established relationships with every major Licensed Producer (LP) — providing a significant competitive advantage
• This deal represents a transformational opportunity for $HITI to expand its TAM and experience significant operating leverage
Last week, $HITI released its Q4 2024 earnings report.
As a longtime shareholder, I’ve been closely following the company for years.
Here’s a breakdown of everything: 🧵👇🏻
1. Let’s start with the Financial Results.
Record revenue of $138.3M, exceeding consensus estimates of $135M. 🟢
Signs of revenue acceleration, with double-digit growth expected in 2025. The core business grew 12% YoY, but overall growth was slightly offset by underperformance in e-commerce.
Despite a $35.2M revenue increase, total expenses declined by $5.9M, demonstrating disciplined cost management.
FCF increased from $7M to $22M YoY (+217%), highlighting improved operational efficiency.
Adjusted EBITDA rose 25% YoY to $38.3M, with margin expansion from 6.3% to 7.3%.
Achieved Net Income profitability (excluding non-cash impairments) for the first time: $1.2M vs. a ($6.7M) loss YoY.
Same-store sales (SSS) increased 0.4% YoY and 3% QoQ, outperforming the broader cannabis retail market, which declined 1% YoY. I was expecting SSS to show slightly better growth, but it’s still a solid performance given the overall market conditions.
Gross margins held steady YoY at 26% (down from 27% in the prior quarter). The company is avoiding price increases while weaker competitors exit the market, setting up future margin expansion.
“More and more competitors are leaving the race, big chains are struggling, middle size chains are struggling, independents are struggling. So as more competitors get out of the race, there's not going to be a lot of competitors remaining to be waging a price war with us. And at that point, we have a tremendous opportunity to increase gross margins in our core Canadian cannabis business.”
$HITI ended the fiscal year with a record cash balance of $47.3M and no debt maturities until September 2027. Total debt stands at $27M, with only $12M maturing in 2027.
All in all, $HITI delivered a strong quarter. As industry consolidation progresses, the company is well-positioned to enhance margins and drive sustained long-term growth. It’s important to note that the overall market has been struggling due to the resurgence of the illicit market, but management has been highly competent in navigating these short-term headwinds.
2. Footprint Expansion: Store Openings & Future Outlook
Accelerated and Self-Funded Store Growth:
• 29 new stores opened in 2024 (vs. 13 in 2023), more than doubling the prior year’s expansion and hitting the high end of guidance (20-30 stores). The company now owns 191 stores across five provinces.
• Growth was primarily organic, with only one store acquired — demonstrating disciplined expansion.
• Notably, all new stores were funded entirely through internal FCF, a rare achievement in the sector.
• Cost per store opening: ~$260K in build-out costs + $100K–$150K in working capital.
2025 Expansion Plans:
• Targeting another 20–30 new stores, all organically developed and funded by internal FCF.
• Management remains highly selective on M&A, despite ongoing inbound interest from struggling small chains and independent operators. Raj Grover has emphasized acquiring only highly strategic locations at deeply distressed valuations — evidenced by the last store acquisition in June 2024 at just 1.5x annualized Adj. EBITDA.
New store openings require upfront CapEx, working capital, and employee ramp-up, temporarily weighing on consolidated results. The 217% YoY FCF growth becomes even more impressive when we consider the ramp-up in store openings.
$HITI continues to lead the sector with best-in-class revenue per store:
• $2.6M per store vs. $1.2M industry average.
• In Ontario, the company’s key growth market, the gap is even wider: $3.5M per store vs. $1.1M from peers.
• Annualized retail sales per square foot across the Canna Cabana store network reached $1,699 in the fourth fiscal quarter of 2024, up 2% QoQ. This exceeded best-in-class retailers such as Walmart, Target, and Canadian Tire.