
Nebius Rising: Betting Big on AI Cloud

TradingKey · Apr 7, 2025 1:46 AM
  • Revenue soared 462% YoY to $117.5M, with AI infrastructure up 602%, yet net losses widened to $396.9M.
  • $808M in 2024 investments fuel rapid hyperscale expansion, with 22,000 Blackwell GPUs ordered and multiple U.S./EU data centers underway.
  • NVIDIA Cloud Partner status, custom GPU clusters, and AI-first architecture differentiate Nebius from AWS, CoreWeave, and other hyperscalers.
  • EBITDA losses of $266.4M highlight the cost of scaling, though Q1 2025 margins may improve with utilization gains.

Nebius Group: Building the Future Cloud of AI in the Midst of Scale-Up Turbulence

TradingKey - In the hyperscalers' quest to capture the AI infrastructure market, there have been few names with vision, velocity, and transparency to match Nebius Group. Founded from the ashes of Yandex N.V., Nebius is establishing a new reputation as a hyperscaler cloud provider optimized for the generative AI era. Its multi-tenant, AI-native platform features custom GPU clusters, proprietary cloud architecture, and vertically integrated data center deployments, all in the service of what it calls the “AI factory of the future.” 

But behind the vision is an aggressive scale-up business that must overcome massive up-front CapEx, widening losses, and the monumental hurdle of competing with hyperscalers like AWS, Google Cloud, and CoreWeave. As the company charges toward its end-2025 goal of $1 billion ARR, the investment thesis amounts to asymmetric upside tied to execution and to adoption of the underlying infrastructure and AI workloads, offset by high execution and funding risk.


Source: Nvidia GTC

Strategic Foundation: Proprietary Cloud Design with Full-Stack AI Infrastructure

Nebius' business model is to provide end-to-end cloud infrastructure designed natively for AI builders, from foundation model developers to inference platforms and application-layer startups. Traditional cloud platforms repurposed general-purpose compute to run AI workloads, whereas Nebius was architected from the start with AI workloads in mind. Its architecture shows it: every layer of the stack, from the hardware racks to the orchestration software, has been designed in-house to maximize compute throughput, low-latency storage, and distributed training performance. This vertical integration gives Nebius flexibility in performance tuning for high-density GPU workloads and enables rapid deployment across multiple geographies.

The company's core service, Nebius AI Cloud, delivers on-demand, elastic high-performance compute with dense clusters of NVIDIA GPUs, high-bandwidth storage, model deployment tools, and its in-house AI Studio platform for fine-tuning and running inference on vision, text, and reasoning models. Nebius migrated all of its customers to its next-generation cloud, built natively on the NVIDIA accelerated stack, in Q4 2024. It also launched Tracto.ai, a serverless engine for compute-intensive AI workloads.

In addition to the core business, Nebius has three ancillary activities: Toloka (data for training GenAI models), TripleTen (AI-native ed-tech bootcamps), and Avride (autonomous delivery). All three offer optionality and synergy: Toloka, for instance, generates data streams for Nebius' AI customers, and TripleTen cultivates an AI-native audience. The crown jewel, however, remains the AI cloud platform, which produces most of group revenue.

With over 22,000 NVIDIA Blackwell GPUs on order and multiple U.S. and EU data centers underway, Nebius is scaling capacity at breakneck speed. Its Kansas City cluster (initial 5MW) and Paris H200 deployment are already live, and its new New Jersey facility will be dedicated exclusively to Blackwell NVL72 instances. This places Nebius among the few global players positioned to offer Blackwell compute at scale, a differentiating feature in 2025 as demand rises for agentic AI, LLM inference, and simulation workloads.


Source: Nvidia

Positioning in the GPU Cloud Wars: Speed Over Scale

While the broader AI infrastructure market is dominated by hyperscalers such as AWS, Microsoft Azure, and Google Cloud, the GPU cloud subsegment is surprisingly crowded, with competitors including CoreWeave, Lambda, Vultr, and new entrants like Together AI. It is not just the availability of NVIDIA's latest chips that sets Nebius apart, but its status as a Reference Platform NVIDIA Cloud Partner. Only a handful of firms worldwide have received this designation, which signifies certified conformance to NVIDIA's end-to-end cloud standards, from software orchestration down to hardware-level tuning.

On the cost front, SemiAnalysis places Nebius' compute at a considerable discount to traditional hyperscalers thanks to its server architecture and power efficiency. Additionally, its differentiated focus on AI-native research labs and startups allows Nebius to provide customization beyond what general-purpose cloud offerings deliver, including flexible spot-instance scheduling, accelerated deployment timelines, and custom SLAs for mission-critical inference workloads.

Customer acquisition remains a challenge, though. CoreWeave has gained solid traction with enterprise deals such as Microsoft and Inflection, while Nebius is still building out its go-to-market motion. Management attributed the lower-than-expected December 2024 ARR of $90 million to the timing of revenue ramps in the sales pipeline. March 2025 ARR is expected to be well in excess of $220 million on contracted revenue, and the $750 million to $1 billion December 2025 ARR target remains intact.

The competitive threat here is clear: hyperscalers have scale, CoreWeave has deep integration partnerships, and new entrants like Together AI have bleeding-edge features. Nebius must leverage its technical superiority and alignment with NVIDIA to be successful in niches where speed, customization, and flexibility in pricing will be most valued.

Source: Nebius NVIDIA GPU Service

Growth, Scaling, and Capital Burn

Nebius' top-line growth has been remarkable. FY 2024 revenue surged 462 percent year-on-year to $117.5 million, with the AI infrastructure business up 602 percent. Q4 revenue alone hit $37.9 million, up 466 percent. Beneath the surface, though, the scale-up story is capital-intensive. 2024 adjusted EBITDA was negative $266.4 million and net losses widened to $396.9 million. Loss from operations was $443.5 million for the year, reflecting the cost of cloud buildout, share-based compensation, and the sales ramp.

CapEx tells the story of the strategy. Nebius invested $808 million in capital expenditures in 2024, $417 million in Q4 alone. The company is deploying GPU clusters in the U.S. and Europe at hyperscaler velocity, backed by its $2.45 billion cash balance and a $700 million December 2024 financing from Nvidia, Accel, and Orbis. The capital will fund additional data centers, including a Blackwell-dedicated facility in New Jersey and a high-density cluster in Finland.

In terms of monetization, Nebius has three revenue streams: GPU rental and compute services, fine-tuning and inference via AI Studio tools, and managed services like Tracto.ai. Its platform hosted the latest LLMs, including DeepSeek R1, with 30x benchmark throughput gains under NVIDIA Dynamo in January 2025. Pre-orders for Blackwell NVL72 and HGX B200 instances are already open.

Efficiency is a mixed picture. Gross margins remain compressed by the data center ramp and hardware purchasing, and Q4 losses include substantial non-cash expenses such as $40.9 million in SBC. Nevertheless, interest income of $63.6 million in 2024 helps offset the burn, and Q1 2025 should see better margin contribution as utilization improves. Customer churn remains low, and AI-native customers scale spend linearly with workload intensity, providing good revenue visibility once onboarded.


Source: Q4-Deck

Valuation: Early-Stage Infrastructure with Hyperscaler DNA

To put a value on Nebius, one must understand its two-pronged identity: a capital-intensive hyperscaler with nascent revenue. On a trailing basis, its FY24 revenue of $117.5 million and market capitalization of $3.26 billion work out to an EV/Revenue multiple in excess of 27x. The multiple drops sharply on a forward ARR basis, though. Management guides to $750 million to $1 billion ARR in December 2025, implying a forward EV/ARR range of roughly 3.2x to 4.3x, in line with peers CoreWeave and Lambda (normalized to private market comps).
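To make the multiple arithmetic concrete, the short Python sketch below reproduces the trailing and forward figures quoted above. It treats market capitalization as a rough proxy for enterprise value, as the text implicitly does; the variable names and helper function are illustrative, not drawn from any Nebius or TradingKey source.

```python
# Illustrative sketch: trailing EV/Revenue and forward EV/ARR multiples
# using the figures quoted in the article. Market cap is used as a rough
# proxy for enterprise value, mirroring the article's simplification.

MARKET_CAP_B = 3.26        # market capitalization, $B
FY24_REVENUE_B = 0.1175    # FY 2024 revenue, $B
ARR_GUIDE_LOW_B = 0.75     # low end of Dec 2025 ARR guidance, $B
ARR_GUIDE_HIGH_B = 1.00    # high end of Dec 2025 ARR guidance, $B

def ev_multiple(ev_b: float, denominator_b: float) -> float:
    """Return a simple EV-to-revenue (or EV-to-ARR) multiple."""
    return ev_b / denominator_b

trailing = ev_multiple(MARKET_CAP_B, FY24_REVENUE_B)
forward_high = ev_multiple(MARKET_CAP_B, ARR_GUIDE_LOW_B)   # lower ARR -> higher multiple
forward_low = ev_multiple(MARKET_CAP_B, ARR_GUIDE_HIGH_B)   # higher ARR -> lower multiple

print(f"Trailing EV/Revenue: {trailing:.1f}x")                     # ~27.7x, i.e. 'in excess of 27x'
print(f"Forward EV/ARR: {forward_low:.1f}x-{forward_high:.1f}x")   # roughly the 3.2x-4.3x range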

A simple DCF with $1 billion 2025 ARR, 35 percent long-term gross margins, and a 25 percent FCF margin in 2027 (discounted at 12 percent) yields an intrinsic value range between $3.5 billion and $4.2 billion. Given its cash position and execution potential, Nebius trades at a moderate premium to intrinsic value, appropriate for a platform with high optionality. Upside to a $6–7 billion valuation in 2026 is feasible if Blackwell demand ramps in line with expectations and Nebius captures just 2 percent of global AI infrastructure spend.
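The article does not spell out the revenue path or terminal assumptions behind its $3.5 to $4.2 billion range, so the sketch below is only one plausible way to arrive at a similar figure. The 2026 and 2027 revenue ramp, the interim FCF margins, and the terminal exit-multiple range are assumptions added here purely for illustration; only the $1 billion 2025 ARR, the 25 percent 2027 FCF margin, and the 12 percent discount rate come from the text.

```python
# Minimal DCF sketch. Inputs marked [assumed] are illustrative and do not
# come from the article; the 2025 revenue, 2027 FCF margin, and discount
# rate mirror the assumptions stated in the text.

DISCOUNT_RATE = 0.12

revenues_b = [1.00, 1.50, 2.00]        # 2025 ARR per guidance; 2026-27 [assumed]
fcf_margins = [-0.20, 0.05, 0.25]      # 25% FCF margin by 2027; earlier years [assumed]
fcf_b = [r * m for r, m in zip(revenues_b, fcf_margins)]

# Present value of explicit-period free cash flows (2025-2027).
pv_fcf = sum(f / (1 + DISCOUNT_RATE) ** (t + 1) for t, f in enumerate(fcf_b))

def enterprise_value(exit_multiple: float) -> float:
    """PV of explicit FCF plus a terminal value of exit_multiple x 2027 FCF."""
    terminal = fcf_b[-1] * exit_multiple
    return pv_fcf + terminal / (1 + DISCOUNT_RATE) ** len(fcf_b)

# A 9.5x-11x exit multiple on 2027 FCF [assumed] lands near the article's range.
low, high = enterprise_value(9.5), enterprise_value(11.0)
print(f"Implied intrinsic value: ${low:.1f}B-${high:.1f}B")   # roughly $3.6B-$4.2B
```

Under these assumptions the output falls close to the article's quoted range; different growth paths or terminal assumptions would shift it meaningfully, which is the point of treating the valuation as a scenario rather than a precise estimate.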

However, investors have to view Nebius through a VC-type lens: a bet on infrastructure scale, network effects in AI workloads, and intelligent capital deployment. Dilution risk is partly offset by the treasury from the Yandex sale, which can be used for incentive alignment or secondary capital raising.

Execution Risks and Regulatory Overhangs

The primary risks are customer acquisition bottlenecks, delayed data center buildouts, GPU supply constraints, and significant operating losses. While March ARR is ahead of schedule, any disruption to onboarding or workload migration would put 2025 guidance under pressure. Additionally, Nebius' dependence on NVIDIA (for both hardware and software optimizations) introduces vendor concentration risk, mitigated only to the extent of their partnership strategy.

The company also carries geopolitical overhangs. Although fully divested from its Russian operations, it faces residual reputational risks and regulatory scrutiny. Nebius recorded $180.9 million in contingent tax liabilities in Q4 related to share repurchases under Dutch tax law. Share-based compensation could also dilute shareholders over the long term if the share price is slow to recover. At the operational level, growing a 400-person global cloud business entails coordination risk across R&D centers in Amsterdam, Tel Aviv, and New York. Execution quality in site management, latency, and customer success will decide whether Nebius can viably compete with hyperscalers.


Source: Morgan Stanley

Conclusion: A Purposeful Gamble on the Infrastructure Railroads of AI

Nebius sits at the intersection of compute commoditization and purpose-built AI workloads. Its vertically integrated approach to building AI-native infrastructure positions it well in a high-growth niche. But the journey to hyperscaler relevance is paved with cash burn, dilution, and execution risk. Investors who expect exponential growth in AI usage and value full-stack platform ownership get asymmetric upside, provided Nebius continues to hit milestones and convert pipeline into recurring revenue.

Disclaimer: The content of this article solely represents the author's personal opinions and does not reflect the official stance of Tradingkey. It should not be considered as investment advice. The article is intended for reference purposes only, and readers should not base any investment decisions solely on its content. Tradingkey bears no responsibility for any trading outcomes resulting from reliance on this article. Furthermore, Tradingkey cannot guarantee the accuracy of the article's content. Before making any investment decisions, it is advisable to consult an independent financial advisor to fully understand the associated risks.
