We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year-on-year and above our outlook of $11 billion.
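As a quick sanity check on those growth rates (a back-of-the-envelope sketch using only the figures quoted above, not company-reported base-period numbers), the implied prior-quarter and year-ago revenue can be backed out like this:

```python
# Back out the implied base-period revenue from the stated growth rates.
# Illustrative arithmetic only; exact prior-period figures come from
# NVIDIA's CFO commentary, not from this thread.
q2_revenue = 13.51  # $ billions, reported Q2 revenue

implied_prior_quarter = q2_revenue / (1 + 0.88)  # up 88% sequentially
implied_year_ago_q2   = q2_revenue / (1 + 1.01)  # up 101% year-on-year

print(f"Implied prior-quarter revenue: ~${implied_prior_quarter:.2f}B")  # ~$7.2B
print(f"Implied year-ago Q2 revenue:   ~${implied_year_ago_q2:.2f}B")    # ~$6.7B
```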
Data Center compute revenue nearly tripled YoY, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models.
Data Center revenue was a record, up 171% YoY and up 141% QoQ, led by cloud service providers and large consumer internet companies. Strong demand for the NVIDIA HGX platform based on our Hopper and Ampere GPU architectures was primarily driven by the development of large language models and generative AI.
Data Center Compute grew 195% YoY and 157% QoQ, largely reflecting the strong ramp of our Hopper-based HGX platform. Networking was up 94% YoY and up 85% QoQ, primarily on strong growth in InfiniBand infrastructure to support our HGX platform.
…our large CSPs are contributing a little bit more than 50% of our revenue within Q2. And the next largest category will be our consumer Internet companies. And then the last piece of that will be our enterprise and high-performance computing.
Gaming revenue was up 22% YoY and up 11% QoQ, primarily reflecting demand for our GeForce RTX 40 Series GPUs based on the NVIDIA Ada Lovelace architecture following normalization of channel inventory levels.
We believe global end demand has returned to growth after last year's slowdown. We have a large upgrade opportunity ahead of us… Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops.
Professional Visualization revenue was down 24% YoY and up 28% QoQ. The YoY decrease primarily reflects lower sell-in to partners following normalization of channel inventory levels. The sequential increase was primarily due to stronger enterprise workstation demand and the ramp of NVIDIA RTX products based on the Ada Lovelace Architecture.
Automotive revenue was up 15% YoY and down 15% QoQ. The YoY increase was primarily driven by sales of self-driving platforms. The sequential decrease primarily reflects lower overall auto demand, particularly in China.
we do not anticipate that additional export restrictions on our Data Center GPUs, if adopted, would have an immediate material impact to our financial results. However, over the long term, restrictions prohibiting the sale of our Data Center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the U.S. industry to compete and lead in one of the world's largest markets.
Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud as well as a growing number of GPU cloud providers are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI.
…brute-forcing general purpose computing at scale is no longer the best way to go forward. It's too energy costly, it's too expensive, and the performance of the applications is too slow.
Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing.
There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking, has been built up over the past decade.
With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search and summarization, right from the Snowflake Data Cloud. Virtually every industry can benefit from generative AI.
We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process such as packaging. We expect supply to increase each quarter through next year. By geography, data center growth was strongest in the U.S. as customers direct their capital investments to AI and accelerated computing.
We are seeing some of the earliest applications of generative AI in marketing, media and entertainment.
WPP, the world's largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation.
Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.
AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and with Accenture consulting and deployment services.
We also announced new NVIDIA AI Enterprise-ready servers featuring the new NVIDIA L40S GPU built for the industry-standard data center server ecosystem and the BlueField-3 DPU data center infrastructure processor.
L40S' focus is to be able to fine-tune models, fine-tune pretrained models, and it'll do that incredibly well. It has a Transformer Engine. It's got a lot of performance. You can get multiple GPUs in a server. It's designed for hyperscale scale-out, meaning it's easy to install L40S servers into the world's hyperscale data centers. It comes in a standard rack, standard server, and everything about it is standard.
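To make the fine-tuning use case concrete, here is a minimal, generic sketch of fine-tuning a pretrained model using the open-source Hugging Face Transformers Trainer. The checkpoint and dataset names are common public examples chosen purely for illustration (they were not mentioned on the call), and nothing in the snippet is specific to the L40S; it simply shows the kind of workload being described.

```python
# Minimal fine-tuning sketch (Hugging Face Transformers + Datasets).
# Assumes: pip install transformers datasets, and a CUDA GPU for fp16.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # illustrative pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")  # illustrative labeled text dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetune-out",
    per_device_train_batch_size=16,  # the Trainer scales across all GPUs in the server
    num_train_epochs=1,
    fp16=True,                       # mixed precision on the GPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=0).select(range(2000)),  # small subset
)
trainer.train()
```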
A new computing era has begun. Companies worldwide are transitioning from general-purpose to accelerated computing and generative AI.
NVIDIA GPUs connected by our Mellanox networking and switch technologies and running our CUDA AI software stack make up the computing infrastructure of generative AI.
And it's taken us 2 decades to get here. But what I would characterize as probably our -- the elements of our company, if you will, are several. I would say number 1 is architecture. The flexibility, the versatility and the performance of our architecture makes it possible for us to do all the things…from data processing to training to inference, for preprocessing of the data before you do the inference to the post processing of the data, tokenizing of languages so that you could then train with it.
What makes NVIDIA special are: one, architecture. NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision and giant recommenders to vector databases. The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency.
Two, installed base. NVIDIA has hundreds of millions of CUDA-compatible GPUs worldwide. Developers need a large installed base to reach end users and grow their business. NVIDIA is the developer's preferred platform. More developers create more applications that make NVIDIA more valuable for customers.
Three, reach. NVIDIA is in clouds, enterprise data centers, industrial edge, PCs, workstations, instruments and robotics.
Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems. Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI… Only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for AI practitioners.
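The practical reason fabric performance matters is that multi-GPU training spends a lot of time in collective operations such as all-reduce, which synchronize gradients across GPUs every step. The sketch below is an illustrative micro-benchmark (not an NVIDIA tool) that assumes a multi-GPU node with PyTorch and the NCCL backend, launched with torchrun; the measured time of a large all-reduce is exactly the kind of number that improves with a faster interconnect.

```python
# allreduce_bench.py -- time a large all-reduce across GPUs.
# Run with, e.g.: torchrun --nproc_per_node=8 allreduce_bench.py
import os
import time

import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # ~256 MiB of fp32 data per rank, roughly the scale of a gradient bucket.
    x = torch.randn(64 * 1024 * 1024, device="cuda")

    for _ in range(5):              # warm-up iterations
        dist.all_reduce(x)
    torch.cuda.synchronize()

    iters = 20
    start = time.time()
    for _ in range(iters):
        dist.all_reduce(x)
    torch.cuda.synchronize()
    elapsed = (time.time() - start) / iters

    size_gb = x.numel() * x.element_size() / 1e9
    if dist.get_rank() == 0:
        print(f"all-reduce of {size_gb:.2f} GB: {elapsed * 1e3:.1f} ms "
              f"(~{size_gb / elapsed:.1f} GB/s per iteration)")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```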
The world has something along the lines of about $1 trillion worth of data centers installed, in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We're seeing 2 simultaneous platform shifts at the same time. One is accelerated computing.
…enabled by accelerated computing, generative AI came along. And this incredible application now gives everyone 2 reasons to transition to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of accelerated computing. It's about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year.
You're seeing the data centers around the world are taking that capital spend and focusing it on the 2 most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing.
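Taking those round numbers at face value (illustrative arithmetic from the quote above, not company guidance): roughly $1 trillion of installed data centers refreshed at roughly $0.25 trillion of capital spend per year implies the installed base turns over on the order of every four years, which is the window over which that spend could plausibly be redirected toward accelerated computing.

```python
# Illustrative refresh-cycle arithmetic from the round figures quoted above.
installed_base_trillions = 1.00   # ~$1T of data centers installed worldwide
annual_capex_trillions   = 0.25   # ~$0.25T of data center capital spend per year

refresh_years = installed_base_trillions / annual_capex_trillions
print(f"Implied refresh cycle: ~{refresh_years:.0f} years")  # ~4 years
```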
We have excellent visibility through the year and into next year. And we're already planning the next-generation infrastructure with the leading CSPs and data center builders. If you think about the demand, the world is transitioning from general-purpose computing to accelerated computing. That's the easiest way to think about the demand.
…outlook for Q3 of fiscal 2024. Demand for our Data Center platform where AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity.
It is an exciting time for NVIDIA, our customers, partners and the entire ecosystem to drive this generational shift in computing.
Final Takeaways on NVIDIA $NVDA:
NVIDIA is at the pivot of the shift from general purpose computing to accelerated computing and gen AI, with significantly stronger use cases than the previous Crypto/DeFi/NFT hype.
Strong visibility and broad-based demand reflect the start of a potential longer-term shift, one that NVIDIA is well positioned for thanks to the strong architecture moat of CUDA, InfiniBand and Mellanox that it has taken decades to build.
The seemingly "expensive" valuation looks less expensive now, and increasingly more shareholder returns will come via share buybacks (almost Apple-like). We continue to own NVIDIA.