Now for cloud native computing - scale out with containers. Benefit from density and energy efficiency. Enter #Bergamo
128 cores per socket, 82 B transistors
Uses the same IO die, but new 16-core compute dies (CCDs).
Optimised for density, not peak performance, but exactly the same ISA for software compatibility. The core is 35% smaller, physically optimised. Same socket and same IO support. Same platform support as Genoa.
Up to 2.6x vs comp in cloud native workloads
Double the density, double the efficiency, vs comp
Shipping now in volume to hyperscale customers. Meta to the stage
Enablement through an open source platform via OCP. Meta is using AMD in AI
Meta can rely on AMD to deliver, time and time again
Deploying #Bergamo internally, 2.5x over Milan, substantial TCO savings. Easy decision. Partnered with AMD on design optimisations at the silicon level too
Citadel talking about workloads requiring 100k cores and 100PB databases. Moved to latest gen AMD for 35% speedup.
1 million cores *. Forrest says very few workloads require that much. So efficiency and performance matters.
Density required to be as close to the financial market as possible. Latency is key, so Xilinx is also in the pipeline.
Here we go. Using Alveo.
Solarflare NICs for millions of trades a day. Architecture needs to be optimized together.
Same thinking drove the acquisition of #Pensando. Network complexity has exploded. Managing these resources is more complicated, especially with security.
Removing the CPU tax of infrastructure processing, before you even get to load balancing
#Pensando P4 DPU. Forrest calls it the best networking architecture team in the industry
Freeing the CPU from its overhead
I'm in the wrong seat. Can't see any of the hardware images. Can see all the text though
SmartNICs already in the cloud. Available as VMware vSphere solutions.
New P4 DPU offload in a switch. #Pensando silicon alongside the switching silicon.
HPE Aruba switch
Enables end to end security solutions
Just says this was the first half of the presentation. So now AI
Aiaiaiaiaiaiai
Lisa back to the stage. AI is the next big megatrend
AMD wants to accelerate AI solutions at scale. AMD has #AI hardware
AMD already has lots of AI partners
$150B TAM by 2027
That includes CPU and GPU
Better photo. AMD going down the HPCxAI route.
Victor Peng to the stage!
The journey of AMD's AI stack. Proven at HPC scale
AI software platforms. Edge requires Vitis
Reminder : it's "Rock-'em". Not 'Rock-emm'.
Lots of ROCm is open source
Running 100k+ validation tests nightly on latest AI configs
New @AMD and @huggingface partnership being announced today. Instinct, Radeon, Ryzen, Versal. AMD hardware in HF regression testing. Native optimization for AMD platforms.
We're talking training and inference. AMD hardware has advantages.
$INTC margins crater for 2024 Q3.
DCAI/NEX up, rest down 🧵
vs 23Q3
💵 Revenue $13.3b, down 6% (guide: $13b)
📊 Gross Margin 15% GAAP, down 27.5pp (guide: 34.5%)
💰 Net Income -$16.6b, down from $0.3b
💪 EPS -$3.88, down from $0.07
➡️ Foundry $4.35b, down 8% from $4.83b
➡️ DCAI $3.35b, up 9% from $3.08b
➡️ CCG $7.33b, down 7% from $7.87b
➡️ NEX $1.51b, up 4% from $1.45b
➡️ Altera $0.41b, down 44% from $0.74b
➡️ MBLY $0.48b, down 8% from $0.53b
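For a quick sanity check, the YoY percentages above can be recomputed from the rounded dollar figures (a small sketch; minor drift vs. Intel's own rounding of the underlying exact figures is expected):

```python
def yoy_pct(current: float, prior: float) -> float:
    """Year-over-year change in percent."""
    return (current - prior) / prior * 100

# (current quarter, year-ago quarter) revenue in $b, from the thread above
segments = {
    "DCAI": (3.35, 3.08),
    "CCG": (7.33, 7.87),
    "NEX": (1.51, 1.45),
}

for name, (now, then) in segments.items():
    print(f"{name}: {yoy_pct(now, then):+.1f}%")  # DCAI +8.8, CCG -6.9, NEX +4.1
```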
Employee count, as of Sep 28, is at 124.1k, down 1,200 from last quarter. Not quite the announced 15k cut yet.
$AMD hits a record quarter for 2024 Q3. Their best ever. 🧵
💵 Revenue $6.819b, up 18% YoY
📊 Gross Margin 54% non-GAAP / 50% GAAP, up 3pp YoY
💰 Op Income $724m, up from $224m
💪 EPS $0.47, up 161%
Outlook:
💵 Revenue $7.5b, ±$300m
📊 GM 54%
Datacenter - EPYC, Instinct
➡️ Revenue $3.549b, up 122% YoY from $1.598b
➡️ Operating Income $1.041b, up from $0.306b
➡️ Operating Margin 29%, up from 19% YoY
Launched Turin, MI325X. Strong cloud pickup on MI300X, announced the acquisition of ZT Systems.
Client - Ryzen, Ryzen AI
➡️ Revenue $1.881b, up 29% YoY from $1.453b
➡️ Operating Income $276m, up from $140m YoY
➡️ Operating Margin 15%, up from 10% YoY
New Ryzen AI 300 mobile devices, ramped Ryzen 9000 desktop, X3D due 7th Nov.
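The operating margins quoted for both segments can be cross-checked from the dollar figures in the thread (a quick sketch; all figures in $b as reported above):

```python
def op_margin_pct(op_income: float, revenue: float) -> float:
    """Operating margin in percent."""
    return op_income / revenue * 100

# Datacenter: $1.041b op income on $3.549b revenue
print(f"Datacenter: {op_margin_pct(1.041, 3.549):.0f}%")  # -> 29%
# Client: $276m op income on $1.881b revenue
print(f"Client: {op_margin_pct(0.276, 1.881):.0f}%")  # -> 15%
```

Both round to the 29% and 15% margins quoted in the thread.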