Now for cloud-native computing: scale out with containers, and benefit from density and energy efficiency. Enter #Bergamo
128 cores per socket, 82B transistors
Uses the same I/O die, but new 16-core compute dies.
Optimised for density rather than raw performance, but exactly the same ISA for software compatibility. The core is 35% smaller, physically optimized; same socket and same I/O support. Same platform support as Genoa
Up to 2.6x vs the competition in cloud-native workloads
Double the density and double the efficiency vs the competition
Shipping now in volume to hyperscale customers. Meta to the stage
Enablement through an open-source platform via OCP. Meta is using AMD in AI
Meta can rely on AMD to deliver, time and time again
Deploying #Bergamo internally: 2.5x over Milan, substantial TCO gains. Easy decision. Also partnered with AMD on design optimisations at the silicon level
Citadel talking about workloads requiring 100k cores and 100PB databases. Moved to the latest-gen AMD for a 35% speedup.
1 million cores *. Forrest says very few workloads require that much, so efficiency and performance matter.
Density is required to be as close to the financial markets as possible. Latency is key, so Xilinx is also in the pipeline.
Here we go. Using Alveo.
Solarflare NICs handling millions of trades a day. The architecture needs to be optimized together.
Same thinking drove the acquisition of #Pensando. Network complexity has exploded. Managing these resources is more complicated, especially with security.
Removing the CPU tax imposed by infrastructure processing, before you even get to load balancing
#Pensando P4 DPU. Forrest calls it the best networking architecture team in the industry
Freeing the CPU from its overhead
I'm in the wrong seat. Can't see any of the hardware images. Can see all the text though
SmartNICs already in the cloud. Available as VMware vSphere solutions.
New P4 DPU offload in a switch. #Pensando silicon alongside the switching silicon.
HPE Aruba switch
Enables end-to-end security solutions
Just says this was the first half of the presentation. So now: AI
Aiaiaiaiaiaiai
Lisa back to the stage. AI is the next big megatrend
AMD wants to accelerate AI solutions at scale. AMD has #AI hardware
AMD already has lots of AI partners
$150B TAM by 2027
That includes CPU and GPU
Better photo. AMD going down the HPCxAI route.
Victor Peng to the stage!
The journey of AMD's AI stack. Proven at HPC scale
AI software platforms. Edge requires Vitis
Reminder: it's "Rock-'em", not "Rock-emm".
Lots of ROCm is open source
Running 100k+ validation tests nightly on latest AI configs
New @AMD and @huggingface partnership being announced today: Instinct, Radeon, Ryzen, Versal. AMD hardware in HF regression testing. Native optimization for AMD platforms.
We're talking training and inference. AMD hardware has advantages.
➡️ Data Center $1.3b
- down 11% YoY
- up 2% QoQ
➡️ Client $997m
- down 54% YoY
- up 35% QoQ
➡️ Gaming $1.6b
- down 4% YoY
- down 10% QoQ
➡️ Embedded $1.5b
- up 16% YoY
- down 7% QoQ
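The YoY/QoQ deltas above can be sanity-checked by backing out the implied prior-period revenue. A minimal sketch — the helper name and rounding are mine; only the figures come from the post:

```python
def prior_revenue(current_m: float, pct_change: float) -> float:
    """Back out the prior-period revenue implied by a reported % change.

    current = prior * (1 + pct_change), so prior = current / (1 + pct_change).
    Values are in millions of dollars; pct_change is a fraction (e.g. -0.54).
    """
    return current_m / (1 + pct_change)

# Client: $997m, down 54% YoY -> prior year was roughly $2.17b
client_prior = prior_revenue(997, -0.54)

# Embedded: $1.5b, up 16% YoY -> prior year was roughly $1.29b
embedded_prior = prior_revenue(1500, 0.16)

print(round(client_prior), round(embedded_prior))
```

Useful for eyeballing how sharp the Client decline really was against the same quarter last year.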
Overall strong results vs expectations: an operating loss of $20m, but a net income gain of $27m. A mix of weakness in some markets and good strength in others.
Also, $135m to expand adaptive computing research operations in Ireland.
So Data Center:
➡️ Revenue $1.3b
- lower 3rd Gen EPYC sales
-- Enterprise demand was soft
-- Cloud inventory was elevated
- But revenue up 2% QoQ
-- 4th Gen EPYC CPU sales doubled
-- offset a decline in adaptive SoC data center revenue
- MI300A and MI300X are sampling to HPC, cloud, and AI
At #ISC23, @intel's Jeff McVeigh going through AI-accelerated #hpc. Either AI helping to shrink large problems, or AI hardware being used for reduced-precision HPC.
If you hadn't seen it, Intel's AI roadmap. Falcon Shores is the combination of GPU + AI.