Now for cloud native computing: scale out with containers, benefiting from density and energy efficiency. Enter #Bergamo
128 cores per socket, 82 B transistors
Uses the same IO die, but new 16-core core dies.
Optimized for density rather than peak performance, but exactly the same ISA for software compatibility. The core is 35% smaller and physically optimized; same socket and same IO support. Same platform support as Genoa
Up to 2.6x vs the competition in cloud native workloads
Double the density, double the efficiency, vs the competition
Shipping now in volume to hyperscale customers. Meta to the stage
Enablement through an open source platform via OCP. Meta is using AMD in AI
Meta can rely on AMD to deliver, time and time again
Deploying #Bergamo internally: 2.5x over Milan, with substantial TCO savings. An easy decision. Also partnered with AMD on design optimisations at the silicon level
Citadel talking about workloads requiring 100k cores and 100PB databases. Moved to the latest-gen AMD for a 35% speedup.
1 million cores *. Forrest says very few workloads require that much, so efficiency and performance matter.
Density is required to get as close to the financial markets as possible. Latency is key, so Xilinx is in the pipeline too.
Here we go. Using Alveo.
Solarflare NICs handling millions of trades a day. The architecture needs to be optimized together.
Same thinking drove the acquisition of #Pensando. Network complexity has exploded. Managing these resources is more complicated, especially with security.
Removing the CPU tax of infrastructure processing, before you even get to load balancing
#Pensando P4 DPU. Forrest calls it the best networking architecture team in the industry
Freeing the CPU from its overhead
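The offload idea can be sketched with a toy match-action table, the core abstraction of P4. This Python analogy is mine, not Pensando's actual pipeline, and the table keys and actions are invented for illustration:

```python
# Toy analogy of a P4 match-action table (not real P4): the DPU
# matches packet header fields against table entries and applies
# the matching action in hardware, so the host CPU never spends
# cycles on this infrastructure work.
def make_table(entries, default_action):
    """Build a match-action table as a lookup with a default action."""
    def apply(key):
        return entries.get(key, default_action)
    return apply

# Hypothetical firewall stage keyed on destination port.
firewall = make_table({443: "forward", 22: "forward"}, default_action="drop")

print(firewall(443))   # forward
print(firewall(8080))  # drop
```

A real P4 pipeline chains many such stages (parsing, ACLs, load balancing, telemetry), which is exactly the work being lifted off the host CPU.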
I'm in the wrong seat. Can't see any of the hardware images. Can see all the text though
SmartNICs already in the cloud. Available as VMware vSphere solutions.
New P4 DPU offload in a switch. #Pensando silicon alongside the switching silicon.
HPE Aruba switch
Enables end to end security solutions
Just said that was the first half of the presentation. So now: AI
Aiaiaiaiaiaiai
Lisa back to the stage. AI is the next big megatrend
AMD wants to accelerate AI solutions at scale. AMD has #AI hardware
AMD already has lots of AI partners
$150B TAM by 2027
That includes CPU and GPU
Better photo. AMD going down the HPCxAI route.
Victor Peng to the stage!
The journey of AMD's AI stack. Proven at HPC scale
AI software platforms. Edge requires Vitis
Reminder : it's "Rock-'em". Not 'Rock-emm'.
Lots of ROCm is open source
Running 100k+ validation tests nightly on latest AI configs
New @AMD and @huggingface partnership being announced today: Instinct, Radeon, Ryzen, Versal. AMD hardware in HF regression testing, with native optimization for AMD platforms.
We're talking both training and inference. AMD hardware has advantages.
At #ISC23, @intel's Jeff McVeigh going through AI-accelerated #hpc. Either AI helping reduce large problems, or AI hardware being used for reduced precision in HPC
If you hadn't seen it, Intel's AI roadmap. Falcon Shores is the output of GPU+AI.
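One common pattern behind "AI hardware being used for reduced precision in HPC" is mixed-precision iterative refinement: do the expensive solve in low precision, then cheaply correct the residual in high precision. A minimal NumPy sketch, with float32 standing in for the reduced-precision arithmetic AI hardware is fast at (the matrix and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned system
b = rng.standard_normal(n)

# Expensive solve in low precision.
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

# Cheap refinement: compute the residual in float64, solve for the
# correction in float32, and update the float64 solution.
for _ in range(3):
    r = b - A @ x
    dx = np.linalg.solve(A.astype(np.float32), r.astype(np.float32))
    x += dx.astype(np.float64)

residual = np.linalg.norm(b - A @ x)
print(residual)
```

The payoff is that most of the FLOPs happen in the fast low-precision solve, while the refinement loop recovers near-double-precision accuracy.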
GAAP YoY
Revenue $5.353b, down 9%
Gross Profit $2.359b, down 16%
Gross Margin 44%, down 4pts
OpEx $2.514b, up 29%
Op Income $145m loss, down 115%
Op Margin -3%, down 19pts
Non-GAAP YoY
Revenue $5.353b, down 9%
Gross Profit $2.675b, down 14%
Gross Margin 50%, down 3pts
OpEx $1.587b, up 18%
Op Income $1.098b, down 40%
Op Margin 21%, down 10pts
EPS $0.60, down 47%
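As a sanity check, the margin percentages above follow directly from the dollar figures, rounded to the nearest point:

```python
# Recompute the reported margins from the reported dollar figures.
revenue = 5.353  # $B; GAAP and non-GAAP revenue are the same

def margin_pct(amount_b):
    """Margin as a whole percentage of revenue."""
    return round(100 * amount_b / revenue)

print(margin_pct(2.359))   # GAAP gross margin: 44
print(margin_pct(-0.145))  # GAAP op margin: -3
print(margin_pct(2.675))   # non-GAAP gross margin: 50
print(margin_pct(1.098))   # non-GAAP op margin: 21
```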
Quarterly:
➡️ Revenue $5.6 billion, up 16% YoY
➡️ Gross Margin 43%, down 7pts YoY
➡️ Operating loss $149m, down $1.3b YoY
➡️ Operating margin -3%, down 28pts YoY
➡️ Net Income $21m, down 98% YoY
Full year:
➡️ Revenue $23.6b, up 44%
➡️ Gross Margin 45%, down 3pts
➡️ Op Expenses $9.4b, up 120%
➡️ Op Income $1.2b, down 65%
➡️ EPS $0.84, down 67%
➡️ Growth driven by embedded and datacenter, offset by lower client and gaming.
➡️ 43% GM due to amortization of Xilinx acquisition assets; non-GAAP GM was 53%, up 1pt YoY, due to higher embedded/DC mix
➡️ Operating loss also due to Xilinx amortization