Here for the @AMD DC event. Starts at 10am PT; follow this thread along with the stream!🧡

I expect to see @LisaSu, Mark Papermaster, Forrest Norrod, and Victor Peng on stage talking about #AI, #Bergamo, and #MI300

youtube.com/live/l3pe_qx95…
I'm with these goobers!
@dylan522p @PaulyAlcorn @Patrick1Kennedy
Lisa on stage
Optimizing for different workloads in the DC, including AI
Focused on building industry standard CPUs. Now the standard in the cloud. 640 EPYC instances available in the cloud today
Genoa in November, 96 cores, PCIe Gen 5, CXL
Enterprise leadership in industry standard workloads
Power efficiency is #1 on industry standard efficiency tests
Best CPU for AI on the market in TPCx-AI vs comp
Details on these tests are likely in the back of the slide deck
AWS to the stage
Happy Lisa
AWS Nitro + 4th Gen EPYC. I think this is a #Bergamo comment
New M7a instances. Best price/perf x86 EC2 instance. Video transcoding, simulation, BF16
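As an aside (my illustration, not from the stream): the BF16 mention implies AVX-512 BF16 on these Zen 4 based instances, which you can sanity-check from the CPU flags on Linux. The file path and flag name below are standard Linux conventions, not AMD-provided code.

```python
# Hedged sketch: confirm the AVX-512 BF16 flag on a Zen 4 based instance.
# /proc/cpuinfo and the "avx512_bf16" flag name are Linux conventions.
def has_bf16() -> bool:
    with open("/proc/cpuinfo") as f:
        return "avx512_bf16" in f.read()

if __name__ == "__main__":
    print("BF16 capable:", has_bf16())
```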
AMD uses these instances internally for data analytics workloads
Expanding to EDA too
New #Oracle instances in July for #Genoa
But future workloads need optimised infra
Now for cloud native computing - scale out with containers. Benefit from density and energy efficiency. Enter #Bergamo
128 cores per socket, 82B transistors
Uses the same IO die, but new 16-core core dies.
Optimised for density rather than peak performance, but the same ISA for software compatibility. The core is 35% smaller thanks to physical optimization, with the same socket and same IO support. Same platform support as Genoa
Up to 2.6x vs comp in cloud native workloads
Double density, double efficiency, vs comp
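A quick illustration of the "same ISA" point (mine, not AMD's): a feature probe reports the same architecture on Bergamo as on Genoa, so existing binaries run unmodified; only the core count and cache per core change.

```python
# Minimal sketch: on Bergamo (Zen 4c) this reports the same architecture
# as Genoa (Zen 4); only the thread count differs (up to 256 threads on a
# 128-core socket with SMT).
import os
import platform

print("arch:", platform.machine())     # x86_64 on both parts
print("logical CPUs:", os.cpu_count())
```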
Shipping now in volume to hyperscale customers. Meta to the stage
Enablement through an open source platform via OCP. Meta is using AMD in AI
Meta can rely on AMD to deliver, time and time again
Deploying #Bergamo internally, 2.5x over Milan, substantial TCO benefit. Easy decision. Partnered with AMD to provide design optimisations at a silicon level too
Time for #Genoa-X. Dan McNamara to the stage
This is all technical computing.
2nd gen V-Cache for over 1GB of L3 per socket
Four new SKUs, 16-96 cores, available today
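Back-of-envelope on the "over 1GB of L3" claim (my arithmetic, assuming the Milan-X style stacking of 64MB of V-Cache on top of 32MB of base L3 per CCD, with 12 CCDs on the 96-core part):

```python
# Rough check on the ">1GB L3 per socket" figure for the top Genoa-X SKU.
# Per-CCD numbers are assumed from the Milan-X precedent, not the slides.
base_l3_mb = 32    # L3 on the CCD itself
vcache_mb  = 64    # stacked 3D V-Cache per CCD
ccds       = 12    # 96-core part

total_mb = ccds * (base_l3_mb + vcache_mb)
print(f"{total_mb} MB of L3 per socket")   # 1152 MB, just over 1GB
```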
Xeon 8490H vs Genoa-X
These slides went fast. Azure on stage to talk about HPC
Ansys Fluent 3.6x over first gen EPYC using Milan-X
Memory optimized HX instances with Genoa-X.
Customer adoption, Petronas (tie-in with Mercedes F1?). Looks like oil and gas is getting back in the limelight as an important vertical
GA on Azure for #Genoa-X
Now for #Siena. Coming later this year
Citadel talking about workloads requiring 100k cores and 100PB databases. Moved to latest gen AMD for a 35% speedup.
1 million cores*. Forrest says very few workloads require that much, so efficiency and performance matter.
Density required to be as close to the financial market as possible. Latency is key, so Xilinx is also in the pipeline.
Here we go. Using Alveo.
Solarflare NICs for millions of trades a day. The architecture needs to be optimized together.
Same thinking drove the acquisition of #Pensando. Network complexity has exploded. Managing these resources is more complicated, especially with security.
Removing the CPU tax due to the infrastructure, before you even get to load balancing
#Pensando P4 DPU. Forrest calls it the best networking architecture team in the industry
Freeing the CPU from its overhead
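To make the "CPU tax" argument concrete (illustrative numbers only, not AMD's): if infrastructure services eat a fraction of host cores, a DPU offload hands those cores back to tenant workloads.

```python
# Illustrative model of the DPU offload argument. The 25% infrastructure
# fraction is a placeholder, not a figure from the presentation.
def cores_reclaimed(total_cores: int, infra_fraction: float) -> float:
    return total_cores * infra_fraction

print(cores_reclaimed(128, 0.25), "of 128 cores returned to workloads")
```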
I'm in the wrong seat. Can't see any of the hardware images. Can see all the text though
SmartNICs already in the cloud. Available as VMware vSphere solutions.
New P4 DPU offload in a switch. #Pensando silicon alongside the switching silicon.
HPE Aruba switch
Enables end-to-end security solutions
Just says this was the first half of the presentation. So now AI
Aiaiaiaiaiaiai
Lisa back to the stage. AI is the next big megatrend
AMD wants to accelerate AI solutions at scale. AMD has #AI hardware
AMD already has lots of AI partners
$150B TAM by 2027
That includes CPU and GPU
Better photo. AMD going down the HPCxAI route.
Victor Peng to the stage!
The journey of AMD's AI stack. Proven at HPC scale
AI software platforms. Edge requires Vitis
Reminder: it's "Rock-'em", not "Rock-emm".
Lots of ROCm is open source
Running 100k+ validation tests nightly on latest AI configs
PyTorch founder to the stage
I'm excited about #MI300 - PyTorch founder
Day 0 support for #ROCm on PyTorch 2.0
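What that support looks like from the user side (my sketch, using the standard PyTorch API): on a ROCm build, torch.version.hip is populated and the usual "cuda" device strings target AMD GPUs, so model code doesn't change.

```python
# Hedged sketch of "day 0" ROCm support: nothing in user code changes.
import torch

print("PyTorch:", torch.__version__)
print("HIP/ROCm build:", torch.version.hip)   # None on CPU/CUDA builds
if torch.cuda.is_available():                 # also True on ROCm-backed GPUs
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())
```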
Not every model is an LLM
@huggingface CEO on the stage.
New @AMD and @huggingface partnership being announced today. Instinct, Radeon, Ryzen, Versal. AMD hardware in HF regression testing. Native optimization for AMD platforms.
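For users, the partnership should mean the standard transformers flow just works on AMD-backed PyTorch; the model name below is arbitrary and mine, not one AMD or Hugging Face named.

```python
# Minimal sketch: the stock transformers API, running on whichever GPU
# backend PyTorch exposes (ROCm included). Model choice is illustrative.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2", device=0)  # GPU 0
print(generate("AMD in the data center", max_new_tokens=20)[0]["generated_text"])
```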
We're talking training and inference. AMD hardware has advantages.
Still waiting for #MI300!
Lisa back to the stage.
At the center is the GPU
New compute engine on CDNA3
Now sampling #MI300A
13 chiplets
Can replace the CPU chiplets for a GPU-only version
So you replace 3 CPU chiplets with 2 GPU chiplets, add in more HBM for a total of 192GB of HBM3. That's 5.2 TB/sec of mem bandwidth.

153 BILLION TRANSISTORS.
That's #MI300X from @AMD
H100 only has 80GB. That means AMD has better scaling, better TCO, and reduced overhead.
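Rough math behind the scaling claim (mine; weights-only at 16-bit precision, ignoring activations and KV cache, so an optimistic lower bound on GPU count):

```python
import math

# How many GPUs just to hold the weights of an N-billion-parameter model
# at 2 bytes/parameter. Ignores activations, KV cache and optimizer state.
def gpus_for_weights(params_billion: float, hbm_gb: float) -> int:
    return math.ceil(params_billion * 2 / hbm_gb)

for name, hbm in [("MI300X (192GB)", 192), ("80GB accelerator", 80)]:
    print(name, "for a 70B model:", gpus_for_weights(70, hbm))
```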
8x #MI300X in OCP infrastructure for open standards. Accelerates TTM and decreases dev costs. Easy to implement. $AMD
#MI300X for LLMs
#MI300A available today.
#MI300X available Q3
Ramping in Q4
That's a wrap for today. More sessions later, not sure about embargoes, but will say what I can when I can!

