1/n #HotChips2022 @hotchipsorg talks:

πŸ”Ό Day 1 Keynote, @Intel
@PGelsinger: Semiconductors Run The World

πŸ”Ό Day 2 Keynote, @Tesla
Ganesh Venkataramanan: Beyond Compute - Enabling AI through System Integration
2/n GPUs and HPC

β–Ά @NVIDIA Hopper, Jack Choquette
β–Ά @AMD Instinct MI200, Alan Smith
β–Ά @Intel Ponte Vecchio, Hong Jiang
β–Ά Biren BR100 GPGPU, Lingjie Xu
3/n Integration Technologies

β–Ά @LightmatterCo Photonic Wafer Scale Substrates, Nicholas Harris @theanalognick
β–Ά @IntelTech FPGA for RF, Tim Hoang
β–Ά @RANOVUS Optical ASICs, Christoph Schulien
β–Ά @SamsungDSGlobal CXL Memory Expander, Sung Joo Park
4/n Academia

β–Ά @Yale Low Power Fabric for Brain-Computer Interfaces, Abhishek Bhattacharjee
β–Ά @ETH_en SoC for Visual Proc in Nano-UAVs, Alfio Di Mauro
β–Ά @Stanford Reconfig Array SoC for Dense Linear Algebra, Kathleen Feng
β–Ά @Arm Morello, Richard Grisenthwaite
5/n Machine Learning

β–Ά @GroqInc Tensor Streaming MP, Dennis Abts
β–Ά @UntetherAI 1456 RISC-V Core At-Memory Inference, Robert Beachler
β–Ά @Tesla DOJO Microarchitecture, Emil Talpes
β–Ά @Tesla DOJO System Scaling, Bill Chang
β–Ά @CerebrasSystems HW/SW Co-Design of WSE, Sean Lie
6/n Network and Switches

β–Ά @AMD @XilinxInc 400G SmartNIC SoC, Jaideep Dastidar
β–Ά @JuniperNetworks Express 5 28.8 Tbps ASIC, Chang-Hong Wu
β–Ά @NVIDIA NVLink, Alexander Ishii
7/n ADAS and Grace

β–Ά @NVIDIA Orin, Michael Ditty
β–Ά @NODAR 3D Vision, Leaf Jiang
β–Ά @NVIDIA Grace, Jonathon Evans
8/n Mobile & Edge

β–Ά @AMD Ryzen 6000, Jim Gibney
β–Ά @Intel Meteor Lake and Arrow Lake, Wilfred Gomes
β–Ά @Mediatek Dimensity 9000, Hugh Mair
β–Ά @Intel Xeon D 2700/1700, Praveen Mosur
9/n The Day 0 Tutorials

β–Ά CXL Overview
β–Ά CXL 2/3 Coherency, Fabric
β–Ά MLIR (Multi-Level Intermediate Representation) from Google, Nod.AI, Arm, SiFive, Microsoft
10/ That's all the talks as of 24th May. Feels like a real chip conference this year, covering a lot of areas and not losing too much to ML. Looking forward to insights into Dojo, optical, networking, and the chip deep dives.

#HotChips2022 @hotchipsorg

More from @IanCutress

May 24
So, discussing all-core frequency on Ryzen 7000. We saw the demo with 5.5 GHz peak, and AMD said 5.2-5.5 GHz was common for that game.

We are doing some napkin math about what a proper workload might be. Thread (1/n):
So certain games don't tax the CPU all that much. The code path doesn't spread out, doesn't use many execution units, and it could be a very light workload. The core power requirements might be low, and so frequency can be boosted.
As we see with CPU tests, some tests hammer the core with high IPC (Prime95), others with low IPC (Cinebench).

With the Ryzen 7000, let's work on core power. We'll start with this graph of core power, under a high-IPC workload, for the 7nm 5950X.
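A rough sketch of this kind of napkin math, for illustration only: the package power limit, uncore power, reference per-core power and the power-frequency exponent below are all assumed placeholder numbers, not AMD figures.

```python
# Napkin math: estimate a sustainable all-core clock from a per-core power budget.
# Every number here is an illustrative assumption, not an AMD specification.

package_power_w = 230      # assumed socket power limit
uncore_power_w = 40        # assumed IO die / fabric / memory power
cores = 16

# Power left for the cores themselves, split evenly across them.
per_core_budget_w = (package_power_w - uncore_power_w) / cores

# Assumed reference point from a heavy, high-IPC workload: one core at
# 5.5 GHz drawing ~15 W, with power scaling roughly as f^3 once voltage
# has to climb along with frequency.
ref_freq_ghz, ref_power_w, exponent = 5.5, 15.0, 3.0

est_all_core_ghz = ref_freq_ghz * (per_core_budget_w / ref_power_w) ** (1 / exponent)
print(f"Per-core budget: {per_core_budget_w:.1f} W")
print(f"Estimated sustained all-core clock: ~{est_all_core_ghz:.2f} GHz")
```

Swap in measured per-core power from a graph like the 5950X one and a real socket limit, and the estimate gets a lot less hand-wavy.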
May 24
Dennard Scaling in action, with AMD CPU Frequencies:

β–Ά 1999: 1 GHz (Athlon K7)
β–Ά 2013: 5 GHz (FX-9590)
β–Ά 2022: 5.5 GHz (Zen 4)

β–Ά 1999: 1.0 GHz (Athlon K7)
β–Ά 2003: 2.0 GHz (FX-51)
β–Ά 2006: 3.0 GHz (FX-74)
β–Ά 2011: 4.0 GHz (FX-4170)
β–Ά 2013: 5.0 GHz (FX-9590)
β–Ά 2022: 5.5 GHz (Zen 4)

β–Ά 1996: 0.1 GHz (K5-100)
β–Ά 1997: 0.2 GHz (K6 200)
β–Ά 1998: 0.3 GHz (K6 300)
β–Ά 1998: 0.4 GHz (K6-2 400)
β–Ά 1999: 0.5 GHz (K6-2 500)
β–Ά 1999: 1.0 GHz (Athlon K7)
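A quick sketch that makes the slowdown explicit: compound annual frequency growth per era, computed from the data points listed above.

```python
# Compound annual growth rate of AMD peak clocks per era,
# using the frequency milestones listed above.
eras = [
    ("1996-1999", 1996, 0.1, 1999, 1.0),   # K5-100 to Athlon K7
    ("1999-2013", 1999, 1.0, 2013, 5.0),   # Athlon K7 to FX-9590
    ("2013-2022", 2013, 5.0, 2022, 5.5),   # FX-9590 to Zen 4
]

for label, y0, f0, y1, f1 in eras:
    cagr = (f1 / f0) ** (1 / (y1 - y0)) - 1
    print(f"{label}: {f0} -> {f1} GHz, ~{cagr * 100:.0f}% per year")
```

Roughly 115% per year in the late 90s, ~12% per year through the 2000s, and about 1% per year since 2013.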
May 3
$AMD Q1 2022 Q&A Thread.

Q: Lot going on macro. 54-55% organic growth in 1Q. Puts and Takes? Supply? Server?
A: Strong Q1, lots going on. Strength in Q1 was broad - gain share in server, bring supply online, strong semi and C&G. Softness in PC, but shift mix ASP to premium. Into Q2, lots in play, but managed supply well, work with customers. Xilinx has high demand
Q: FY22 Guidance - expect upside over 31% organic growth? View on DC Capex and PC? AMD is being conservative in PC?
May 3
$AMD Q1 2022, without Xilinx:

Revenue: $5.3b
⬆ 55% YoY
⬆ 10% QoQ
Gross Margin 51%
⬆ 4.8% YoY
⬆ 0.6% QoQ

$AMD inc $XLNX

Revenue: $5.9b
⬆ 71% YoY
⬆ 22% QoQ
Gross Margin 48%
⬆ 1.9% YoY
⬇ 2.4% QoQ
$AMD Q1 2022:

⬆ +109% YoY Cash + equiv ($6.5b)
⬆ +69% YoY Accounts Receivable ($3.7b)
⬆ +47% Inventories ($2.4b)
⬆ +477% Total Debt ($1.8b)
$AMD

Outlook Q2 2022:
Revenue $6.5b +/- $200m
Gross Margin 54%

Outlook FY 2022:
Revenue $26.3b, ⬆ 60% YoY
Gross Margin 54%
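A couple of back-of-envelope checks on the figures above (inputs are rounded, so the outputs are approximate):

```python
# Rough checks on the Q1 2022 revenue figures quoted above.
amd_only_rev_b = 5.3     # AMD excluding Xilinx, $bn
combined_rev_b = 5.9     # AMD including Xilinx, $bn
yoy_growth = 0.55        # +55% YoY for the AMD-only number

implied_q1_2021_rev_b = amd_only_rev_b / (1 + yoy_growth)
implied_xilinx_rev_b = combined_rev_b - amd_only_rev_b

print(f"Implied Q1 2021 revenue: ~${implied_q1_2021_rev_b:.1f}b")
print(f"Implied Xilinx contribution (partial quarter): ~${implied_xilinx_rev_b:.1f}b")
```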
May 3
#VLSI22 pt1:
Intel 4 update:

EUV + FinFET
50nm gate pitch
30nm fin pitch
40nm min metal pitch
16 metal layers
Enhanced Copper at lower layers for lower line resistance
8 VT options (4N+4P)

Claims of 2x area scaling of HP logic library, plus +20% perf at iso-power over Intel 7.
#VLSI22 Thread Part 2:
Also from Intel, Low power 6T SRAM on Intel 4:

Old 6T design:
5.8x power at 23.8 Mb/mm2
Old 8T design:
1.0x power at 13.7 Mb/mm2
New 6T design:
1.03x power at 19.4 Mb/mm2

TL;DR can now offer low power SRAM at better density. No word on latency
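Putting the TL;DR into numbers, a quick sketch comparing each cell against the old 8T baseline, using the power and density figures listed above:

```python
# Relative power and density vs the old 8T cell, from the figures above.
designs = {
    "old 6T": (5.8, 23.8),   # (relative power, density in Mb/mm^2)
    "old 8T": (1.0, 13.7),
    "new 6T": (1.03, 19.4),
}

base_power, base_density = designs["old 8T"]
for name, (power, density) in designs.items():
    print(f"{name}: {power / base_power:.2f}x power, "
          f"{density / base_density:.2f}x density vs old 8T")
```

So the new 6T cell lands at roughly the same power as the old 8T cell while packing about 1.4x the bits per mm2.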
#VLSI22 Thread Part 3:

@IBM demonstrates 216Gb/s PAM8 with 288 mW power consumption on 4nm FinFET.
Apr 4
It's great that Arc supports AV1 encode. But to say it's great for streaming right away is not quite right.

No streaming service currently deals with a direct AV1 streaming upload iirc. We're still a few quarters (up to 2yrs+) away from that. Correct me if I'm wrong. 1/
2/ Netflix can deliver AV1 for your decode.
YouTube can deliver AV1 for your decode.
You can upload pre-recorded to YouTube in AV1.
You can't stream to YouTube in AV1.
You can't stream to twitch in AV1.
Again, correct me if I'm wrong, but Intel said this in briefings.
3/ Even if you record in AV1 offline to upload later, YouTube doesn't yet use dedicated AV1 hardware to process (wait for VCU2?).

So your AV1 pre-recorded video takes longer to convert on their backend. Only useful if you have upload limits, or are already hitting them.