Discover and read the best of Twitter Threads about #Cerebras

Most recent (2)

A short thread on accelerated computing. How $TSLA's #Dojo, $GOOG's #TPU, #Cerebras's #WSE-2, #Graphcore's #IPU, and others compare to $NVDA's #GPU. This is an extract of work we published a few weeks ago. 👇1/6
It is difficult to compare chips. Designing a chip is all about managing trade-offs across multiple dimensions: a chip has a limited resource budget, and the architect allocates it over those dimensions. 2/6
With this framework in mind, one sees that the alternatives to the #GPU have fundamentally different architectures: they favor the flow of data across the chip over the flow of data between memory and the chip. 3/6
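To make that trade-off concrete, here is a rough roofline-style sketch (not from the thread): attainable throughput is capped either by peak compute or by memory bandwidth times arithmetic intensity. The GPU figures are approximate public numbers for an A100-class part; the wafer-scale bandwidth and the kernel intensity are purely illustrative assumptions.

```python
# Roofline-style sketch: attainable throughput = min(peak compute,
# memory bandwidth * arithmetic intensity). Numbers are approximate
# or illustrative, not data from the thread.

def attainable_tflops(peak_tflops, mem_bw_tbs, flops_per_byte):
    """TB/s * FLOPs/byte = TFLOPS; take the lower of the two ceilings."""
    return min(peak_tflops, mem_bw_tbs * flops_per_byte)

# Hypothetical bandwidth-bound kernel: 10 FLOPs per byte moved.
intensity = 10.0

# GPU-style chip: high peak compute, operands streamed from off-chip HBM
# (~312 TFLOPS BF16 peak, ~2 TB/s memory bandwidth, A100-class figures).
gpu = attainable_tflops(peak_tflops=312.0, mem_bw_tbs=2.0, flops_per_byte=intensity)

# Wafer-scale-style chip: weights/activations kept in on-chip SRAM, so the
# effective bandwidth feeding the cores is orders of magnitude higher
# (1000 TB/s used here purely as an illustrative stand-in).
wse = attainable_tflops(peak_tflops=2500.0, mem_bw_tbs=1000.0, flops_per_byte=intensity)

print(f"GPU-style:   {gpu:7.1f} TFLOPS attainable (memory-bound at this intensity)")
print(f"Wafer-scale: {wse:7.1f} TFLOPS attainable (compute-bound at this intensity)")
```

At low arithmetic intensity the off-chip-memory term dominates the GPU-style number, which is the point the thread is making about favoring on-chip dataflow.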
Last night I tried to figure out why $TSLA designed Dojo instead of using #Cerebras. I almost lost sleep over it and I don't have an answer yet. Any input welcome. Thread below for where I stand. 👇
1 - A Dojo fan-out wafer and a Cerebras wafer-scale chip have a similar transistor count (~2.5 trillion). Dojo claims 9 PFLOPS BF16, Cerebras 2.5 PFLOPS half precision, which is about equivalent, I think (a 3x+ ratio between the two is fair, the same as the FP32/FP64 ratio on Ampere).
2 - Dojo plans to scale out to ~120 wafers to get to exascale. Cerebras announced a 192-wafer configuration yesterday at Hot Chips. The two configurations are similar and deliver exascale compute. Both architectures should be able to go beyond that if needed.
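A quick back-of-the-envelope check of the figures quoted above, taking the thread's per-wafer numbers at face value; actual cluster peaks depend on precision mode, sparsity, and utilization, so treat the aggregates as rough:

```python
# Back-of-the-envelope comparison using the per-wafer figures quoted in the
# thread; these are the author's approximate claims, not vendor-verified data.

dojo_pflops_per_wafer = 9.0   # claimed BF16 PFLOPS per Dojo fan-out wafer
cs2_pflops_per_wafer = 2.5    # claimed half-precision PFLOPS per Cerebras wafer

# Per-wafer throughput ratio (the "3x+" mentioned in point 1).
ratio = dojo_pflops_per_wafer / cs2_pflops_per_wafer
print(f"Dojo / Cerebras per-wafer throughput ratio: {ratio:.1f}x")

# Scale-out aggregates from the wafer counts in point 2.
# Note: dense aggregates only; vendor exascale claims may count
# effective (e.g. sparse) FLOPS that this estimate does not capture.
dojo_wafers, cs2_wafers = 120, 192
print(f"Dojo  ~{dojo_wafers} wafers: {dojo_wafers * dojo_pflops_per_wafer / 1000:.2f} EFLOPS dense BF16")
print(f"CS-2  {cs2_wafers} wafers:  {cs2_wafers * cs2_pflops_per_wafer / 1000:.2f} EFLOPS dense half precision")
```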
