James Wang @jwangARK, 8 tweets
1/ Neural network training complexity has grown 300,000x since 2012. Yet Moore’s Law has only provided 12x more performance. So the question is, where did the extra performance come from? blog.openai.com/ai-and-compute/ @OpenAI
2/ There are three factors that drive system performance: transistor scaling, chip architecture, and chip count. Let’s see how these have changed since 2012.
3/ Moore’s Law ’75 dictates that transistor count per chip doubles every 24 months (not 18!). AlexNet was powered by Fermi GPUs with 3B transistors; Nvidia’s Volta (2017) has 21B. Transistor scaling gave just a 7x improvement.
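A quick back-of-envelope check of that transistor-scaling factor, sketched in Python using only the counts cited in the tweet:

fermi_transistors = 3e9   # Fermi-class GPU powering AlexNet (2012-era), ~3B transistors
volta_transistors = 21e9  # Nvidia Volta GV100 (2017), ~21B transistors
transistor_factor = volta_transistors / fermi_transistors
print(transistor_factor)  # 7.0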
4/ Today’s GPUs have dedicated tensor cores, making them far more efficient at DL than classic GPUs. Volta’s 125 TFLOPS / Fermi's 1.5 TFLOPS = 83x speedup. Dividing that by 7 (to take out Moore’s Law) leaves 12x as the performance improvement from processor architecture.
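The same kind of sketch for the architecture factor, treating Fermi’s 1.5 TFLOPS and Volta’s 125 TFLOPS (tensor-core throughput) as the relevant DL numbers:

fermi_tflops = 1.5    # rough FP32 throughput of a Fermi-class GPU
volta_tflops = 125.0  # Volta V100 tensor-core (mixed-precision) throughput
raw_speedup = volta_tflops / fermi_tflops   # ~83x
architecture_factor = raw_speedup / 7       # divide out the 7x from transistor scaling
print(round(raw_speedup), round(architecture_factor))  # 83 12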
5/ Lastly, the GPU count has grown massively. AlexNet used two GPUs; large-scale training today uses up to 256. That’s a 128x improvement.
6/ Combining the three factors: 7x (more transistors) * 12x (improved chip architecture) * 128x (more chips) = ~10,000x speedup over five years.
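Putting the three factors together, the same multiplication as a sketch:

transistor_factor = 7        # more transistors per chip (Fermi -> Volta)
architecture_factor = 12     # tensor cores vs. classic GPU pipelines
chip_count_factor = 256 / 2  # 2 GPUs for AlexNet -> up to 256 GPUs today, 128x
total_speedup = transistor_factor * architecture_factor * chip_count_factor
print(total_speedup)  # 10752.0, i.e. roughly 10,000x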
7/ This is within one order of magnitude of the 300,000x number from OpenAI, which is close enough, since different neural nets from the same year can easily differ by 10x in compute load.
8/ Conclusion: it’s not Moore’s Law sustaining DL compute growth. First, it’s simply more chips ($$); second, better chip architecture; and last, transistor scaling.