Semiconductor #VentureCapital easily bought the idea of ASICs replacing GPUs for AI, based on the argument that GPUs were primarily built for graphics & would not be efficient for AI in the long run.

Let's bust that myth. (1/n)
The hard thing about hardware is actually the software.

2016 saw a Cambrian explosion of AI chip startups raising their 1st VC rounds. 5 years later, most of these startups have launched their 1st-gen chip but are still struggling to build a robust SW stack that supports diverse AI workloads. (2/n)
NVIDIA introduced CUDA in 2006 to leverage GPUs for general-purpose computation.

Since then, applications in astronomy, biology, chemistry, physics, data mining, manufacturing, finance & other computationally intensive fields have used CUDA to accelerate computation. (3/n)
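To make that concrete, here is a minimal sketch of what a CUDA program looks like; the kernel, sizes & values are illustrative examples, not something from this thread:

```cuda
// vector_add.cu -- illustrative only: a minimal CUDA kernel showing how
// general computation (not graphics) is expressed on a GPU.
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;             // 1M elements (arbitrary size)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);      // unified memory, for brevity
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, c, n);   // launch across the whole array
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);       // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The point of the example is the programming model: the same kernel-launch abstraction serves fluid dynamics, finance & DL alike, which is why the SW stack compounds across domains.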
GPUs have been positioned for compute-intensive workloads for a long time, well before the advent of DL (often loosely called AI), which is just another compute-intensive workload.

The GPU SW stack has kept improving, release after release, over the last decade & a half. (4/n)
Besides a more-than-decade-long lead on SW, each GPU generation has also pushed the boundary of peak compute capability - a trend that will continue. (5/n)
It is premature to say ASICs will improve the efficiency of AI compute, because no one really knows what future AI workloads will look like.

Designing highly specialized ASICs for AI efficiency right now is like shooting arrows in the dark. (6/n)
DL workloads are continuously changing.

Even on an aggressive schedule, it takes ~2 years for a chip to be spec'ed, designed, fabricated & brought up.

DL workloads get outdated every 18 months. (7/n)
In this fast-changing landscape, some startups already find themselves stuck with the wrong design choices, or with designs optimized for outdated workloads. (8/n)
E.g.: a plethora of startups focused only on accelerating convolutions or matrix multiplications, only to later find out that DL networks had evolved to include a fair share of memory-bound operations. (9/n)
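A rough sketch of why memory-bound ops hurt matmul-only accelerators; the FLOP rate & bandwidth below are assumed round numbers, not measurements of any real chip:

```cuda
// memory_bound.cu -- illustrative sketch (numbers are assumptions) of why
// element-wise DL ops are memory-bound on any accelerator.
#include <cstdio>
#include <cuda_runtime.h>

// Element-wise add: 1 FLOP per element but 12 bytes of DRAM traffic
// (read a, read b, write c) -> arithmetic intensity ~ 0.08 FLOP/byte.
__global__ void elementwise_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    // Hypothetical accelerator: 100 TFLOP/s of math vs 1 TB/s of DRAM bandwidth.
    const double peak_flops = 100e12, peak_bw = 1e12;
    const double balance = peak_flops / peak_bw;      // ~100 FLOP/byte needed to stay compute-bound
    const double elementwise_intensity = 1.0 / 12.0;  // far below the balance point

    printf("machine balance  : %.0f FLOP/byte\n", balance);
    printf("element-wise add : %.3f FLOP/byte (memory-bound)\n", elementwise_intensity);
    printf("=> a chip that only accelerates matmuls/convs stalls on ops like this\n");
    return 0;
}
```

Activations, normalizations & optimizer updates all look like the kernel above: they reuse almost no data, so memory bandwidth, not a dedicated matmul engine, sets their speed.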
E.g.: a plethora of startups ruled out external memory in the name of power efficiency & cost, only to later find out that DL networks had grown to the order of trillions of parameters and would no longer fit within their on-chip memories. (10/n)
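A back-of-envelope check of that mismatch, using assumed sizes (FP16 weights, ~100 MB of on-chip SRAM):

```cuda
// params_vs_sram.cu -- back-of-envelope arithmetic with illustrative sizes,
// showing why trillion-parameter models cannot live in on-chip SRAM alone.
#include <cstdio>

int main() {
    const double params          = 1e12;    // ~1 trillion parameters (assumed)
    const double bytes_per_param = 2.0;     // FP16 weights
    const double model_bytes     = params * bytes_per_param;   // ~2 TB of weights

    const double onchip_sram     = 100e6;   // ~100 MB of on-chip SRAM (generous assumption)

    printf("model weights : %.1f TB\n", model_bytes / 1e12);
    printf("on-chip SRAM  : %.0f MB\n", onchip_sram / 1e6);
    printf("shortfall     : ~%.0fx -> external DRAM/HBM is unavoidable\n",
           model_bytes / onchip_sram);
    return 0;
}
```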
The fast-changing AI landscape needs a general-purpose compute platform to run diverse workloads, at least until the workloads mature.

When they do, ASICs might deliver higher efficiency for highly specialized Edge scenarios. (n/n)
