Computer Architect at NVIDIA | Board Director at MLCommons | MLPerf Training WG Chair | Building systems to speed up compute (AI/ML/DL) | Opinions are my own
Aug 24, 2021 • 11 tweets • 2 min read
Semiconductor #VentureCapital easily bought the idea of ASICs replacing GPUs for AI, based on the argument that GPUs were primarily built for graphics & would not be efficient for AI in the long run.
Let's bust that myth (1/n)
The hard thing about hardware is actually software.
2016 saw a Cambrian explosion of AI chip startups raising their 1st VC rounds. 5 years later, most of these startups have launched their 1st-gen chips but are still struggling to build a robust SW stack that supports diverse AI workloads (2/n)