While accelerating key workloads is desirable, it is general-purpose compute horsepower that provides the flexibility needed to program solutions for the world's next challenges. (2/n)
Good software abstractions over foundational building blocks let engineers iterate faster across different sophisticated algorithms. (3/n)
Today, heterogeneous compute requirements get mapped onto different compute units. (4/n)
Deep learning at the edge is NOT limited to INT8 or lower-precision operations.
Some layers still need floating-point precision. (5/n)
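A minimal NumPy sketch of why (hypothetical weights and a simple symmetric per-tensor scheme, not any specific framework's quantizer): INT8 rounding introduces a small but nonzero error, and layers that are sensitive to that error stay in floating point.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: int8 weights plus one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in layer weights

q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"max quantization error: {err:.4f}")
# Small but nonzero: layers sensitive to this error are kept in float.
```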
Nonlinear optimization, which is at the core of many #Robotics algorithms, still gets deployed on the CPU.
There is an opportunity to offload these fairly compute-intensive steps onto accelerators. (n/n)
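To make that concrete, here is a toy Gauss-Newton solver in NumPy (a hypothetical curve-fitting problem, not any particular robotics stack): the per-iteration work is Jacobian products and a dense solve, exactly the linear algebra an accelerator handles well.

```python
import numpy as np

# Toy nonlinear least squares: fit y = a * exp(b * x) with Gauss-Newton.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.normal(size=x.size)

theta = np.array([1.0, 0.0])  # initial guess for (a, b)
for _ in range(20):
    a, b = theta
    r = a * np.exp(b * x) - y                                     # residuals
    J = np.stack([np.exp(b * x), a * x * np.exp(b * x)], axis=1)  # Jacobian
    theta -= np.linalg.solve(J.T @ J, J.T @ r)                    # GN step

print(theta)  # converges close to (2.0, -1.5)
```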
Semiconductor #VentureCapital readily bought the idea of ASICs replacing GPUs for AI, based on the argument that GPUs were primarily built for graphics & would not be efficient for AI in the long run.
Let's bust that myth. (1/n)
The hard thing about hardware is actually software.
2016 saw a Cambrian explosion of AI chip startups raising their 1st VC rounds. 5 years later, most have launched their 1st-gen chip but are still struggling to build a robust SW stack that supports diverse AI workloads. (2/n)
NVIDIA introduced CUDA in 2006 to leverage GPUs for computation.
Since then, applications in astronomy, biology, chemistry, physics, data mining, manufacturing, finance & other computationally intensive fields have used CUDA to accelerate their workloads. (3/n)
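For a flavor of what that looks like, here is a minimal vector-add kernel written from Python via Numba's CUDA support, a sketch assuming Numba is installed and a CUDA-capable GPU is present, rather than the raw CUDA C++ API.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)      # global thread index across the whole launch grid
    if i < out.size:      # guard against threads past the end of the array
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.arange(n, dtype=np.float32)
b = 2.0 * a
out = np.zeros_like(a)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](a, b, out)  # Numba copies the arrays to the GPU

assert np.allclose(out, a + b)
```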