Fun keynote from @SkydioHQ at #hotchips last night.

Interesting lessons for architects designing compute SoCs for edge devices in #autonomous systems & #Robotics

The last two were the most revealing realizations for me. (1/n)
Autonomous algorithms are still evolving.

While acceleration of key workloads is desirable, it is general-purpose compute horsepower that will provide the flexibility needed to program solutions for the next set of challenges. (2/n)
Good software abstractions over the foundational building blocks let engineers iterate faster on different, more sophisticated algorithms.

(3/n)
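
To make the abstraction point concrete, here is a minimal Python sketch, with hypothetical names, of what such a building-block interface might look like. This is not Skydio's actual stack, just an illustration of why a stable interface lets the algorithm behind it be swapped freely.

    from abc import ABC, abstractmethod

    class FeatureTracker(ABC):
        """Stable interface for one foundational building block: frame-to-frame feature tracking."""

        @abstractmethod
        def track(self, prev_frame, next_frame):
            """Return a list of (x0, y0, x1, y1) correspondences between the two frames."""

    class KltTracker(FeatureTracker):
        def track(self, prev_frame, next_frame):
            ...  # classic sparse optical flow, typically run on CPU/DSP

    class LearnedTracker(FeatureTracker):
        def track(self, prev_frame, next_frame):
            ...  # learned matcher, typically run on the GPU or DL accelerator

    def estimate_motion(tracker: FeatureTracker, prev_frame, next_frame):
        # Nothing above this interface changes when the tracker implementation
        # is swapped, which is what lets engineers iterate on more
        # sophisticated algorithms quickly.
        return tracker.track(prev_frame, next_frame)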
Heterogeneous compute requirements currently get mapped onto different compute units. (4/n)
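
As an illustration only (the stage and unit names below are generic assumptions, not a description of any particular SoC), that mapping often looks something like this today:

    # Illustrative placement of pipeline stages onto the compute units of a
    # heterogeneous edge SoC; names are assumptions for the sketch, not a spec.
    PIPELINE_PLACEMENT = {
        "image signal processing": "ISP",
        "deep-learning inference": "DL accelerator / GPU",
        "feature tracking":        "GPU or DSP",
        "nonlinear optimization":  "CPU",   # see the last point in this thread
        "planning & control":      "CPU",
    }

    for stage, unit in PIPELINE_PLACEMENT.items():
        print(f"{stage:>24} -> {unit}")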
Deep Learning at the edge is NOT limited to INT8 or lower-precision operations.

Some layers still need floating-point precision. (5/n)
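
A hedged sketch of that point: even in an INT8-first edge deployment, a few numerically sensitive layers are commonly left in floating point. The layer names and the precision policy below are illustrative assumptions, not any specific framework's API.

    # Toy per-layer precision assignment for an INT8-first edge deployment.
    SENSITIVE_LAYERS = {"first_conv", "detection_head", "softmax"}  # assumed, e.g. from calibration

    def choose_precision(layer_name: str) -> str:
        # INT8 for the bulk of the network, FP16 where quantization error is
        # known to hurt accuracy.
        return "fp16" if layer_name in SENSITIVE_LAYERS else "int8"

    for name in ["first_conv", "backbone_block_1", "backbone_block_2",
                 "neck", "detection_head", "softmax"]:
        print(f"{name:>18}: {choose_precision(name)}")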
Nonlinear optimizations, which are at the core of many #Robotics algorithms, still get deployed on the CPU.

There is an opportunity to offload these fairly compute-intensive steps onto accelerators. (n/n)
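
To show the shape of the workload in question, here is a tiny Gauss-Newton solve in plain NumPy: estimating a 2D position from range measurements to known landmarks. The toy problem is my own illustration; real stacks solve much larger problems of the same shape (VIO, bundle adjustment, pose-graph optimization), and today they usually do it on the CPU.

    import numpy as np

    landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])   # known landmark positions
    true_pos = np.array([1.0, 1.0])
    ranges = np.linalg.norm(landmarks - true_pos, axis=1)        # noiseless range measurements

    x = np.array([3.0, 2.5])                                     # initial guess
    for _ in range(10):
        diff = x - landmarks                                     # shape (3, 2)
        pred = np.linalg.norm(diff, axis=1)                      # predicted ranges
        r = pred - ranges                                        # residuals
        J = diff / pred[:, None]                                 # Jacobian of residuals, (3, 2)
        # The normal equations below are the dense linear-algebra core that
        # makes these solvers compute-intensive and a natural candidate for
        # acceleration.
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx

    print("estimated position:", x)                              # converges to ~[1.0, 1.0]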
