Shail Dave
Dec 31, 2021 · 12 tweets · 17 min read
#Highlights2021 for me: our #survey on efficient processing of #sparse and compressed tensors of #ML/#DNN models on #hardware accelerators, published in @ProceedingsIEEE.
Paper: dx.doi.org/10.1109/JPROC.…
arXiv: arxiv.org/abs/2007.00864
RT/sharing appreciated. 🧵
Context: tensors of ML/DNN models are compressed by leveraging #sparsity, #quantization, and shape reduction. We summarize several such sources of sparsity & compression (§3). Pruning induces sparsity with a chosen structure, while sparsity arising inherently from various applications or sources is unstructured.
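To make the structured vs. unstructured distinction concrete, here is a minimal NumPy sketch (a generic magnitude-pruning illustration of mine, not code from the paper): unstructured pruning zeroes the smallest-magnitude weights wherever they fall, while coarse-grain structured pruning zeroes entire blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

# Unstructured pruning: zero the ~50% smallest-magnitude weights,
# wherever they happen to fall in the matrix.
thresh = np.median(np.abs(W))
W_unstructured = np.where(np.abs(W) >= thresh, W, 0.0)

# Coarse-grain structured pruning: score each 4x4 block by its
# Frobenius norm and zero the weakest half of the blocks.
B = 4
blocks = W.reshape(8 // B, B, 8 // B, B)       # (row-blk, r, col-blk, c)
scores = np.linalg.norm(blocks, axis=(1, 3))   # one score per block
keep = scores >= np.median(scores)
W_structured = (blocks * keep[:, None, :, None]).reshape(8, 8)

# Similar density, very different patterns: the block-sparse version
# needs far less index metadata and simpler hardware to skip zeros.
print((W_unstructured == 0).mean(), (W_structured == 0).mean())
```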
Likewise, leveraging value similarity or approximate operations can make processing irregular, and size-reduction techniques leave tensors asymmetric-shaped. Hence, special mechanisms may be required for efficient processing of sparse and irregular computations.
Accelerators leverage #sparsity differently, improving some combination of #memory footprint, #energy efficiency, and #performance. The underlying mechanisms determine an #accelerator's ability to exploit static or dynamic sparsity, of one or multiple #tensors, during #inference or #learning (§4).
Efficient processing of sparsity needs HW/SW mechanisms to store, extract, communicate, compute on, and load-balance only the nonzeros. For each mechanism, the efficacy of different solutions varies across sparsity levels and patterns. We analyze them in §5-11, along with the overall speedups for recent DNNs (§4).
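As one concrete instance of "store and compute on only the nonzeros," here is a minimal sketch of sparse matrix-vector multiply over a CSR-style compressed format; this is the generic textbook scheme for illustration, not any particular accelerator's mechanism from the survey.

```python
import numpy as np

def to_csr(W):
    """Compress W: keep only nonzero values plus index metadata."""
    vals, cols, rowptr = [], [], [0]
    for row in W:
        nz = np.flatnonzero(row)
        vals.extend(row[nz])
        cols.extend(nz)
        rowptr.append(len(vals))
    return np.array(vals), np.array(cols), np.array(rowptr)

def csr_matvec(vals, cols, rowptr, x):
    """y = W @ x, performing only effectual (nonzero) multiplications."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        lo, hi = rowptr[i], rowptr[i + 1]
        y[i] = vals[lo:hi] @ x[cols[lo:hi]]   # gather, then multiply
    return y

W = np.array([[0., 2., 0., 0.],
              [1., 0., 0., 3.],
              [0., 0., 0., 0.],
              [4., 0., 5., 0.]])
x = np.arange(4.0)
vals, cols, rowptr = to_csr(W)
assert np.allclose(csr_matvec(vals, cols, rowptr, x), W @ x)
```

The `cols`/`rowptr` arrays are exactly the kind of index metadata that the storage, extraction, and communication mechanisms above must manage in hardware.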
The #survey discusses such mechanisms spanning #circuits, #computerarchitecture, mapping, and #DL model pruning; #accelerator-aware #DNN model pruning; techniques for data extraction & #loadbalancing of effectual computations; #sparsity-aware dataflows; and #compiler support.
Structured #sparsity, especially coarse-grain, can lead to simpler #hardware mechanisms and low #encoding overheads. We also analyze how various sparsity levels and tensor shapes of #DNN operators impact data reuse and execution metrics.
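A rough back-of-the-envelope (my numbers, for intuition only) on why coarse-grain structure lowers encoding overhead: a bitmap encoding spends one metadata bit per element regardless of the sparsity pattern, while block sparsity with block size B needs only about one bit per B elements.

```python
# Assumed, illustrative numbers: a tensor of N elements encoded with
# (a) a per-element bitmap vs. (b) a per-block bitmap of block size B.
N, B = 1_000_000, 64
bitmap_bits = N            # 1 bit marks each element as zero/nonzero
block_bits = N // B        # 1 bit marks a whole block as zero/nonzero
print(block_bits / bitmap_bits)   # ~1.6% of the metadata -> low overhead
```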
#Accelerators employ #approximatecomputing by leveraging the similarity of temporal and spatial data in #computervision and #NLP applications. #Reconfigurable mechanisms can enable processing a wide range of sparsity levels, precisions, and tensor shapes. #FPGA
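For intuition on leveraging temporal similarity, here is a hedged sketch of generic differential computation (my illustration; the accelerators discussed in the survey implement far more elaborate variants): reuse the previous output and recompute only for inputs that changed between consecutive frames.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 16))
x_prev = rng.standard_normal(16)
# Consecutive video frames are similar: only a few inputs change.
x_curr = x_prev + np.where(rng.random(16) < 0.2, 0.5, 0.0)

y_prev = W @ x_prev                  # already computed for the last frame
delta = x_curr - x_prev
changed = np.flatnonzero(np.abs(delta) > 1e-6)

# Recompute only the columns whose inputs changed, patch the old result.
y_curr = y_prev + W[:, changed] @ delta[changed]

print(f"recomputed {len(changed)} of 16 input channels")
assert np.allclose(y_curr, W @ x_curr)
```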
Trends & directions: jointly exploring compression techniques, #hardware-aware compression & #NAS/#AutoML, accelerator/model #codesign, coarse-grain structured sparsity, automating HW design modeling & implementation for compact models, #compiler support, and accelerating training.
The survey also describes common techniques in accelerator design, such as balancing computation with on-/off-chip communication and approximate computing, along with advances such as reconfigurable NoCs and PEs for asymmetric or variable-precision processing. Please share it with anyone who could benefit.
Looking forward to a safe and productive 2022 for everyone. Best wishes for a happy 2022!
@SCAI_ASU @CompArchSA @PhDVoice @OpenAcademics @Underfox3 @ogawa_tter @jonmasters @ASUEngineering @AcademicChatter #PhDgenie @PhDForum @hapyresearchers
Requesting your help in amplifying the visibility of this extensive literature review of recent technology. Thanks! #ML #hardware #tinyML
