Huiqiang Jiang
Jul 7, 2024 · 7 tweets · 3 min read
Thanks @_akhaliq for sponsoring. You can now use MInference online in the HF demo with ZeroGPU.
Now you can process a 1M-token context up to 10x faster on a single A100 using long-context LLMs like LLaMA-3-1M and GLM-4-1M, with comparable or even better accuracy. Try MInference 1.0 right now! huggingface.co/spaces/microso…
1) The motivation behind MInference: long-context inference is highly resource-intensive, yet attention is inherently sparse and dynamic. We use dynamic sparse attention to accelerate the pre-filling stage of 1M-token inference by up to 10x. For more details, visit the project page: aka.ms/MInference
2) We categorize attention heads into three sparse patterns: A-shape, Vertical-Slash, and Block-Sparse. An offline search determines the optimal pattern for each head using a single example. At inference time, we approximate the dynamic sparse indices online and use optimized GPU kernels for the dynamic sparse computation.
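As a rough illustration of the Vertical-Slash idea (this is a toy sketch, not the paper's kernel; the function and parameter names here are made up), one can estimate important key columns and diagonals from the attention of only the last few queries, then build a sparse mask from the top-scoring ones:

```python
import numpy as np

def vertical_slash_mask(attn_last_q, n, top_v=4, top_s=4):
    """Toy approximation of vertical-slash index estimation.

    attn_last_q: (q, n) attention scores of the last q queries over n keys.
    Returns an (n, n) boolean mask keeping the top_v key columns
    ("vertical" lines) and top_s query-key offsets ("slash" diagonals),
    intersected with the causal lower triangle.
    """
    q, _ = attn_last_q.shape
    # Vertical importance: total attention each key column receives.
    col_score = attn_last_q.sum(axis=0)
    top_cols = np.argsort(col_score)[-top_v:]

    # Slash importance: total attention at each query-key offset.
    # Query row i of attn_last_q has global index n - q + i.
    offsets = np.arange(n - q, n)[:, None] - np.arange(n)[None, :]
    slash_score = np.zeros(n)
    for off in range(n):
        slash_score[off] = attn_last_q[offsets == off].sum()
    top_offsets = np.argsort(slash_score)[-top_s:]

    mask = np.zeros((n, n), dtype=bool)
    mask[:, top_cols] = True                      # vertical lines
    rows = np.arange(n)
    for off in top_offsets:
        idx = rows - off
        valid = idx >= 0
        mask[rows[valid], idx[valid]] = True      # slash diagonals
    # Keep only causal (lower-triangular) entries.
    return mask & np.tril(np.ones((n, n), dtype=bool))
```

The real kernels never materialize an n×n mask, of course; they gather only the selected indices, which is where the speedup comes from.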
3) We evaluated MInference on a wide range of SoTA long-context benchmarks, from 128K to 1M tokens, including RULER, InfiniteBench, the Needle test, and PG-19, using SoTA open-source long-context LLMs including LLaMA-3-1M, GLM-4-1M, Yi-200K, Phi-3-128K, and Qwen2-128K, while maintaining the same performance.
4) MInference uses the dynamic sparsity compiler PIT, Triton, and FlashAttention to effectively speed up long-context LLM inference with minimal overhead, achieving speedups of 1.8x, 4.1x, 6.8x, and 10x at 100K, 300K, 500K, and 1M tokens, respectively.
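Why the speedup grows with context length: full attention cost is quadratic in the number of tokens, while a sparse budget of roughly k kept keys per query is near-linear. A back-of-envelope sketch (the budget k here is illustrative, not the paper's configuration):

```python
# Full attention computes ~n*n query-key scores; with a fixed per-query
# key budget k, sparse attention computes ~n*min(n, k), so the
# theoretical saving grows linearly with context length n.
def ratio(n, k=4096):
    """Ratio of full-attention to sparse-attention score computations."""
    return (n * n) / (n * min(n, k))

for n in (100_000, 300_000, 500_000, 1_000_000):
    print(f"{n:>9} tokens: {ratio(n):6.1f}x fewer score computations")
```

Measured wall-clock speedups are much smaller than these ratios because of index-estimation overhead, kernel efficiency, and the non-attention parts of the model, but the same trend holds: the longer the context, the bigger the win.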
5) The discovered sparse patterns are not just an artifact of LLMs using RoPE: they also appear in auto-encoding models like BERT, encoder-decoder models like T5, and VLMs. We believe these sparse patterns act as information transmission channels, learned through world knowledge.
6) Now you can use MInference to accelerate your own long-context LLM inference!

Code: github.com/microsoft/MInf…
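A minimal usage sketch, following the patching pattern in the repo's README at the time of writing (the model name is one example; the exact API may have changed, so check the repo before relying on this):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from minference import MInference

model_name = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Patch the model's attention with MInference's dynamic sparse kernels,
# then use the model as usual (generate, pipeline, etc.).
minference_patch = MInference("minference", model_name)
model = minference_patch(model)
```

This requires a GPU with enough memory for the model and the long context; the patch replaces the attention forward pass, so the rest of your generation code stays unchanged.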
