Clustering NVIDIA DGX Spark + M3 Ultra Mac Studio for 4x faster LLM inference.

DGX Spark: 128GB @ 273GB/s, 100 TFLOPS (fp16), $3,999
M3 Ultra: 256GB @ 819GB/s, 26 TFLOPS (fp16), $5,599

The DGX Spark has roughly a third of the M3 Ultra's memory bandwidth, but about 4x its FLOPS.

By running compute-bound prefill on the DGX Spark, running memory-bound decode on the M3 Ultra, and streaming the KV cache between them over 10GbE, we get the best of both machines, with massive speedups.
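
To make the split concrete, here is a minimal sketch of the disaggregation, not EXO's actual implementation: `run_prefill` and `run_decode_loop` are hypothetical stand-ins for real model calls, and a plain TCP socket stands in for the 10GbE link.

```python
# Sketch of disaggregated prefill/decode. `run_prefill` and
# `run_decode_loop` are hypothetical placeholders, not real model code.
import pickle
import socket
import struct

import numpy as np


def run_prefill(prompt_tokens: list[int]) -> np.ndarray:
    """Placeholder: a real model runs one forward pass over the whole
    prompt (compute-bound) and returns the per-layer KV cache."""
    n_layers, n_kv_heads, head_dim = 32, 8, 128  # assumed shapes
    return np.zeros(
        (n_layers, 2, n_kv_heads, len(prompt_tokens), head_dim),
        dtype=np.float16,
    )


def recv_exactly(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-transfer")
        buf.extend(chunk)
    return bytes(buf)


def stream_kv(sock: socket.socket, kv: np.ndarray) -> None:
    """Length-prefixed send of the serialized KV cache over the link."""
    payload = pickle.dumps(kv, protocol=pickle.HIGHEST_PROTOCOL)
    sock.sendall(struct.pack("!Q", len(payload)) + payload)


def recv_kv(sock: socket.socket) -> np.ndarray:
    """Receive a length-prefixed KV cache on the decode machine."""
    (size,) = struct.unpack("!Q", recv_exactly(sock, 8))
    return pickle.loads(recv_exactly(sock, size))


def run_decode_loop(kv: np.ndarray, max_new_tokens: int) -> list[int]:
    """Placeholder: a real model re-reads the KV cache for every token
    (memory-bound) and appends one new token per step."""
    return [0 for _ in range(max_new_tokens)]  # dummy tokens
```

The point of the split: `run_prefill` runs on the Spark, the decode loop runs on the Mac Studio, and only the KV cache crosses the wire, once.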

Short explanation in this thread & link to the full blog post below.

LLM inference consists of two stages: prefill and decode.

Prefill processes the prompt, building a KV cache. It's compute-bound, so it gets faster with more FLOPS.

Decode reads the KV cache and generates tokens one by one. It's memory-bound, so it gets faster with more memory bandwidth.
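
Plugging the spec-sheet numbers above into a simple roofline estimate shows why routing each stage to the machine it suits pays off. The model size and prompt length below are illustrative assumptions, not figures from the post.

```python
# Back-of-envelope roofline estimate using the spec numbers quoted above.
PARAMS = 8e9          # assumed 8B-parameter model, fp16 (2 bytes/param)
PROMPT_TOKENS = 4096  # assumed prompt length


def prefill_seconds(tflops: float) -> float:
    # Prefill is compute-bound: ~2 * params FLOPs per prompt token.
    return (2 * PARAMS * PROMPT_TOKENS) / (tflops * 1e12)


def decode_seconds_per_token(gb_per_s: float) -> float:
    # Decode is memory-bound: every fp16 weight is read once per token.
    return (2 * PARAMS) / (gb_per_s * 1e9)


# DGX Spark: 100 TFLOPS, 273 GB/s.  M3 Ultra: 26 TFLOPS, 819 GB/s.
print(f"prefill       Spark: {prefill_seconds(100):.2f}s   "
      f"M3 Ultra: {prefill_seconds(26):.2f}s")
print(f"decode/token  Spark: {decode_seconds_per_token(273)*1e3:.1f}ms  "
      f"M3 Ultra: {decode_seconds_per_token(819)*1e3:.1f}ms")
```

On these assumptions the Spark finishes prefill roughly 4x faster (0.66s vs 2.52s) while the M3 Ultra decodes roughly 3x faster (19.5ms vs 58.6ms per token), so the disaggregated pipeline wins on both ends.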