The DGX Spark has roughly a third of the M3 Ultra's memory bandwidth but about 4x its FLOPS.
By running the compute-bound prefill on the DGX Spark, the memory-bound decode on the M3 Ultra, and streaming the KV cache between them over 10GbE, we get the best of both machines and a large end-to-end speedup.
Short explanation in this thread & link to full blog post below.
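A quick back-of-envelope check that the 10GbE link can carry the KV cache (a rough sketch; the model dimensions and prompt length below are assumptions for illustration, not numbers from the post):

```python
# Rough estimate: how big is the KV cache for a long prompt, and how long
# does 10GbE take to move it? Dimensions are assumed (Llama-3.1-8B-like).

def kv_cache_bytes(tokens, n_layers=32, n_kv_heads=8, head_dim=128, dtype_bytes=2):
    # 2x for keys and values, per layer, per KV head, fp16 by default
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * tokens

prompt_tokens = 8192
cache = kv_cache_bytes(prompt_tokens)   # ~1 GiB for this configuration
link_bytes_per_s = 10e9 / 8             # 10GbE ~= 1.25 GB/s, ignoring protocol overhead

print(f"KV cache: {cache / 2**30:.2f} GiB")
print(f"Transfer over 10GbE: {cache / link_bytes_per_s:.2f} s")
```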
LLM inference has two stages: prefill and decode.
Prefill processes the prompt, building a KV cache. It’s compute-bound, so it gets faster with more FLOPS.
Decode reads the KV cache and generates tokens one by one. It’s memory-bound, so it gets faster with more memory bandwidth.
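Splitting the two stages across machines looks roughly like this (a minimal sketch, not the actual implementation; prefill()/decode_step(), the host names, and the pickle wire format are placeholders for whatever runtime each box actually uses):

```python
# Minimal sketch of disaggregated prefill/decode over a socket:
# prefill runs on one box, the KV cache is streamed to a second box,
# which then runs the decode loop locally.

import pickle
import socket
import struct

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def send_msg(sock, obj):
    data = pickle.dumps(obj)
    sock.sendall(struct.pack("!Q", len(data)) + data)

def recv_msg(sock):
    (length,) = struct.unpack("!Q", recv_exact(sock, 8))
    return pickle.loads(recv_exact(sock, length))

# Placeholder for a real forward pass over the whole prompt (compute-bound).
def prefill(prompt_tokens):
    return {"tokens_seen": len(prompt_tokens)}

# Placeholder for a real single-token decode step (memory-bound).
def decode_step(last_token, kv_cache):
    kv_cache["tokens_seen"] += 1
    return last_token + 1, kv_cache

# --- prefill box (e.g. the DGX Spark) ---
def run_prefill_server(host="0.0.0.0", port=9000):
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    prompt_tokens = recv_msg(conn)
    kv_cache = prefill(prompt_tokens)   # one pass over the prompt
    send_msg(conn, kv_cache)            # stream the cache to the decode box
    conn.close()

# --- decode box (e.g. the M3 Ultra) ---
def generate(prompt_tokens, prefill_addr=("prefill-box.local", 9000), max_new_tokens=256):
    sock = socket.create_connection(prefill_addr)
    send_msg(sock, prompt_tokens)
    kv_cache = recv_msg(sock)           # prefilled cache arrives over 10GbE
    sock.close()

    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):     # token-by-token decode, all local
        next_token, kv_cache = decode_step(tokens[-1], kv_cache)
        tokens.append(next_token)
    return tokens
```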