Compiler-First State Space Duality and Portable $O(1)$ Autoregressive Caching for Inference
AI Summary
Mamba-2 is optimised through XLA to deliver cross-platform O(1) state caching and efficient inference, with no hand-written CUDA/Triton kernels.
Key Contributions
- Shows that Mamba-2's state space duality is a natural fit for XLA's optimiser, making custom kernels unnecessary.
- Implements the full XLA inference path, including prefill and cached autoregressive decoding, with no host synchronisation.
- Runs unmodified on CPU, NVIDIA GPU, and TPU from a single JAX source.
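As a hedged illustration of the cached-decoding idea, the sketch below shows one decode step for a diagonal SSM under `jax.jit`. The function name, shapes, and signature are hypothetical, not the `mamba2-jax` API; the point is only that the state is a plain device array threaded through a compiled function, which is what makes the per-token cache O(1).

```python
import jax
import jax.numpy as jnp

# Hypothetical sketch of O(1)-state autoregressive decoding for a diagonal SSM.
# Names and shapes are illustrative, not the mamba2-jax API.
@jax.jit
def decode_step(state, a, b_t, c_t, x_t):
    """One token of cached decoding.

    state: (d,) diagonal SSM state carried across tokens (the O(1) cache)
    a:     (d,) diagonal decay; b_t, c_t: (d,) input/output projections
    x_t:   scalar input for this token
    """
    new_state = a * state + b_t * x_t   # h_t = a * h_{t-1} + b_t * x_t (elementwise)
    y_t = jnp.dot(c_t, new_state)       # y_t = c_t^T h_t
    return new_state, y_t
```

Because the state enters and leaves a jit-compiled function as a device array, XLA keeps it resident on the accelerator; in principle nothing in the generation loop forces a host round-trip.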
Methodology
Mamba-2 is optimised through XLA's fusion and tiling passes, exploiting its state space duality to realise an efficient on-device cache.
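The chunkable recurrence is the structural property that makes this work: a diagonal recurrence h_t = a_t * h_{t-1} + u_t composes associatively, so XLA can parallelise it rather than running a sequential loop. A minimal sketch (illustrative, not the repository's code) using `jax.lax.associative_scan`:

```python
import jax
import jax.numpy as jnp

def combine(left, right):
    # Composing h -> a_l*h + u_l followed by h -> a_r*h + u_r
    # gives h -> (a_l*a_r)*h + (a_r*u_l + u_r), which is associative.
    a_l, u_l = left
    a_r, u_r = right
    return a_l * a_r, a_r * u_l + u_r

@jax.jit
def parallel_linear_recurrence(a, u):
    """All h_t for h_t = a_t * h_{t-1} + u_t with h_0 = 0, in O(log T) depth."""
    _, h = jax.lax.associative_scan(combine, (a, u))
    return h
```

The same composition underlies chunked evaluation: each chunk is summarised by one (a, u) pair, chunks are combined with `combine`, and XLA's fusion passes handle the einsum-heavy within-chunk work.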
Original Abstract
State-space model releases are typically coupled to fused CUDA and Triton kernels, inheriting a hard dependency on NVIDIA hardware. We show that Mamba-2's state space duality algorithm -- diagonal state structure, chunkable recurrence, and einsum-dominated compute with static control flow -- maps cleanly onto what XLA's fusion and tiling passes actually optimise, making custom kernels optional rather than required. We implement the full inference path (prefill, cached autoregressive decoding) as shaped standard primitives under XLA, without hand-written kernels, and realise the architecture's theoretical $O(1)$ state management as a compiled on-device cache requiring no host synchronisation during generation. The implementation runs unmodified on CPU, NVIDIA GPU, and Google Cloud TPU from a single JAX source. On TPU v6e across five model scales (130M--2.7B parameters), XLA-generated code reaches approximately 140 TFLOPS on single-stream prefill (15% MFU) and up to 64% bandwidth utilisation on decode. Greedy decoding matches the PyTorch/CUDA reference token-for-token across 64 steps, with hidden-state agreement within float32 rounding tolerance. The pattern transfers to any SSM recurrence satisfying the same structural conditions, on any platform with a mature XLA backend. The implementation is publicly available at https://github.com/CosmoNaught/mamba2-jax and merged into the Bonsai JAX model library.