Moonshine v2: Ergodic Streaming Encoder ASR for Latency-Critical Speech Applications
AI Summary
Moonshine v2 introduces a low-latency, high-accuracy streaming speech recognition model suited to resource-constrained edge devices.
Key Contributions
- Proposes a streaming encoder ASR model based on sliding-window self-attention
- Achieves state-of-the-art word error rates on standard benchmarks
- Substantially reduces model size and latency while matching the accuracy of much larger models
Methodology
A sliding-window self-attention mechanism restricts each frame's attention to a bounded local context, enabling low-latency streaming recognition while preserving the local acoustic information needed for accurate transcription.
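The paper does not reproduce its implementation here, so the following is a minimal sketch of causal sliding-window self-attention (single head, no learned projections, and an assumed window size) to illustrate how each frame's attention is bounded to a fixed-size local context:

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    # Frame i may attend only to frames [max(0, i - window + 1), i]:
    # causal and bounded, so per-frame cost is O(window), not O(seq_len).
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo = max(0, i - window + 1)
        mask[i, lo:i + 1] = True
    return mask

def windowed_self_attention(x, window):
    # x: (seq_len, d) frame features; projections omitted for clarity.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    # Disallowed positions get -inf so softmax assigns them zero weight.
    scores = np.where(sliding_window_mask(len(x), window), scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x
```

Because the mask depends only on positions up to the current frame, attention outputs for already-seen frames never change as new audio arrives, which is what makes streaming inference with bounded latency possible.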
Original Abstract
Latency-critical speech applications (e.g., live transcription, voice commands, and real-time translation) demand low time-to-first-token (TTFT) and high transcription accuracy, particularly on resource-constrained edge devices. Full-attention Transformer encoders remain a strong accuracy baseline for automatic speech recognition (ASR) because every frame can directly attend to every other frame, which resolves otherwise locally ambiguous acoustics using distant lexical context. However, this global dependency incurs quadratic complexity in sequence length, inducing an inherent "encode-the-whole-utterance" latency profile. For streaming use cases, this causes TTFT to grow linearly with utterance length, as the encoder must process the entire prefix before any decoder token can be emitted. To better meet the needs of on-device, streaming ASR use cases, we introduce Moonshine v2, an ergodic streaming-encoder ASR model that employs sliding-window self-attention to achieve bounded, low-latency inference while preserving strong local context. Our models achieve state-of-the-art word error rates across standard benchmarks, attaining accuracy on par with models 6x their size while running significantly faster. These results demonstrate that carefully designed local attention is competitive with the accuracy of full attention at a fraction of the size and latency cost, opening new possibilities for interactive speech interfaces on edge devices.