ALARM: Audio-Language Alignment for Reasoning Models
AI Summary
ALARM improves audio reasoning by combining self-rephrasing with the fusion of multiple audio encoders, achieving leading results on several benchmarks.
Key Contributions
- Proposes self-rephrasing to adapt self-generated training targets to reasoning LLMs
- Fuses and compresses multiple audio encoders for stronger representations
- Constructs a large-scale multi-task audio-language dataset
Methodology
Freeze the LLM and train only an adapter; convert self-generated responses via self-rephrasing; fuse multiple audio encoders; and perform multi-task training on a large-scale dataset.
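The frozen-LLM-plus-adapter setup can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the module names, dimensions, and the simple concatenate-then-project fusion are all assumptions; the actual fusion/compression scheme and LLM are more elaborate.

```python
import torch
import torch.nn as nn

class FusionAdapter(nn.Module):
    """Hypothetical adapter: concatenates features from several audio
    encoders and compresses them to the LLM embedding width."""
    def __init__(self, encoder_dims, llm_dim):
        super().__init__()
        self.proj = nn.Linear(sum(encoder_dims), llm_dim)

    def forward(self, features):
        # features: list of (batch, time, dim_i) tensors, one per encoder
        fused = torch.cat(features, dim=-1)   # fuse along the feature axis
        return self.proj(fused)               # compress to the LLM width

# Stand-in for the frozen LLM (only its embedding width matters here).
llm = nn.Linear(512, 512)
for p in llm.parameters():
    p.requires_grad = False                   # the LLM stays frozen

adapter = FusionAdapter(encoder_dims=[256, 384], llm_dim=512)

# Only the adapter's parameters receive gradient updates.
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

# Dummy features from two audio encoders.
feats = [torch.randn(2, 10, 256), torch.randn(2, 10, 384)]
out = llm(adapter(feats))
print(out.shape)  # torch.Size([2, 10, 512])
```

Freezing the LLM keeps training cheap and preserves its textual capabilities; only the small adapter is updated against the self-rephrased targets.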
Original Abstract
Large audio language models (ALMs) extend LLMs with auditory understanding. A common approach freezes the LLM and trains only an adapter on self-generated targets. However, this fails for reasoning LLMs (RLMs) whose built-in chain-of-thought traces expose the textual surrogate input, yielding unnatural responses. We propose self-rephrasing, converting self-generated responses into audio-understanding variants compatible with RLMs while preserving distributional alignment. We further fuse and compress multiple audio encoders for stronger representations. For training, we construct a 6M-instance multi-task corpus (2.5M unique prompts) spanning 19K hours of speech, music, and sound. Our 4B-parameter ALM outperforms similarly sized models and surpasses most larger ALMs on related audio-reasoning benchmarks, while preserving textual capabilities with a low training cost. Notably, we achieve the best open-source result on the MMAU-speech and MMSU benchmarks and rank third among all the models.