Boosting ASR Robustness via Test-Time Reinforcement Learning with Audio-Text Semantic Rewards
AI Summary
Proposes ASR-TRA, a reinforcement-learning-based test-time adaptation framework that improves ASR robustness in noisy and accented conditions.
Main Contributions
- Proposes ASR-TRA, a test-time adaptation framework based on reinforcement learning
- Uses audio-text semantic alignment as the reward signal
- Employs a learnable decoder prompt and temperature-controlled stochastic decoding
Methodology
A learnable prompt steers the decoder to generate diverse transcription candidates via temperature-controlled stochastic decoding; a reward model scores each candidate's audio-text semantic alignment, and reinforcement learning uses these scores to update both the model and the prompt parameters.
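The loop above (sample candidates at a temperature, score them, apply a policy-gradient update to a learnable prompt) can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in, not the paper's implementation: four fixed "candidates" replace real transcriptions, a fixed reward vector replaces the audio-text alignment model, and REINFORCE with a mean-reward baseline replaces the paper's RL update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical, for illustration only):
base_logits = np.array([1.0, 0.5, 0.2, 0.1])   # frozen decoder scores for 4 candidates
prompt = np.zeros(4)                            # learnable prompt parameters
rewards = np.array([0.1, 0.9, 0.2, 0.0])        # stand-in audio-text alignment scores
tau = 1.5                                       # temperature > 1 => more diverse samples
lr, k, steps = 0.5, 8, 50

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adapt_step(prompt):
    # Temperature-controlled stochastic decoding: sample k candidates.
    probs = softmax((base_logits + prompt) / tau)
    samples = rng.choice(4, size=k, p=probs)
    # REINFORCE with a mean-reward baseline; for a categorical policy,
    # grad log p(a) w.r.t. the prompt is (onehot(a) - probs) / tau.
    baseline = rewards[samples].mean()
    grad = np.zeros(4)
    for a in samples:
        grad += (rewards[a] - baseline) * (np.eye(4)[a] - probs) / tau
    return prompt + lr * grad / k

p_before = softmax((base_logits + prompt) / tau)[1]
for _ in range(steps):
    prompt = adapt_step(prompt)
p_after = softmax((base_logits + prompt) / tau)[1]
# After adaptation, probability mass shifts toward the high-reward candidate.
print(p_before, p_after)
```

The baseline subtraction is what distinguishes this from naive pseudo-labeling: candidates are pushed up or down relative to the batch's average reward, so a confidently wrong candidate with a low alignment score is suppressed rather than reinforced.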
Original Abstract
Recently, Automatic Speech Recognition (ASR) systems (e.g., Whisper) have achieved remarkable accuracy improvements but remain highly sensitive to unseen real-world data (data with large distribution shifts), including noisy environments and diverse accents. To address this issue, test-time adaptation (TTA) has shown great potential for improving model adaptability at inference time without ground-truth labels; existing TTA methods often rely on pseudo-labeling or entropy minimization. However, by treating model confidence as a learning signal, these methods may reinforce high-confidence errors, leading to confirmation bias that undermines adaptation. To overcome these limitations, we present ASR-TRA, a novel Test-time Reinforcement Adaptation framework inspired by causal intervention. More precisely, our method introduces a learnable decoder prompt and utilizes temperature-controlled stochastic decoding to generate diverse transcription candidates. These are scored by a reward model that measures audio-text semantic alignment, and the resulting feedback is used to update both model and prompt parameters via reinforcement learning. Comprehensive experiments on LibriSpeech with synthetic noise and the L2-ARCTIC accented-English dataset demonstrate that our method achieves higher accuracy while maintaining lower latency than existing TTA baselines. Ablation studies further confirm the effectiveness of combining audio- and language-based rewards, highlighting our method's enhanced stability and interpretability. Overall, our approach provides a practical and robust solution for deploying ASR systems in challenging real-world conditions.