AI Agents relevance: 8/10

Back to Basics: Revisiting ASR in the Age of Voice Agents

Geeyang Tay, Wentao Ma, Jaewon Lee, Yuzhi Tang, Daniel Lee, Weisu Yin, Dongming Shen, Silin Meng, Yi Zhu, Mu Li, Alex Smola
arXiv: 2603.25727v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

This paper introduces WildASR, a multilingual diagnostic benchmark that exposes robustness problems of existing ASR systems in real-world conditions, and provides analytical tools for practitioners.

Key Contributions

  • Introduces WildASR, a multilingual diagnostic benchmark for evaluating ASR robustness
  • Reveals performance degradation of existing ASR systems in real-world conditions
  • Provides analytical tools that help developers improve ASR reliability

Methodology

Build a multilingual dataset that factorizes robustness along environmental degradation, demographic shift, and linguistic diversity; evaluate existing ASR systems on it; and analyze the resulting performance differences.
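The factor-isolated evaluation described above reduces, at its core, to computing word error rate (WER) per condition and comparing against a clean baseline. As an illustration only (not the paper's released tooling), a minimal WER implementation and per-condition aggregation might look like this; the condition labels are hypothetical:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def per_condition_wer(samples):
    """Aggregate WER over (condition, reference, hypothesis) triples."""
    totals = {}
    for condition, ref, hyp in samples:
        errs, n = totals.get(condition, (0.0, 0))
        totals[condition] = (errs + word_error_rate(ref, hyp) * len(ref.split()),
                             n + len(ref.split()))
    return {c: errs / n for c, (errs, n) in totals.items()}
```

Comparing the per-condition numbers against the clean condition isolates how much each factor (noise, accent, language) degrades a given model.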

Original Abstract

Automatic speech recognition (ASR) systems have achieved near-human accuracy on curated benchmarks, yet still fail in real-world voice agents under conditions that current evaluations do not systematically cover. Without diagnostic tools that isolate specific failure factors, practitioners cannot anticipate which conditions, in which languages, will cause what degree of degradation. We introduce WildASR, a multilingual (four-language) diagnostic benchmark sourced entirely from real human speech that factorizes ASR robustness along three axes: environmental degradation, demographic shift, and linguistic diversity. Evaluating seven widely used ASR systems, we find severe and uneven performance degradation, and model robustness does not transfer across languages or conditions. Critically, models often hallucinate plausible but unspoken content under partial or degraded inputs, creating concrete safety risks for downstream agent behavior. Our results demonstrate that targeted, factor-isolated evaluation is essential for understanding and improving ASR reliability in production systems. Besides the benchmark itself, we also present three analytical tools that practitioners can use to guide deployment decisions.
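Note that WildASR is sourced from real human speech rather than synthetic corruption. Still, for intuition about the environmental-degradation axis, here is a generic (and entirely illustrative, not from the paper) sketch of mixing noise into a speech signal at a target signal-to-noise ratio:

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db, then mix.

    Both inputs are equal-length lists of float samples.
    """
    p_speech = sum(x * x for x in speech) / len(speech)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Target noise power for the requested SNR (dB): p_s / p_n = 10^(snr/10)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p_noise / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]
```

Sweeping `snr_db` downward and re-running recognition is one common way to trace a degradation curve per model and language.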

Tags

ASR, Speech Recognition, Robustness, Multilingual, Benchmarking

arXiv Categories

cs.AI cs.MM