Multimodal Learning Relevance: 9/10

Multi-Source Evidence Fusion for Audio Question Answering

Aivo Olev, Tanel Alumäe
arXiv: 2603.17822v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

Proposes a multi-source evidence-fusion audio question answering system that ranked first in the Interspeech 2026 challenge by a wide margin.

Key Contributions

  • Multi-source evidence fusion
  • Reliability-tiered acoustic tools
  • Verifiable reasoning chains

Methodology

Two LALMs generate independent observations, and a separate text-only reasoning model cross-checks them against outputs from acoustic tools organized into reliability tiers.
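The fusion step can be sketched as weighted agreement across reliability-tagged sources. A minimal illustration, where the tier names, weights, and acceptance threshold are all hypothetical (the paper's actual 25-tool taxonomy and scoring are not specified here):

```python
from dataclasses import dataclass

# Hypothetical reliability weights; the paper's actual tiering is not public here.
TIER_WEIGHT = {"high": 1.0, "medium": 0.6, "low": 0.3}

@dataclass
class Observation:
    source: str   # e.g. an LALM or an acoustic tool name
    claim: str    # a factual statement about the audio
    tier: str     # reliability tier tagged on the evidence

def fuse(observations, threshold=1.0):
    """Group observations by claim and score each claim by the summed
    reliability weight of the sources supporting it; keep claims whose
    combined support crosses the (assumed) threshold."""
    scores = {}
    for ob in observations:
        entry = scores.setdefault(ob.claim, {"weight": 0.0, "sources": []})
        entry["weight"] += TIER_WEIGHT[ob.tier]
        entry["sources"].append(f"{ob.source}[{ob.tier}]")
    return {c: e for c, e in scores.items() if e["weight"] >= threshold}

obs = [
    Observation("LALM-A", "speaker is female", "medium"),
    Observation("LALM-B", "speaker is female", "medium"),
    Observation("pitch-tracker", "speaker is female", "high"),
    Observation("LALM-A", "dog barking in background", "medium"),
]
accepted = fuse(obs)
```

Here the claim corroborated by both LALMs and a high-tier tool is accepted, while the single medium-tier claim is not; each accepted claim retains its source list, so every inference step stays traceable to explicit, reliability-tagged evidence.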

Original Abstract

Large audio language models (LALMs) can answer questions about speech, music, and environmental sounds, yet their internal reasoning is largely opaque and difficult to validate. We describe TalTech's solution to the Agent Track of the Interspeech 2026 Audio Reasoning Challenge, in which systems are evaluated on reasoning process quality, specifically the factual accuracy, logical soundness, and completeness of their reasoning chains. Our multi-source ensemble pipeline uses two LALMs that generate independent observations, while a separate text-only reasoning model cross-checks these against outputs from 25 acoustic tools organized into reliability tiers. By grounding every inference step in explicit, reliability-tagged evidence, the system produces dense, verifiable reasoning chains. Our system ranked first in the challenge, outperforming all competing systems by a wide margin in the challenge's reasoning-quality metric.

Tags

Audio Question Answering · Multimodal · Evidence Fusion · Reasoning

arXiv Categories

eess.AS cs.CL