Learning Multiple Utterance-Level Attribute Representations with a Unified Speech Encoder
AI Summary
Proposes a unified post-training framework that enables a speech foundation model to generate multiple types of utterance-level representations.
Key Contributions
- Proposes a unified post-training framework
- Learns multiple utterance-level attribute representations
- Validates the approach on multilingual speech retrieval and speaker recognition tasks
Methodology
Adapts a speech foundation model with supervised learning so that it produces both utterance-level semantic and speaker representations, enabling joint learning of multiple attributes from a single encoder.
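The paper does not include code; as a rough sketch of the idea, one could attach a separate learned attention-pooling head per attribute (semantic, speaker) on top of the shared encoder's frame-level outputs, each head producing its own utterance-level vector. Everything below is hypothetical: the class name, dimensions, and random placeholder weights are illustrative only (in actual post-training the heads would be optimized with supervised losses, e.g. alignment to text embeddings for the semantic head and a speaker objective for the other).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1024-d frame features from a frozen speech
# foundation model, 768-d semantic space, 256-d speaker space.
FRAME_DIM, SEM_DIM, SPK_DIM = 1024, 768, 256

class AttentivePoolingHead:
    """One utterance-level head: attention-pool frames, then project."""
    def __init__(self, in_dim, out_dim):
        # Random placeholder weights; trained in a real system.
        self.w_attn = rng.standard_normal(in_dim) / np.sqrt(in_dim)
        self.w_proj = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)

    def __call__(self, frames):
        # frames: (T, in_dim) frame-level embeddings -> (out_dim,) vector
        scores = frames @ self.w_attn                 # (T,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                      # softmax over time
        pooled = weights @ frames                     # (in_dim,)
        return pooled @ self.w_proj                   # (out_dim,)

# One head per utterance-level attribute, sharing the same encoder output.
semantic_head = AttentivePoolingHead(FRAME_DIM, SEM_DIM)
speaker_head = AttentivePoolingHead(FRAME_DIM, SPK_DIM)

# Stand-in for encoder output: T=50 frames of a single utterance.
frames = rng.standard_normal((50, FRAME_DIM))

semantic_vec = semantic_head(frames)  # utterance-level semantic embedding
speaker_vec = speaker_head(frames)    # utterance-level speaker embedding
print(semantic_vec.shape, speaker_vec.shape)  # (768,) (256,)
```

The key design point this sketch mirrors is that frame-level contextual embeddings stay shared, while each attribute gets its own lightweight aggregation into a fixed-size utterance vector.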
Original Abstract
Speech foundation models trained with self-supervised learning produce generic speech representations that support a wide range of speech processing tasks. When further adapted with supervised learning, these models can achieve strong performance on specific downstream tasks. Recent post-training approaches, such as SAMU-XLSR and SONAR, align speech representations with utterance-level semantic representations, enabling effective multimodal (speech-text) and multilingual applications. While speech foundation models typically learn contextual embeddings at the acoustic frame level, these methods learn representations at the utterance level. In this work, we extend this paradigm to arbitrary utterance-level attributes and propose a unified post-training framework that enables a single speech foundation model to generate multiple types of utterance-level representations. We demonstrate the effectiveness of this approach by jointly learning semantic and speaker representations and evaluating them on multilingual speech retrieval and speaker recognition tasks.