Few-shot Acoustic Synthesis with Multimodal Flow Matching
AI Summary
Proposes FLAC, a flow-matching-based probabilistic method for few-shot acoustic synthesis that generates scene-consistent RIRs.
Key Contributions
- Proposes FLAC, a new method for acoustic synthesis
- Introduces AGREE, a new metric for geometry-consistent evaluation
- First application of generative flow matching to RIR synthesis
Methodology
A diffusion transformer is trained with a flow-matching objective to generate RIRs conditioned on spatial, geometric, and acoustic cues.
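The flow-matching objective mentioned above regresses a model onto a target velocity field along an interpolation path between noise and data. The paper does not give its exact parameterization; below is a minimal sketch of the common rectified-flow (linear) variant, with a toy 8-sample signal standing in for an RIR and a placeholder in place of the conditional model.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_target(x1, x0, t):
    """Linear (rectified-flow) interpolation and its target velocity:
    x_t = (1 - t) * x0 + t * x1, target v = x1 - x0."""
    xt = (1.0 - t) * x0 + t * x1
    v = x1 - x0
    return xt, v

x1 = rng.standard_normal(8)  # data sample (standing in for an RIR)
x0 = rng.standard_normal(8)  # noise sample
t = 0.3                      # training time, drawn uniformly in practice
xt, v = flow_matching_target(x1, x0, t)

# A conditional model v_theta(xt, t, cond) would be regressed onto v
# with an MSE loss; a zero-predicting placeholder is used here.
loss = np.mean((np.zeros_like(xt) - v) ** 2)
```

The assumed linear path is only one choice; other flow-matching variants use different interpolants, but all reduce training to this regression on a velocity target.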
Original Abstract
Generating audio that is acoustically consistent with a scene is essential for immersive virtual environments. Recent neural acoustic field methods enable spatially continuous sound rendering but remain scene-specific, requiring dense audio measurements and costly training for each environment. Few-shot approaches improve scalability across rooms but still rely on multiple recordings and, being deterministic, fail to capture the inherent uncertainty of scene acoustics under sparse context. We introduce flow-matching acoustic generation (FLAC), a probabilistic method for few-shot acoustic synthesis that models the distribution of plausible room impulse responses (RIRs) given minimal scene context. FLAC leverages a diffusion transformer trained with a flow-matching objective to generate RIRs at arbitrary positions in novel scenes, conditioned on spatial, geometric, and acoustic cues. FLAC outperforms state-of-the-art eight-shot baselines with one-shot on both the AcousticRooms and Hearing Anything Anywhere datasets. To complement standard perceptual metrics, we further introduce AGREE, a joint acoustic-geometry embedding, enabling geometry-consistent evaluation of generated RIRs through retrieval and distributional metrics. This work is the first to apply generative flow matching to explicit RIR synthesis, establishing a new direction for robust and data-efficient acoustic synthesis.
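At inference time, a flow-matching generator like the one described produces a sample by integrating the learned velocity field from noise at t = 0 to data at t = 1. A minimal Euler-integration sketch, with a constant velocity field standing in for the trained conditional model (so the exact endpoint is known):

```python
import numpy as np

def sample_ode(v_field, x0, n_steps=100):
    """Euler integration of dx/dt = v_field(x, t) from t = 0 to t = 1,
    the generic sampling loop for a flow-matching model."""
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * v_field(x, t)
    return x

x0 = np.zeros(4)                        # noise draw (zeros for determinism)
c = np.array([1.0, -2.0, 0.5, 3.0])     # stand-in for a learned velocity
x1 = sample_ode(lambda x, t: c, x0)     # constant field: exact result x0 + c
```

In the actual method, `v_field` would be the conditioned diffusion transformer, and `x1` a generated RIR at the queried position; higher-order ODE solvers are often substituted for Euler to cut the step count.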