Multimodal Learning Relevance: 9/10

InterDyad: Interactive Dyadic Speech-to-Video Generation by Querying Intermediate Visual Guidance

Dongwei Pan, Longwei Guo, Jiazhi Guan, Luying Huang, Yiding Li, Haojie Liu, Haocheng Feng, Wei He, Kaisiyuan Wang, Hang Zhou
arXiv: 2603.23132v1 Published: 2026-03-24 Updated: 2026-03-24

AI Summary

InterDyad achieves more natural and controllable dyadic interactive video generation by routing the task through intermediate visual guidance.

Key Contributions

  • Proposes the InterDyad framework for interactive video generation driven by structured motion guidance
  • Introduces a MetaQuery-based mechanism to align audio with motion priors
  • Uses an MLLM to distill linguistic intent from audio and control reaction timing
  • Proposes RoDG to enhance lip-sync and spatial consistency

Methodology

An Interactivity Injector performs video reenactment from identity-agnostic motion priors; a MetaQuery mechanism aligns conversational audio with those priors; an MLLM distills linguistic intent to control reaction timing; and RoDG refines lip-sync and spatial consistency.
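The paper does not publish implementation details, but the MetaQuery-style alignment step can be read as a set of learnable query tokens cross-attending over audio features and emitting tokens in the motion-prior space. The sketch below is a minimal, hypothetical illustration of that pattern; all dimensions and names (`QueryAligner`, `num_queries`, `audio_dim`, `motion_dim`) are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class QueryAligner(nn.Module):
    """Toy MetaQuery-style aligner: learnable queries cross-attend to
    audio features and produce tokens in the motion-prior space.
    A hypothetical sketch, not the InterDyad implementation."""

    def __init__(self, num_queries=16, audio_dim=768, motion_dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, motion_dim))
        self.audio_proj = nn.Linear(audio_dim, motion_dim)
        self.attn = nn.MultiheadAttention(motion_dim, num_heads=4,
                                          batch_first=True)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim)
        kv = self.audio_proj(audio_feats)  # project audio into motion space
        q = self.queries.unsqueeze(0).expand(audio_feats.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)      # queries attend over the audio
        return out                         # (batch, num_queries, motion_dim)
```

The fixed number of queries decouples the output length from the audio duration, which is the usual motivation for query-based modality bridges.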

Original Abstract

Despite progress in speech-to-video synthesis, existing methods often struggle to capture cross-individual dependencies and provide fine-grained control over reactive behaviors in dyadic settings. To address these challenges, we propose InterDyad, a framework that enables naturalistic interactive dynamics synthesis via querying structural motion guidance. Specifically, we first design an Interactivity Injector that achieves video reenactment based on identity-agnostic motion priors extracted from reference videos. Building upon this, we introduce a MetaQuery-based modality alignment mechanism to bridge the gap between conversational audio and these motion priors. By leveraging a Multimodal Large Language Model (MLLM), our framework is able to distill linguistic intent from audio to dictate the precise timing and appropriateness of reactions. To further improve lip-sync quality under extreme head poses, we propose Role-aware Dyadic Gaussian Guidance (RoDG) for enhanced lip-synchronization and spatial consistency. Finally, we introduce a dedicated evaluation suite with novelly designed metrics to quantify dyadic interaction. Comprehensive experiments demonstrate that InterDyad significantly outperforms state-of-the-art methods in producing natural and contextually grounded two-person interactions. Please refer to our project page for demo videos: https://interdyad.github.io/.
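The abstract does not define RoDG formally, but "Role-aware Dyadic Gaussian Guidance" suggests a spatial prior that concentrates supervision around each speaker's mouth region. The snippet below is a speculative sketch under that reading: one 2D Gaussian per person, summed into a normalized weight map. The function name, sigma, and the summation are illustrative assumptions only.

```python
import numpy as np

def gaussian_guidance_map(h, w, centers, sigma=12.0):
    """Hypothetical RoDG-like guidance: place one 2D Gaussian at each
    person's mouth center (cy, cx) and sum them into a spatial weight
    map, normalized so the strongest focus equals 1.0. This is an
    illustrative reading, not the paper's exact formulation."""
    ys, xs = np.mgrid[0:h, 0:w]
    m = np.zeros((h, w), dtype=np.float32)
    for cy, cx in centers:
        m += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return m / m.max()
```

Such a map could weight a lip-sync loss or attention so that each role's mouth region dominates its own supervision signal, even under extreme head poses where the mouth occupies few pixels.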

Tags

speech-to-video dyadic interaction multimodal learning MLLM video generation

arXiv Categories

cs.CV