Multimodal Learning Relevance: 9/10

DIAL: Decoupling Intent and Action via Latent World Modeling for End-to-End VLA

Yi Chen, Yuying Ge, Hui Zhou, Mingyu Ding, Yixiao Ge, Xihui Liu
arXiv: 2603.29844v1 Published: 2026-03-31 Updated: 2026-03-31

AI Summary

DIAL decouples intent from action via latent world modeling, improving VLA model performance while reducing reliance on demonstration data.

Key Contributions

  • Proposes the DIAL framework, which decouples high-level decision making from low-level action execution.
  • Uses a VLM for latent world modeling, explicitly encoding intent.
  • Employs a two-stage training strategy that stabilizes optimization and preserves pre-trained knowledge.

Methodology

DIAL uses a VLM for latent visual prediction (foresight), while a lightweight policy decodes the predicted intent into robot actions, trained with a two-stage paradigm.
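The two-system pipeline can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: all names (`system2_predict`, `system1_act`, the weight matrices) are hypothetical, and plain linear maps stand in for the pre-trained VLM (System-2) and the lightweight policy (System-1).

```python
import numpy as np

rng = np.random.default_rng(0)
D_LAT, D_ACT = 32, 7  # latent width, action dims (e.g. a 7-DoF arm; assumed)

# System-2 stand-in: latent world model, current latent -> predicted future latent.
W_s2 = rng.normal(scale=0.1, size=(D_LAT, D_LAT))

# System-1 stand-in: latent inverse dynamics, (z_t, z_future) -> action.
W_s1 = rng.normal(scale=0.1, size=(2 * D_LAT, D_ACT))

def system2_predict(z_t):
    """Latent visual foresight: the predicted future latent encodes intent."""
    return np.tanh(z_t @ W_s2)

def system1_act(z_t, z_future):
    """Decode an action from the current latent plus a (predicted) future latent."""
    return np.concatenate([z_t, z_future]) @ W_s1

z_t = rng.normal(size=D_LAT)      # stand-in for VLM features of the current frame
z_hat = system2_predict(z_t)      # System-2's intent bottleneck
action = system1_act(z_t, z_hat)  # System-1 decodes a motor command
print(action.shape)               # (7,)
```

Because the intent bottleneck `z_hat` sits between the two systems and is differentiable, action losses can in principle flow back into the world model, which is what the joint optimization stage exploits.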

Original Abstract

The development of Vision-Language-Action (VLA) models has been significantly accelerated by pre-trained Vision-Language Models (VLMs). However, most existing end-to-end VLAs treat the VLM primarily as a multimodal encoder, directly mapping vision-language features to low-level actions. This paradigm underutilizes the VLM's potential in high-level decision making and introduces training instability, frequently degrading its rich semantic representations. To address these limitations, we introduce DIAL, a framework bridging high-level decision making and low-level motor execution through a differentiable latent intent bottleneck. Specifically, a VLM-based System-2 performs latent world modeling by synthesizing latent visual foresight within the VLM's native feature space; this foresight explicitly encodes intent and serves as the structural bottleneck. A lightweight System-1 policy then decodes this predicted intent together with the current observation into precise robot actions via latent inverse dynamics. To ensure optimization stability, we employ a two-stage training paradigm: a decoupled warmup phase where System-2 learns to predict latent futures while System-1 learns motor control under ground-truth future guidance within a unified feature space, followed by seamless end-to-end joint optimization. This enables action-aware gradients to refine the VLM backbone in a controlled manner, preserving pre-trained knowledge. Extensive experiments on the RoboCasa GR1 Tabletop benchmark show that DIAL establishes a new state-of-the-art, achieving superior performance with 10x fewer demonstrations than prior methods. Furthermore, by leveraging heterogeneous human demonstrations, DIAL learns physically grounded manipulation priors and exhibits robust zero-shot generalization to unseen objects and novel configurations during real-world deployment on a humanoid robot.
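The two-stage paradigm in the abstract can be made concrete with a toy loss computation. This is a hedged sketch under stated assumptions: the names and linear stand-ins are illustrative, not DIAL's code. Stage 1 trains System-2 on latent-future prediction and System-1 on action regression under ground-truth future latents; stage 2 feeds System-2's prediction into System-1, so the action loss also reaches the world model.

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(1)
D = 8
z_t = rng.normal(size=D)       # current-frame latent (stand-in for VLM features)
z_future = rng.normal(size=D)  # ground-truth future latent (from a later frame)
a_gt = rng.normal(size=3)      # demonstrated action (dimensionality assumed)

W2 = rng.normal(scale=0.1, size=(D, D))      # System-2 predictor (stand-in)
W1 = rng.normal(scale=0.1, size=(2 * D, 3))  # System-1 inverse dynamics (stand-in)

# Stage 1 (decoupled warmup): each system has its own target.
z_hat = z_t @ W2
loss_s2 = mse(z_hat, z_future)                             # latent foresight loss
loss_s1 = mse(np.concatenate([z_t, z_future]) @ W1, a_gt)  # GT-guided motor control

# Stage 2 (joint): System-1 consumes System-2's prediction, so the action loss
# would backpropagate into the world model as well in a real training setup.
loss_joint = mse(np.concatenate([z_t, z_hat]) @ W1, a_gt)
print(loss_s2 >= 0 and loss_s1 >= 0 and loss_joint >= 0)  # True
```

The warmup keeps the two objectives separated so the VLM's representations are not disrupted early on; only after both systems are competent do action-aware gradients refine the backbone.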

Tags

VLA · VLM · Robot Control · Latent World Modeling

arXiv Categories

cs.RO cs.AI cs.CV cs.LG