Multimodal Learning · Relevance: 9/10

DriveFine: Refining-Augmented Masked Diffusion VLA for Precise and Robust Driving

Chenxu Dang, Sining Ang, Yongkang Li, Haochen Tian, Jie Wang, Guang Li, Hangjun Ye, Jie Ma, Long Chen, Yan Wang
arXiv: 2602.14577v1 · Published: 2026-02-16 · Updated: 2026-02-16

AI Summary

DriveFine is a masked diffusion VLA model that combines a generation expert with a refinement expert to improve the precision and robustness of autonomous-driving decisions.

Key Contributions

  • Proposes DriveFine, a masked diffusion VLA model
  • Designs a plug-and-play block-MoE structure that decouples the generation and refinement experts
  • Designs a hybrid reinforcement learning strategy for effective exploration of the refinement expert

Methodology

Builds on a masked diffusion VLA, injects a refinement expert through a block-MoE structure, and trains it with a hybrid reinforcement learning strategy.
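The self-correction property of masked diffusion decoding can be illustrated with a toy sketch: positions are unmasked in order of confidence, and a refinement pass re-masks and repredicts the least confident commitment. Everything here (the `toy_predict` stand-in, the vocabulary, the scheduling) is an assumption for illustration, not DriveFine's actual decoder.

```python
import random

MASK = "<mask>"

def toy_predict(tokens, vocab, rng):
    """Stand-in for the planner: propose a token and a confidence score
    for every masked position. Purely illustrative, not the real model."""
    return {i: (rng.choice(vocab), rng.random())
            for i, t in enumerate(tokens) if t == MASK}

def masked_diffusion_decode(length, vocab, steps=4, seed=0):
    """Iteratively commit the most confident proposals (flexible decoding
    order), then run one refinement pass that re-masks the least confident
    committed token -- the self-correction irreversible decoding lacks."""
    rng = random.Random(seed)
    tokens, conf = [MASK] * length, [0.0] * length
    for _ in range(steps):
        proposals = toy_predict(tokens, vocab, rng)
        if not proposals:
            break
        k = max(1, len(proposals) // 2)  # commit about half per step
        best = sorted(proposals.items(), key=lambda kv: -kv[1][1])[:k]
        for i, (tok, c) in best:
            tokens[i], conf[i] = tok, c
    # Refinement pass: revisit the weakest committed position.
    worst = min(range(length), key=lambda i: conf[i])
    tokens[worst] = MASK
    tokens[worst], conf[worst] = toy_predict(tokens, vocab, rng)[worst]
    return tokens
```

With `length=6` and `steps=4`, every position is filled and exactly one low-confidence token gets a second chance; in the real model the proposals would come from the VLA backbone rather than a random stand-in.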

Original Abstract

Vision-Language-Action (VLA) models for autonomous driving increasingly adopt generative planners trained with imitation learning followed by reinforcement learning. Diffusion-based planners suffer from modality alignment difficulties, low training efficiency, and limited generalization. Token-based planners are plagued by cumulative causal errors and irreversible decoding. In summary, the two dominant paradigms exhibit complementary strengths and weaknesses. In this paper, we propose DriveFine, a masked diffusion VLA model that combines flexible decoding with self-correction capabilities. In particular, we design a novel plug-and-play block-MoE, which seamlessly injects a refinement expert on top of the generation expert. By enabling explicit expert selection during inference and gradient blocking during training, the two experts are fully decoupled, preserving the foundational capabilities and generic patterns of the pretrained weights, which highlights the flexibility and extensibility of the block-MoE design. Furthermore, we design a hybrid reinforcement learning strategy that encourages effective exploration of the refinement expert while maintaining training stability. Extensive experiments on the NAVSIM v1, v2, and Navhard benchmarks demonstrate that DriveFine exhibits strong efficacy and robustness. The code will be released at https://github.com/MSunDYY/DriveFine.
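The "explicit expert selection during inference, gradient blocking during training" decoupling can be sketched abstractly. The class name, expert callables, and routing interface below are assumptions for illustration only; the paper's released code may structure this entirely differently.

```python
class BlockMoE:
    """Toy sketch of a plug-and-play block-MoE: a pretrained generation
    expert plus an injected refinement expert, selected explicitly per
    call rather than by a learned router."""

    def __init__(self, generation_expert, refinement_expert):
        self.experts = {
            "generation": generation_expert,
            "refinement": refinement_expert,
        }

    def forward(self, block, expert="generation"):
        # Explicit expert selection at inference time. During training,
        # a framework-level stop-gradient (e.g. tensor.detach() in
        # PyTorch) between the experts would block gradients from the
        # refinement expert into the pretrained generation weights,
        # keeping the two fully decoupled.
        return self.experts[expert](block)

# Usage: the generation expert drafts a plan block, then the
# refinement expert revises its last element.
gen = lambda block: block + ["draft_waypoint"]
ref = lambda block: block[:-1] + ["refined_waypoint"]
moe = BlockMoE(gen, ref)
plan = moe.forward([], expert="generation")
plan = moe.forward(plan, expert="refinement")
```

Because selection is explicit, the refinement expert can be attached or removed without touching the generation path, which is the plug-and-play property the abstract emphasizes.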

Tags

Autonomous Driving · VLA Models · Diffusion Models · Reinforcement Learning

arXiv Category

cs.CV