Causal Scene Narration with Runtime Safety Supervision for Vision-Language-Action Driving
AI Summary
Proposes Causal Scene Narration, which improves the performance of Vision-Language-Action (VLA) driving models through causal scene narration and runtime safety supervision.
Main Contributions
- Proposes the Causal Scene Narration (CSN) method
- Combines it with Simplex-based runtime safety supervision
- Introduces training-time alignment via Plackett-Luce DPO
Methodology
Restructures the VLA model's textual inputs through intent-constraint alignment, quantitative grounding, and structured separation, and combines this with runtime safety supervision and training-time alignment to improve driving performance.
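The paper's prompt-restructuring code is not reproduced here, so the following is only a minimal Python sketch of the idea under stated assumptions: `SceneInputs`, `causal_scene_narration`, and the `[INTENT]`/`[CONSTRAINT]`/`[STATE]` tags are hypothetical names chosen for illustration, not the authors' actual format.

```python
# Hypothetical sketch of CSN-style prompt restructuring (all names and tags
# are illustrative assumptions, not the paper's implementation).
from dataclasses import dataclass, field
from typing import List


@dataclass
class SceneInputs:
    """Raw, disconnected text fragments a VLA driving model might receive."""
    navigation_command: str                                  # e.g. "turn left at the next intersection"
    hazards: List[str] = field(default_factory=list)         # e.g. ["pedestrian crossing ahead"]
    traffic_state: List[str] = field(default_factory=list)   # e.g. ["traffic light: red"]


def causal_scene_narration(s: SceneInputs) -> str:
    """Restructure fragments using the three CSN ingredients named in the paper:
    intent-constraint alignment, quantitative grounding, structured separation."""
    # 1. Intent-constraint alignment: state the maneuver first, then attach
    #    each constraint to it explicitly, phrased as a reason.
    intent = f"[INTENT] Planned maneuver: {s.navigation_command}."
    constraints = [
        f"[CONSTRAINT] {h} -> this directly limits how the maneuver may be executed."
        for h in s.hazards
    ]
    # 2. Quantitative grounding: keep numeric facts (distances, speeds, light
    #    states) verbatim so the model does not have to infer them.
    grounding = [f"[STATE] {t}" for t in s.traffic_state]
    # 3. Structured separation: distinct, labeled sections rather than one
    #    undifferentiated blob of text.
    return "\n".join([intent, *constraints, *grounding])


if __name__ == "__main__":
    scene = SceneInputs(
        navigation_command="turn left at the next intersection",
        hazards=["pedestrian crossing ahead"],
        traffic_state=["traffic light: red", "lead vehicle 12 m ahead"],
    )
    print(causal_scene_narration(scene))
```

Because this restructuring happens purely on the text side, it matches the abstract's claim of being applicable at inference time with zero GPU cost.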
Original Abstract
Vision-Language-Action (VLA) models for autonomous driving must integrate diverse textual inputs, including navigation commands, hazard warnings, and traffic state descriptions, yet current systems often present these as disconnected fragments, forcing the model to discover on its own which environmental constraints are relevant to the current maneuver. We introduce Causal Scene Narration (CSN), which restructures VLA text inputs through intent-constraint alignment, quantitative grounding, and structured separation, at inference time with zero GPU cost. We complement CSN with Simplex-based runtime safety supervision and training-time alignment via Plackett-Luce DPO with negative log-likelihood (NLL) regularization. A multi-town closed-loop CARLA evaluation shows that CSN improves Driving Score by +31.1% on original LMDrive and +24.5% on the preference-aligned variant. A controlled ablation reveals that causal structure accounts for 39.1% of this gain, with the remainder attributable to information content alone. A perception noise ablation confirms that CSN's benefit is robust to realistic sensing errors. Semantic safety supervision improves Infraction Score, while reactive Time-To-Collision monitoring degrades performance, demonstrating that intent-aware monitoring is needed for VLA systems.
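For reference, a plausible form of the training-time alignment objective, assuming the standard Plackett-Luce generalization of DPO with an NLL term on the top-ranked response; the ranking depth K and the weight lambda are assumptions, and the paper's exact formulation may differ.

```latex
% Implicit reward over policy pi_theta and frozen reference pi_ref:
\[
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
\]
% Plackett-Luce DPO loss for a ranking y_{\tau(1)} \succ \cdots \succ y_{\tau(K)}:
\[
\mathcal{L}_{\mathrm{PL\text{-}DPO}}
  = - \mathbb{E}\left[ \sum_{k=1}^{K} \log
      \frac{\exp\!\big(r_\theta(x, y_{\tau(k)})\big)}
           {\sum_{j=k}^{K} \exp\!\big(r_\theta(x, y_{\tau(j)})\big)} \right]
\]
% NLL regularization on the best-ranked response, with assumed weight lambda:
\[
\mathcal{L}
  = \mathcal{L}_{\mathrm{PL\text{-}DPO}}
    + \lambda \, \mathbb{E}\!\left[ -\log \pi_\theta\big(y_{\tau(1)} \mid x\big) \right]
\]
```

The NLL term keeps the policy anchored to the highest-ranked trajectory narration so that preference optimization does not degrade plain likelihood on good behavior.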