TAG: Target-Agnostic Guidance for Stable Object-Centric Inference in Vision-Language-Action Models
AI Summary
Proposes TAG, a target-agnostic guidance mechanism that improves the grounding accuracy and robustness of VLA models in cluttered scenes.
Key Contributions
- Proposes TAG, an inference-time guidance mechanism that reduces distractor- and appearance-induced bias in VLA policies.
- TAG requires no changes to the policy architecture and integrates easily into existing VLA policies.
- On the LIBERO, LIBERO-Plus, and VLABench benchmarks, TAG substantially improves robustness.
Methodology
Inspired by classifier-free guidance, TAG contrasts the policy's predictions under the original observation and a target-erased observation, and uses their difference as a residual steering signal.
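The contrast-and-steer idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `policy`, `obs_erased`, and `guidance_scale`, and the exact combination rule, are assumptions based on the CFG analogy described above.

```python
import numpy as np

def tag_guidance(policy, obs, obs_erased, guidance_scale=1.5):
    """CFG-style residual guidance (illustrative sketch).

    Runs the same policy on the original and the target-erased
    observation, then adds their difference back as a residual that
    amplifies the action component attributable to object evidence.
    """
    a_orig = np.asarray(policy(obs), dtype=float)        # prediction from the full scene
    a_erased = np.asarray(policy(obs_erased), dtype=float)  # prediction with the target erased
    # Residual steering: push the action along the direction explained
    # by the (erased) target object.
    return a_orig + guidance_scale * (a_orig - a_erased)

# Toy usage with an identity "policy" standing in for a VLA action head.
action = tag_guidance(lambda o: o,
                      obs=np.array([1.0, 2.0]),
                      obs_erased=np.array([0.5, 1.0]))
```

With `guidance_scale = 0` this reduces to the unguided policy, which is a useful sanity check when tuning the scale.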
Original Abstract
Vision-Language-Action (VLA) policies have shown strong progress in mapping language instructions and visual observations to robotic actions, yet their reliability degrades in cluttered scenes with distractors. By analyzing failure cases, we find that many errors do not arise from infeasible motions, but from instance-level grounding failures: the policy often produces a plausible grasp trajectory that lands slightly off-target or even on the wrong object instance. To address this issue, we propose TAG (Target-Agnostic Guidance), a simple inference-time guidance mechanism that explicitly reduces distractor- and appearance-induced bias in VLA policies. Inspired by classifier-free guidance (CFG), TAG contrasts policy predictions under the original observation and an object-erased observation, and uses their difference as a residual steering signal that strengthens the influence of object evidence in the decision process. TAG does not require modifying the policy architecture and can be integrated with existing VLA policies with minimal training and inference changes. We evaluate TAG on standard manipulation benchmarks, including LIBERO, LIBERO-Plus, and VLABench, where it consistently improves robustness under clutter and reduces near-miss and wrong-object executions.