World Action Verifier: Self-Improving World Models via Forward-Inverse Asymmetry
AI Summary
Proposes the World Action Verifier (WAV) framework, which enables world models to self-improve by exploiting forward-inverse asymmetry.
Key Contributions
- Proposes a world-model verification method based on state plausibility and action reachability
- Uses video corpora to generate diverse subgoals and a sparse inverse model to infer actions
- Improves world-model performance in under-explored regimes through cycle-consistency verification
Methodology
WAV generates subgoals from video data, infers actions with an inverse model, and uses cycle-consistency verification to improve the accuracy of the world model's predictions.
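The cycle-consistency check described above can be illustrated with a minimal toy sketch. All names here (`subgoal_generator`, `inverse_model`, `world_model`, `verify`) and the linear toy dynamics are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of WAV-style cycle-consistency verification.
# The models below are toy stand-ins, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

def subgoal_generator(state):
    """Propose a subgoal by perturbing the state (standing in for
    diverse subgoals mined from action-free video corpora)."""
    return state + rng.normal(scale=0.5, size=state.shape)

def inverse_model(state, subgoal):
    """Sparse inverse model: infer the action from a low-dimensional
    subset of state features (here, only the first two dimensions)."""
    return (subgoal - state)[:2]

def world_model(state, action):
    """Forward model: apply the action to the action-relevant features."""
    next_state = state.copy()
    next_state[:2] += action
    return next_state

def verify(state, tol=1e-3):
    """Cycle consistency: subgoal -> inferred action -> forward rollout.
    The rollout should land near the subgoal on the action-relevant
    features; a large residual flags a world-model prediction error."""
    g = subgoal_generator(state)
    a = inverse_model(state, g)
    s_next = world_model(state, a)
    error = float(np.linalg.norm(s_next[:2] - g[:2]))
    return error < tol, error
```

In this toy the forward model is exact, so the cycle closes with zero residual; with a learned world model, a large residual in under-explored regimes would signal an unreliable prediction to be corrected.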
Original Abstract
General-purpose world models promise scalable policy evaluation, optimization, and planning, yet achieving the required level of robustness remains challenging. Unlike policy learning, which primarily focuses on optimal actions, a world model must be reliable over a much broader range of suboptimal actions, which are often insufficiently covered by action-labeled interaction data. To address this challenge, we propose World Action Verifier (WAV), a framework that enables world models to identify their own prediction errors and self-improve. The key idea is to decompose action-conditioned state prediction into two factors -- state plausibility and action reachability -- and verify each separately. We show that these verification problems can be substantially easier than predicting future states due to two underlying asymmetries: the broader availability of action-free data and the lower dimensionality of action-relevant features. Leveraging these asymmetries, we augment a world model with (i) a diverse subgoal generator obtained from video corpora and (ii) a sparse inverse model that infers actions from a subset of state features. By enforcing cycle consistency among generated subgoals, inferred actions, and forward rollouts, WAV provides an effective verification mechanism in under-explored regimes, where existing methods typically fail. Across nine tasks spanning MiniGrid, RoboMimic, and ManiSkill, our method achieves 2x higher sample efficiency while improving downstream policy performance by 18%.