SceneTeract: Agentic Functional Affordances and VLM Grounding in 3D Scenes
AI Summary
SceneTeract verifies the functionality of 3D scenes, reveals VLMs' shortcomings in reasoning about physical feasibility, and serves as a reward engine for VLM post-training.
Key Contributions
- Proposes the SceneTeract framework for verifying the functionality of 3D scenes
- Uncovers frequent functional failures in synthetic indoor environments
- Reveals the deficiencies of VLMs in predicting functional affordances
- Uses SceneTeract as a reward engine for VLM post-training
Methodology
Decomposes complex activities into sequences of atomic actions, and validates each step's accessibility requirements through explicit physical and geometric simulation.
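The decompose-then-check loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: all class names, fields, and thresholds (`AgentProfile`, `AtomicAction`, the specific distances) are hypothetical stand-ins for the agent-conditioned reachability and clearance checks the framework performs.

```python
from dataclasses import dataclass

# Hypothetical sketch of SceneTeract-style verification: decompose an
# activity into atomic actions, then run per-step geometric checks
# (reachability, clearance) conditioned on an embodied agent profile.
# All names and numbers are illustrative assumptions, not the real API.

@dataclass
class AgentProfile:
    arm_reach: float       # maximum reach distance in metres
    body_width: float      # narrowest passage the agent can fit through

@dataclass
class AtomicAction:
    name: str
    target_distance: float    # distance from agent to the interaction point
    passage_clearance: float  # narrowest free space along the approach path

def check_step(action: AtomicAction, agent: AgentProfile) -> bool:
    """A step passes only if the target is reachable and the path is wide enough."""
    reachable = action.target_distance <= agent.arm_reach
    clear = action.passage_clearance >= agent.body_width
    return reachable and clear

def verify_activity(actions: list[AtomicAction], agent: AgentProfile) -> dict:
    """Per-step pass/fail; the activity is functional only if every step passes."""
    steps = {a.name: check_step(a, agent) for a in actions}
    return {"steps": steps, "functional": all(steps.values())}

agent = AgentProfile(arm_reach=0.8, body_width=0.5)
activity = [
    AtomicAction("walk_to_fridge", target_distance=0.0, passage_clearance=0.7),
    AtomicAction("open_fridge_door", target_distance=0.6, passage_clearance=0.9),
    AtomicAction("grasp_bottle", target_distance=1.2, passage_clearance=0.9),
]
report = verify_activity(activity, agent)
print(report["functional"])  # grasp_bottle exceeds arm_reach, so False
```

Note how a single failed atomic step (here, an out-of-reach grasp) marks the whole activity as non-functional, which mirrors the paper's finding that a semantically plausible scene can still fail basic interactions.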
Original Abstract
Embodied AI depends on interactive 3D environments that support meaningful activities for diverse users, yet assessing their functional affordances remains a core challenge. We introduce SceneTeract, a framework that verifies 3D scene functionality under agent-specific constraints. Our core contribution is a grounded verification engine that couples high-level semantic reasoning with low-level geometric checks. SceneTeract decomposes complex activities into sequences of atomic actions and validates each step against accessibility requirements (e.g., reachability, clearance, and navigability) conditioned on an embodied agent profile, using explicit physical and geometric simulations. We deploy SceneTeract to perform an in-depth evaluation of (i) synthetic indoor environments, uncovering frequent functional failures that prevent basic interactions, and (ii) the ability of frontier Vision-Language Models (VLMs) to reason about and predict functional affordances, revealing systematic mismatches between semantic confidence and physical feasibility even for the strongest current models. Finally, we leverage SceneTeract as a reward engine for VLM post-training, enabling scalable distillation of geometric constraints into reasoning models. We release the SceneTeract verification suite and data to bridge perception and physical reality in embodied 3D scene understanding.