Multimodal Learning (relevance: 9/10)

EgoActor: Grounding Task Planning into Spatial-aware Egocentric Actions for Humanoid Robots via Visual-Language Models

Yu Bai, MingMing Yu, Chaojie Li, Ziyi Bai, Xinlong Wang, Börje F. Karlsson
arXiv: 2602.04515v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

EgoActor uses a VLM to ground high-level instructions into concrete, spatially aware robot actions.

Key Contributions

  • Proposes the EgoActing task, which couples task planning with concrete robot actions
  • Introduces EgoActor, a unified and scalable vision-language model
  • Trains on multi-source data to achieve robustness and generalization in real-world scenarios

Methodology

A unified VLM is trained on vision-language data from both real-world and simulated environments, enabling spatially aware reasoning over robot actions.
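As a rough illustration of this pipeline, the sketch below shows what a single inference step might look like: an egocentric RGB frame and a high-level instruction go into a VLM, and a structured, spatially aware action comes back out. All names here (`predict_action`, `RobotAction`, the `vlm.generate` call, the prompt and output format) are assumptions for illustration, not the authors' actual interface.

```python
# Hedged sketch of grounding an instruction into one structured action.
# The VLM interface (vlm.generate) and the text-based action format are
# assumptions; the real EgoActor model and output schema may differ.
from dataclasses import dataclass
from typing import Dict


@dataclass
class RobotAction:
    """One decoded action step: an action type plus its parameters."""
    action_type: str              # e.g. "walk", "turn", "look", "grasp"
    parameters: Dict[str, str]    # e.g. {"distance_m": "0.5"} or {"yaw_deg": "30"}


def parse_parameters(text: str) -> Dict[str, str]:
    """Loosely parse 'key=value, key=value' pairs; a real system would validate."""
    out: Dict[str, str] = {}
    for pair in text.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)
            out[key.strip()] = value.strip()
    return out


def predict_action(vlm, egocentric_rgb, instruction: str) -> RobotAction:
    """Ground a high-level instruction into the next concrete action.

    `vlm` is assumed to expose a generate(image, prompt) -> str method.
    """
    prompt = (
        "You are a humanoid robot. Given your egocentric view and the task "
        f"'{instruction}', reply with the next action as 'type | key=value, ...'."
    )
    raw = vlm.generate(image=egocentric_rgb, prompt=prompt)
    action_type, _, params = raw.partition("|")
    return RobotAction(action_type=action_type.strip(),
                       parameters=parse_parameters(params))
```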

Original Abstract

Deploying humanoid robots in real-world settings is fundamentally challenging, as it demands tight integration of perception, locomotion, and manipulation under partial-information observations and dynamically changing environments, as well as robust transitions between sub-tasks of different types. Towards addressing these challenges, we propose a novel task - EgoActing, which requires directly grounding high-level instructions into various, precise, spatially aware humanoid actions. We further instantiate this task by introducing EgoActor, a unified and scalable vision-language model (VLM) that can predict locomotion primitives (e.g., walk, turn, move sideways, change height), head movements, manipulation commands, and human-robot interactions to coordinate perception and execution in real time. We leverage broad supervision over egocentric RGB-only data from real-world demonstrations, spatial reasoning question-answering, and simulated environment demonstrations, enabling EgoActor to make robust, context-aware decisions and perform fluent action inference (under 1s) with both 8B and 4B parameter models. Extensive evaluations in both simulated and real-world environments demonstrate that EgoActor effectively bridges abstract task planning and concrete motor execution, while generalizing across diverse tasks and unseen environments.
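The abstract enumerates the kinds of outputs the model coordinates: locomotion primitives (walk, turn, move sideways, change height), head movements, manipulation commands, and human-robot interactions. The minimal sketch below shows one way such an action space could be represented in code; the class and field names are illustrative assumptions, not definitions from the paper.

```python
# Hedged sketch of the action space described in the abstract.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Optional


class LocomotionPrimitive(Enum):
    WALK = "walk"
    TURN = "turn"
    MOVE_SIDEWAYS = "move_sideways"
    CHANGE_HEIGHT = "change_height"


@dataclass
class HeadMovement:
    yaw_deg: float = 0.0     # look left/right
    pitch_deg: float = 0.0   # look up/down


@dataclass
class EgoAction:
    """One action step; typically only one of the optional fields is set."""
    locomotion: Optional[LocomotionPrimitive] = None
    locomotion_args: Dict[str, float] = field(default_factory=dict)  # e.g. {"distance_m": 1.0}
    head: Optional[HeadMovement] = None
    manipulation: Optional[str] = None   # e.g. "pick up the cup"
    interaction: Optional[str] = None    # e.g. a spoken reply to a person


# Example: turn 30 degrees, as an instance of this hypothetical schema.
turn_step = EgoAction(locomotion=LocomotionPrimitive.TURN,
                      locomotion_args={"yaw_deg": 30.0})
```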

Tags

Robotics, Vision-Language Models, Embodied AI, Task Planning, Behavior Planning

arXiv Categories

cs.RO cs.CV