AI Agents relevance: 8/10

SAIL: Test-Time Scaling for In-Context Imitation Learning with VLM

Makoto Sato, Yusuke Iwasawa, Yujin Tang, So Kuroki
arXiv: 2603.08269v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

SAIL proposes a VLM-based in-context imitation learning framework that improves robot skills through iterative trajectory refinement.

Key Contributions

  • Proposes the SAIL framework, which scales imitation learning with test-time compute
  • Uses a VLM for trajectory evaluation and iterative refinement
  • Validates the framework's effectiveness across multiple robotic manipulation tasks

Methodology

Uses Monte Carlo Tree Search to iteratively refine trajectories, combining in-context retrieval, VLM-based scoring, and step-level feedback to improve robot imitation learning performance.
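The search loop described above can be sketched as follows. This is a minimal illustration, not SAIL's implementation: `vlm_score` and `refine` are hypothetical stand-ins for the paper's VLM-based scoring and feedback-guided refinement, and the trajectory is reduced to a list of scalar waypoints.

```python
import math
import random

random.seed(0)


def vlm_score(traj):
    # Hypothetical stand-in for SAIL's VLM-based trajectory scorer:
    # here, closer waypoints to a target value of 1.0 score higher.
    return 1.0 - sum(abs(w - 1.0) for w in traj) / len(traj)


def refine(traj):
    # Hypothetical stand-in for step-level-feedback-guided refinement:
    # perturb one waypoint at random.
    i = random.randrange(len(traj))
    new = list(traj)
    new[i] += random.uniform(-0.2, 0.2)
    return new


class Node:
    """A tree node holding a COMPLETE trajectory (as in SAIL, where
    edges correspond to trajectory refinements)."""

    def __init__(self, traj, parent=None):
        self.traj = traj
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0


def ucb(node, c=1.4):
    # Standard UCB1 selection rule; unvisited children are tried first.
    if node.visits == 0:
        return float("inf")
    exploit = node.value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore


def trajectory_mcts(initial_traj, iterations=200):
    root = Node(initial_traj)
    best_score, best_traj = vlm_score(initial_traj), initial_traj
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # Expansion: one edge = one trajectory refinement.
        child = Node(refine(node.traj), parent=node)
        node.children.append(child)
        # Evaluation: score the full refined trajectory.
        score = vlm_score(child.traj)
        if score > best_score:
            best_score, best_traj = score, child.traj
        # Backpropagation: update statistics up to the root.
        n = child
        while n is not None:
            n.visits += 1
            n.value += score
            n = n.parent
    return best_score, best_traj


score, traj = trajectory_mcts([0.5, 0.5, 0.5])
print(round(score, 3))
```

More test-time compute here simply means more iterations, mirroring the paper's claim that success rates scale with the search budget.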

Original Abstract

In-context imitation learning allows robots to acquire skills from demonstrations, yet one-shot trajectory generation remains fragile under environmental variation. We propose SAIL, a framework that reframes robot imitation as an iterative refinement problem capable of scaling with test-time compute. SAIL utilizes Monte Carlo Tree Search, where each node is a complete trajectory and edges correspond to trajectory refinements. The process is guided by three core components: an automated archive of successful trajectories for contextually relevant retrieval, a vision-language-model-based scoring mechanism for trajectory evaluation, and a step-level feedback mechanism that provides trajectory-aligned scores for iterative refinement. Experiments across six diverse manipulation tasks in simulation, together with real-world validation, clearly demonstrate that increasing test-time compute consistently improves success rates, achieving up to 95% on complex tasks. Our results suggest that trajectory-level test-time scaling is a robust path toward more generalizable robotic agents.

Tags

Robotics, Imitation Learning, VLM, Monte Carlo Tree Search, Iterative Refinement

arXiv Categories

cs.RO cs.AI