AnoleVLA: Lightweight Vision-Language-Action Model with Deep State Space Models for Mobile Manipulation
AI Summary
AnoleVLA is a lightweight Vision-Language-Action model that uses a deep state space model to process multimodal sequences efficiently, improving manipulation performance for mobile robots.
Main Contributions
- Proposes AnoleVLA, a lightweight VLA model
- Uses a deep state space model to process visual and textual inputs
- Outperforms a large-scale VLA model in real-world environments
Methodology
A deep state space model processes visual and textual sequences to generate robot trajectories; the approach is validated through experiments in both simulation and real-world environments.
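As a rough illustration of this pipeline, the sketch below shows how a stack of simple diagonal state space blocks could fuse projected vision and language tokens and decode a chunk of future actions. This is a minimal sketch under assumed choices: the module names, feature dimensions, action horizon, and the specific diagonal recurrence are illustrative placeholders, not AnoleVLA's actual architecture or released code.

```python
# Illustrative sketch only: a generic SSM-based vision-language-to-action policy.
# All dimensions and module names are assumptions chosen for clarity.
import torch
import torch.nn as nn


class DiagonalSSMBlock(nn.Module):
    """One SSM layer: per-channel recurrence h_t = a*h_{t-1} + b*u_t, readout y_t = c*h_t."""

    def __init__(self, dim: int, state_dim: int = 16):
        super().__init__()
        # Sigmoid-parameterized decay keeps the recurrence stable (0 < a < 1).
        self.log_a = nn.Parameter(torch.randn(dim, state_dim) * 0.1 - 1.0)
        self.b = nn.Parameter(torch.randn(dim, state_dim) * 0.1)
        self.c = nn.Parameter(torch.randn(dim, state_dim) * 0.1)
        self.norm = nn.LayerNorm(dim)
        self.out = nn.Linear(dim, dim)
        self.state_dim = state_dim

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, seq_len, dim). Sequential scan for readability; a parallel
        # scan would be used in practice for speed.
        batch, seq_len, dim = u.shape
        a = torch.sigmoid(self.log_a)                      # (dim, state_dim)
        h = u.new_zeros(batch, dim, self.state_dim)
        ys = []
        for t in range(seq_len):
            h = a * h + self.b * u[:, t, :, None]          # update hidden state
            ys.append((h * self.c).sum(-1))                # per-channel readout
        y = torch.stack(ys, dim=1)                         # (batch, seq_len, dim)
        return u + self.out(self.norm(y))                  # residual connection


class SSMPolicySketch(nn.Module):
    """Fuses vision and language tokens with stacked SSM blocks, then decodes
    a chunk of future actions from the final sequence position (hypothetical)."""

    def __init__(self, dim=256, depth=4, action_dim=7, horizon=8):
        super().__init__()
        self.vision_proj = nn.Linear(512, dim)  # e.g. patch features from a small image encoder
        self.text_proj = nn.Linear(384, dim)    # e.g. instruction-token embeddings
        self.blocks = nn.ModuleList(DiagonalSSMBlock(dim) for _ in range(depth))
        self.action_head = nn.Linear(dim, action_dim * horizon)
        self.action_dim, self.horizon = action_dim, horizon

    def forward(self, vision_tokens, text_tokens):
        # Concatenate modalities into one sequence and scan it with the SSM stack.
        seq = torch.cat([self.text_proj(text_tokens), self.vision_proj(vision_tokens)], dim=1)
        for block in self.blocks:
            seq = block(seq)
        # Decode an action-trajectory chunk from the last sequence position.
        traj = self.action_head(seq[:, -1])
        return traj.view(-1, self.horizon, self.action_dim)


if __name__ == "__main__":
    model = SSMPolicySketch()
    vision = torch.randn(2, 64, 512)   # 64 image-patch features per frame
    text = torch.randn(2, 16, 384)     # 16 instruction-token embeddings
    print(model(vision, text).shape)   # torch.Size([2, 8, 7])
```

The linear recurrence is what gives SSM backbones their appeal here: inference cost grows linearly with sequence length and the hidden state can be carried across timesteps, which is consistent with the paper's emphasis on lightweight, fast trajectory generation on resource-constrained robots.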
Original Abstract
In this study, we address the problem of language-guided robotic manipulation, where a robot is required to manipulate a wide range of objects based on visual observations and natural language instructions. This task is essential for service robots that operate in human environments, and requires safety, efficiency, and task-level generality. Although Vision-Language-Action models (VLAs) have demonstrated strong performance for this task, their deployment in resource-constrained environments remains challenging because of the computational cost of standard transformer backbones. To overcome this limitation, we propose AnoleVLA, a lightweight VLA that uses a deep state space model to process multimodal sequences efficiently. The model leverages its lightweight and fast sequential state modeling to process visual and textual inputs, which allows the robot to generate trajectories efficiently. We evaluated the proposed method in both simulation and physical experiments. Notably, in real-world evaluations, AnoleVLA outperformed a representative large-scale VLA by 21 points in task success rate while achieving an inference speed approximately three times faster.