RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment
AI Summary
RAAP improves a robot's ability to predict object actions in unseen environments through retrieval augmentation and alignment learning.
Main Contributions
- Proposes the RAAP framework, which combines retrieval with alignment-based learning to predict actions
- Decouples static contact localization from dynamic action direction for better transfer
- Fuses information from multiple references with a dual-weighted attention mechanism
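The dual-weighted fusion of multiple references could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: it assumes the two weights are a learned attention score from query-reference feature similarity and the retrieval similarity score, combined multiplicatively.

```python
import numpy as np

def dual_weighted_fusion(query, ref_feats, ref_dirs, sim_scores):
    """Fuse action directions from multiple retrieved references.

    Dual weighting (an assumption about the mechanism): each reference
    contributes according to (1) a softmax attention weight computed from
    query-reference feature similarity and (2) its retrieval similarity
    score; the two weights are multiplied and renormalized.
    """
    # (1) attention weights from scaled dot-product feature similarity
    logits = ref_feats @ query / np.sqrt(query.shape[0])
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    # (2) modulate by retrieval similarity, then renormalize
    w = attn * sim_scores
    w /= w.sum()
    # weighted sum of reference action directions, projected to a unit vector
    fused = w @ ref_dirs
    return fused / np.linalg.norm(fused)
```

The multiplicative combination lets a reference that is both retrieved with high confidence and attended to strongly dominate the fused direction, while either weight alone can suppress a misleading reference.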
Methodology
RAAP retrieves similar samples, transfers contact points to the query via dense correspondence, and predicts action directions with a retrieval-augmented alignment model.
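The decoupled pipeline (retrieve references, transfer the static contact point, fuse a dynamic action direction) might look like the sketch below. All function and field names here are hypothetical stand-ins; the dense-correspondence and fusion steps are delegated to caller-supplied functions because their internals are not specified at this level.

```python
import numpy as np

def predict_affordance(query_feat, database, transfer_contact, fuse_directions, k=5):
    """Sketch of a decoupled retrieval-augmented pipeline (structure assumed):
    1. Retrieve the top-k references most similar to the query.
    2. Static: transfer the contact point from the best-matching
       reference via dense correspondence (`transfer_contact`).
    3. Dynamic: predict the action direction by fusing all k
       retrieved references (`fuse_directions`).
    """
    # similarity of the query to every reference in the database
    sims = np.array([float(query_feat @ r["feat"]) for r in database])
    topk = np.argsort(sims)[::-1][:k]              # indices, most similar first
    refs = [database[i] for i in topk]
    contact = transfer_contact(refs[0], query_feat)          # static component
    direction = fuse_directions(query_feat, refs, sims[topk])  # dynamic component
    return contact, direction
```

Keeping the two components behind separate callables mirrors the decoupling in the method: the contact point comes from a single dense correspondence, while the direction aggregates evidence across all retrieved references.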
Original Abstract
Understanding object affordances is essential for enabling robots to perform purposeful and fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization and dynamic action direction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world. Project website: https://github.com/SEU-VIPGroup/RAAP.