CoFL: Continuous Flow Fields for Language-Conditioned Navigation
AI Summary
CoFL performs language-conditioned navigation by predicting a continuous flow field rather than discrete actions, and achieves zero-shot deployment in real-world scenes.
Key Contributions
- CoFL, an end-to-end language-conditioned navigation policy
- A large-scale BEV image-instruction dataset built with procedural annotation
- Zero-shot deployment of CoFL in real-world scenes
Methodology
CoFL directly maps a BEV image and a language instruction to a continuous flow field, obtains smooth trajectories from the field via numerical integration, and is trained on a procedurally annotated dataset.
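The integration step above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `toy_field` is a hypothetical stand-in for the network's predicted velocity field (which CoFL would condition on the BEV image and instruction), and simple Euler integration is assumed for the rollout.

```python
import numpy as np

def integrate_trajectory(flow_field, start, n_steps=100, dt=0.1):
    """Roll out a trajectory by Euler integration of a 2D flow field.

    flow_field: callable (x, y) -> (vx, vy), giving the instantaneous
    velocity queried at an arbitrary 2D BEV location.
    """
    traj = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        pos = traj[-1]
        vel = np.asarray(flow_field(pos[0], pos[1]), dtype=float)
        traj.append(pos + dt * vel)  # Euler step: x_{t+1} = x_t + dt * v(x_t)
    return np.stack(traj)

# Hypothetical toy field: unit velocity pointing at a fixed goal.
goal = np.array([5.0, 3.0])
def toy_field(x, y):
    d = goal - np.array([x, y])
    n = np.linalg.norm(d)
    return d / n if n > 1e-6 else np.zeros(2)

path = integrate_trajectory(toy_field, start=(0.0, 0.0), n_steps=100, dt=0.1)
```

Because the field can be re-queried at the robot's current position each control tick, the same integration loop doubles as a reactive closed-loop controller.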
Original Abstract
Language-conditioned navigation pipelines often rely on brittle modular components or costly action-sequence generation. To address these limitations, we present CoFL, an end-to-end policy that directly maps a bird's-eye view (BEV) observation and a language instruction to a continuous flow field for navigation. Instead of predicting discrete action tokens or sampling action chunks via iterative denoising, CoFL outputs instantaneous velocities that can be queried at arbitrary 2D projected locations. Trajectories are obtained by numerical integration of the predicted field, producing smooth motion that remains reactive under closed-loop execution. To enable large-scale training, we build a dataset of over 500k BEV image-instruction pairs, each procedurally annotated with a flow field and a trajectory derived from BEV semantic maps built on Matterport3D and ScanNet. By training on a mixed distribution, CoFL significantly outperforms modular Vision-Language Model (VLM)-based planners and generative policy baselines on strictly unseen scenes. Finally, we deploy CoFL zero-shot in real-world experiments with overhead BEV observations across multiple layouts, maintaining reliable closed-loop control and a high success rate.