Multimodal Learning Relevance: 9/10

UAV-Track VLA: Embodied Aerial Tracking via Vision-Language-Action Models

Qiyao Zhang, Shuhua Zheng, Jianli Sun, Chengxiang Li, Xianke Wu, Zihan Song, Zhiyong Cui, Yisheng Lv, Yonglin Tian
arXiv: 2604.02241v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Proposes the UAV-Track VLA model to improve UAVs' vision-language-action tracking capability in complex scenarios.

Key Contributions

  • Constructs a large-scale UAV vision-language-action tracking dataset and an evaluation benchmark
  • Proposes the UAV-Track VLA model, introducing a temporal compression network and a spatial-aware dual-branch decoder
  • Validates the model's superior performance in long-distance pedestrian tracking and zero-shot generalization

Methodology

Built on the $\pi_{0.5}$ architecture, the model uses a temporal compression network to capture inter-frame dynamics, and a dual-branch decoder to decouple cross-modal features and generate fine-grained actions.
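The two-stage idea (compress redundant frame features along time, then decode the fused feature through parallel grounding and action heads) can be sketched as a toy pipeline. All shapes, function names, and the pooling/linear operations below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch of the described pipeline: temporal compression
# followed by a parallel dual-branch decoder. Shapes and module choices
# are assumptions for illustration only.

def temporal_compression(frames: np.ndarray, stride: int = 2) -> np.ndarray:
    """Reduce temporal redundancy by average-pooling adjacent frame features."""
    t, d = frames.shape
    t_out = t // stride
    return frames[: t_out * stride].reshape(t_out, stride, d).mean(axis=1)

def dual_branch_decoder(fused: np.ndarray,
                        w_ground: np.ndarray,
                        w_action: np.ndarray):
    """Split one fused cross-modal feature into two parallel heads:
    a spatial grounding output (e.g. a box estimate) and a
    continuous action output (e.g. control commands)."""
    box = fused @ w_ground      # grounding branch
    action = fused @ w_action   # action branch
    return box, action

rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16))        # 8 frames, 16-dim features
compressed = temporal_compression(frames)    # -> (4, 16)
fused = compressed.mean(axis=0)              # toy stand-in for cross-modal fusion
box, action = dual_branch_decoder(fused,
                                  rng.standard_normal((16, 4)),
                                  rng.standard_normal((16, 6)))
print(compressed.shape, box.shape, action.shape)
```

In the actual model the action branch is a flow matching action expert producing continuous control, but the decoupling structure is the same: both heads read the shared fused feature in parallel.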

Original Abstract

Embodied visual tracking is crucial for Unmanned Aerial Vehicles (UAVs) executing complex real-world tasks. In dynamic urban scenarios with complex semantic requirements, Vision-Language-Action (VLA) models show great promise due to their cross-modal fusion and continuous action generation capabilities. To benchmark multimodal tracking in such environments, we construct a dedicated evaluation benchmark and a large-scale dataset encompassing over 890K frames, 176 tasks, and 85 diverse objects. Furthermore, to address temporal feature redundancy and the lack of spatial geometric priors in existing VLA models, we propose an improved VLA tracking model, UAV-Track VLA. Built upon the $\pi_{0.5}$ architecture, our model introduces a temporal compression net to efficiently capture inter-frame dynamics. Additionally, a parallel dual-branch decoder comprising a spatial-aware auxiliary grounding head and a flow matching action expert is designed to decouple cross-modal features and generate fine-grained continuous actions. Systematic experiments in the CARLA simulator validate the superior end-to-end performance of our method. Notably, in challenging long-distance pedestrian tracking tasks, UAV-Track VLA achieves a 61.76% success rate and 269.65 average tracking frames, significantly outperforming existing baselines. Furthermore, it demonstrates robust zero-shot generalization in unseen environments and reduces single-step inference latency by 33.4% (to 0.0571s) compared to the original $\pi_{0.5}$, enabling highly efficient, real-time UAV control. Data samples and demonstration videos are available at: https://github.com/Hub-Tian/UAV-Track_VLA.

Tags

UAV  Vision-Language-Action  Object Tracking  Multimodal Learning  Reinforcement Learning

arXiv Categories

cs.CV cs.RO