Multimodal Learning Relevance: 7/10

Just Zoom In: Cross-View Geo-Localization via Autoregressive Zooming

Yunus Talha Erzurumlu, Jiyong Kwag, Alper Yilmaz
arXiv: 2603.25686v1 Published: 2026-03-26 Updated: 2026-03-26

AI Summary

Proposes a cross-view geo-localization method based on autoregressive zooming that requires no contrastive learning and outperforms conventional retrieval-based methods.

Key Contributions

  • Proposes an autoregressive zooming formulation for cross-view geo-localization
  • Introduces a new, more realistic cross-view geo-localization benchmark
  • Demonstrates experimentally that the method outperforms contrastive-learning approaches

Methodology

An autoregressive model progressively zooms into a city-scale satellite map, performing coarse-to-fine spatial reasoning to localize the street-view camera.
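The zoom-in procedure described above can be sketched as a loop over a quadtree of satellite cells. This is a minimal illustrative sketch, not the paper's implementation: the quadtree layout, the `Cell` class, the `zoom_policy` placeholder, and the fixed number of steps are all assumptions; in the actual method a learned model would score the candidate cells against the street-view image.

```python
# Hypothetical sketch of autoregressive zooming over an overhead map.
# Assumes a quadtree: each step splits the current satellite cell into
# 4 children, and a policy picks one until a target resolution is reached.
from dataclasses import dataclass

@dataclass
class Cell:
    x: int      # column index at this zoom level
    y: int      # row index at this zoom level
    level: int  # 0 = whole map; each level quadruples the resolution

    def children(self):
        # The four child cells one zoom level deeper.
        return [Cell(2 * self.x + dx, 2 * self.y + dy, self.level + 1)
                for dy in (0, 1) for dx in (0, 1)]

def zoom_policy(street_view, candidates):
    # Placeholder for the learned model: it would score each candidate
    # cell against the street-view image and return the best match.
    # Here we simply pick the first child to keep the sketch runnable.
    return candidates[0]

def localize(street_view, num_steps=5):
    cell = Cell(0, 0, 0)  # start from the coarse city-scale view
    for _ in range(num_steps):
        cell = zoom_policy(street_view, cell.children())
    return cell  # terminal satellite cell at the target resolution

result = localize(street_view=None, num_steps=5)
print(result.level)  # 5
```

Each zoom step is a discrete decision conditioned on the query image and the history of previous choices, which is what makes the formulation autoregressive rather than a one-shot retrieval over a fixed embedding space.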

Original Abstract

Cross-view geo-localization (CVGL) estimates a camera's location by matching a street-view image to geo-referenced overhead imagery, enabling GPS-denied localization and navigation. Existing methods almost universally formulate CVGL as an image-retrieval problem in a contrastively trained embedding space. This ties performance to large batches and hard negative mining, and it ignores both the geometric structure of maps and the coverage mismatch between street-view and overhead imagery. In particular, salient landmarks visible from the street view can fall outside a fixed satellite crop, making retrieval targets ambiguous and limiting explicit spatial inference over the map. We propose Just Zoom In, an alternative formulation that performs CVGL via autoregressive zooming over a city-scale overhead map. Starting from a coarse satellite view, the model takes a short sequence of zoom-in decisions to select a terminal satellite cell at a target resolution, without contrastive losses or hard negative mining. We further introduce a realistic benchmark with crowd-sourced street views and high-resolution satellite imagery that reflects real capture conditions. On this benchmark, Just Zoom In achieves state-of-the-art performance, improving Recall@1 within 50 m by 5.5% and Recall@1 within 100 m by 9.6% over the strongest contrastive-retrieval baseline. These results demonstrate the effectiveness of sequential coarse-to-fine spatial reasoning for cross-view geo-localization.
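The abstract reports gains in "Recall@1 within 50 m / 100 m". As a point of clarification, this metric counts a query as correct when the top-1 predicted location lies within the distance threshold of the true camera position. A minimal sketch of such an evaluation, assuming predictions and ground truth are given as (latitude, longitude) pairs:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two WGS84 points.
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def recall_at_1(predictions, ground_truth, threshold_m):
    # Fraction of queries whose top-1 prediction falls within
    # threshold_m meters of the true camera position.
    hits = sum(
        haversine_m(*pred, *gt) <= threshold_m
        for pred, gt in zip(predictions, ground_truth)
    )
    return hits / len(ground_truth)

# Toy example: the second prediction is off by ~0.001 deg latitude (~111 m).
preds = [(40.0, -83.0), (40.001, -83.0)]
gts   = [(40.0, -83.0), (40.0, -83.0)]
print(recall_at_1(preds, gts, threshold_m=50))  # 0.5
```

The benchmark's actual evaluation protocol may differ (e.g. in how a terminal satellite cell is mapped to a point location); this sketch only illustrates the distance-thresholded recall metric itself.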

Tags

Cross-View Geo-Localization Autoregressive Models Spatial Reasoning Image Retrieval

arXiv Categories

cs.CV cs.AI