Multimodal Learning Relevance: 9/10

Robust Remote Sensing Image-Text Retrieval with Noisy Correspondence

Qiya Song, Yiqiang Xie, Yuan Sun, Renwei Dian, Xudong Kang
arXiv: 2603.28134v1 Published: 2026-03-30 Updated: 2026-03-30

AI Summary

Targeting the noisy correspondence problem in remote sensing image-text retrieval, this work proposes the robust retrieval framework RRSITR, which improves model performance in noisy settings.

Key Contributions

  • Proposes RRSITR, a robust remote sensing image-text retrieval paradigm
  • Designs a self-paced learning strategy to handle noisy correspondence
  • Presents a robust triplet loss that dynamically adjusts a soft margin

Methodology

A self-paced learning strategy partitions training sample pairs into three categories by loss magnitude and dynamically regulates the training order and sample weights; for noisy pairs, a robust triplet loss is designed.
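The partition-and-weight step described above can be sketched as follows. The quantile thresholds and the linear loss-to-weight mapping are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def partition_and_weight(losses, low_q=0.3, high_q=0.7):
    """Split training pairs by loss magnitude into clean / ambiguous /
    noisy groups and assign each pair a reliability weight.

    low_q / high_q are assumed quantile thresholds; the paper's actual
    partition rule may differ.
    """
    losses = np.asarray(losses, dtype=float)
    lo, hi = np.quantile(losses, [low_q, high_q])
    labels = np.where(losses <= lo, "clean",
             np.where(losses >= hi, "noisy", "ambiguous"))
    # Lower loss -> higher reliability; normalize weights into [0, 1].
    span = losses.max() - losses.min() + 1e-8
    weights = 1.0 - (losses - losses.min()) / span
    return labels, weights
```

In a training loop, the weights would scale each pair's loss contribution, so low-loss (likely clean) pairs dominate early epochs, mimicking the easy-to-hard curriculum the summary describes.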

Original Abstract

As a pivotal task that bridges remote sensing visual and linguistic understanding, Remote Sensing Image-Text Retrieval (RSITR) has attracted considerable research interest in recent years. However, almost all RSITR methods implicitly assume that image-text pairs are matched perfectly. In practice, acquiring a large set of well-aligned data pairs is often prohibitively expensive or even infeasible. In addition, we also notice that remote sensing datasets (e.g., RSITMD) truly contain some inaccurate or mismatched image-text descriptions. Based on the above observations, we reveal an important but untouched problem in RSITR, i.e., Noisy Correspondence (NC). To overcome these challenges, we propose a novel Robust Remote Sensing Image-Text Retrieval (RRSITR) paradigm that designs a self-paced learning strategy to mimic human cognitive learning patterns, thereby learning from easy to hard from multi-modal data with NC. Specifically, we first divide all training sample pairs into three categories based on the loss magnitude of each pair, i.e., clean sample pairs, ambiguous sample pairs, and noisy sample pairs. Then, we estimate the reliability of each training pair by assigning it a weight based on its loss value. Further, we design a new multi-modal self-paced function to dynamically regulate the training sequence and weights of the samples, thus establishing a progressive learning process. Finally, for noisy sample pairs, we present a robust triplet loss to dynamically adjust the soft margin based on semantic similarity, thereby enhancing the robustness against noise. Extensive experiments on three popular benchmark datasets demonstrate that the proposed RRSITR significantly outperforms state-of-the-art methods, especially under high noise rates. The code is available at: https://github.com/MSFLabX/RRSITR
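The robust triplet loss mentioned at the end of the abstract can be sketched in a minimal form. The linear schedule tying the soft margin to the positive pair's similarity is an assumption for illustration; the paper's actual margin function may differ:

```python
import numpy as np

def robust_triplet_loss(sim_pos, sim_neg, base_margin=0.2, scale=0.2):
    """Triplet loss with a soft margin that grows with the positive
    pair's semantic similarity.

    A confidently matched pair (high sim_pos) gets a larger margin and
    is pushed harder, while a likely-noisy pair (low sim_pos) gets a
    smaller margin and a gentler penalty.
    """
    margin = base_margin + scale * sim_pos          # assumed linear schedule
    return np.maximum(0.0, margin - sim_pos + sim_neg)
```

For a well-separated pair (e.g., `sim_pos=0.9`, `sim_neg=0.1`) the hinge is inactive, while a hard or noisy pair still incurs a loss, now attenuated by its reduced margin.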

Tags

Remote Sensing Image-Text Retrieval · Noisy Correspondence · Self-Paced Learning · Triplet Loss

arXiv Category

cs.CV