Multimodal Learning Relevance: 9/10

Cross-Modal Rationale Transfer for Explainable Humanitarian Classification on Social Media

Thi Huyen Nguyen, Koustav Rudra, Wolfgang Nejdl
arXiv: 2603.18611v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

Proposes a cross-modal, explainable framework for humanitarian classification that improves both the accuracy and the interpretability of crisis-event classification on social media.

Main Contributions

  • Proposes a cross-modal rationale transfer method that derives image rationales from text rationales.
  • Proposes an interpretable multimodal classification framework that improves the transparency of classification decisions.
  • Validates the method's effectiveness on the CrisisMMD dataset and demonstrates zero-shot generalization to unseen data.

Methodology

A vision-language transformer learns a joint representation of text and image, from which text rationales are extracted. These text rationales are then mapped onto the image to obtain image rationales, and the final classification is made from the extracted rationales.
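The core transfer step can be illustrated as follows. This is a minimal NumPy sketch, not the paper's implementation: it assumes rationale transfer works by weighting image-patch embeddings with cosine similarity to the rationale-scored text tokens; the function and variable names are hypothetical.

```python
import numpy as np

def transfer_rationales(text_scores, text_emb, img_emb, top_k=2):
    """Sketch of cross-modal rationale transfer (hypothetical):
    propagate per-token rationale scores to image patches via
    cosine similarity between token and patch embeddings."""
    # L2-normalize token and patch embeddings for cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    sim = t @ v.T                       # (tokens, patches) similarity
    patch_scores = text_scores @ sim    # patches weighted by token rationale mass
    top_patches = np.argsort(patch_scores)[::-1][:top_k]
    return patch_scores, top_patches

# Toy example: token 0 carries most rationale mass and aligns with patch 0
text_scores = np.array([0.9, 0.1])            # rationale score per text token
text_emb = np.array([[1.0, 0.0], [0.0, 1.0]])  # token embeddings
img_emb = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])  # patch embeddings
scores, top = transfer_rationales(text_scores, text_emb, img_emb)
```

The selected patches then serve as the image rationale, and both modalities' rationales feed the classifier, avoiding separate image-rationale annotation.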

Original Abstract

Advances in social media data dissemination enable the provision of real-time information during a crisis. The information comes from different classes, such as infrastructure damages, persons missing or stranded in the affected zone, etc. Existing methods attempted to classify text and images into various humanitarian categories, but their decision-making process remains largely opaque, which affects their deployment in real-life applications. Recent work has sought to improve transparency by extracting textual rationales from tweets to explain predicted classes. However, such explainable classification methods have mostly focused on text, rather than crisis-related images. In this paper, we propose an interpretable-by-design multimodal classification framework. Our method first learns the joint representation of text and image using a visual language transformer model and extracts text rationales. Next, it extracts the image rationales via the mapping with text rationales. Our approach demonstrates how to learn rationales in one modality from another through cross-modal rationale transfer, which saves annotation effort. Finally, tweets are classified based on extracted rationales. Experiments are conducted over CrisisMMD benchmark dataset, and results show that our proposed method boosts the classification Macro-F1 by 2-35% while extracting accurate text tokens and image patches as rationales. Human evaluation also supports the claim that our proposed method is able to retrieve better image rationale patches (12%) that help to identify humanitarian classes. Our method adapts well to new, unseen datasets in zero-shot mode, achieving an accuracy of 80%.

Tags

Multimodal Learning, Explainability, Crisis Event Classification, Cross-Modal Rationale Transfer

arXiv Categories

cs.CL cs.CV