Multimodal Learning Relevance: 9/10

Le MuMo JEPA: Multi-Modal Self-Supervised Representation Learning with Learnable Fusion Tokens

Ciem Cornelissen, Sam Leroux, Pieter Simoens
arXiv: 2603.24327v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

Le MuMo JEPA is a multi-modal self-supervised learning framework that learns unified representations through learnable fusion tokens.

Key Contributions

  • Proposes the Le MuMo JEPA framework for multi-modal self-supervised representation learning
  • Uses learnable fusion tokens as an information bottleneck between modalities
  • Validated on driving scenes with RGB images and LiDAR depth data

Methodology

Information from multiple modalities, such as RGB images and LiDAR depth, is fused inside a shared Transformer via learnable fusion tokens, enabling cross-modal representation learning.
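The fusion-token idea above can be sketched in a few lines of NumPy. Everything below is illustrative: the embedding dimension, token counts, and single-head attention are assumptions for the sketch, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention, single head (simplification).
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d), axis=-1) @ v

rng = np.random.default_rng(0)
d = 32                       # embedding dimension (assumed)
n_rgb, n_depth = 196, 196    # patch tokens per modality (assumed)
n_fusion = 16                # number of learnable fusion tokens (assumed)

rgb_tokens = rng.standard_normal((n_rgb, d))     # from the RGB patch stem
depth_tokens = rng.standard_normal((n_depth, d)) # from the LiDAR-depth stem
fusion_tokens = rng.standard_normal((n_fusion, d))  # learnable parameters

# Cross-modal attention: every token attends over the concatenated
# sequence of both modalities plus the fusion tokens.
seq = np.concatenate([rgb_tokens, depth_tokens, fusion_tokens], axis=0)
out = attention(seq, seq, seq)

# Pruned fusion: drop the modality-specific tokens afterwards, keeping
# only the fusion tokens as the latent bottleneck carried forward.
fused = out[-n_fusion:]
print(fused.shape)  # (16, 32)
```

Because only the small fusion-token grid survives the pruning step, all later Transformer layers operate on 16 tokens instead of 408, which is where the efficiency gain comes from.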

Original Abstract

Self-supervised learning has emerged as a powerful paradigm for learning visual representations without manual annotations, yet most methods still operate on a single modality and therefore miss the complementary structure available from heterogeneous sensors. We present Le MuMo JEPA, a self-supervised framework that learns unified representations from RGB images and aligned companion modalities. In our driving experiments, the second modality is camera-aligned LiDAR depth; we also evaluate RGB-thermal training and transfer on the Teledyne FLIR ADAS benchmark. Our approach extends LeJEPA to the multi-modal setting by learning fusion tokens that act as a latent bottleneck between modality-specific patch stems inside a shared transformer. Our default model employs a pruned fusion strategy: after an initial cross-modal attention layer, modality-specific tokens are dropped, forcing cross-modal information into the shared fusion-token grid as an efficient latent bottleneck before Sketched Isotropic Gaussian Regularization (SIGReg) is applied to the joint multimodal CLS embedding. On Waymo, Le MuMo JEPA gives the strongest performance-efficiency trade-off on downstream patch probes among the from-scratch multimodal baselines, improving CenterNet detection and dense depth while remaining competitive on segmentation. Under from-scratch training on nuScenes, Le MuMo JEPA remains the strongest model, and it also gives the best FLIR results, especially after Waymo-initialized fine-tuning. It also retains the best overall accuracy-efficiency balance in our study at substantially lower compute, memory, and estimated training time.
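The abstract's SIGReg step regularizes the joint CLS embedding toward an isotropic Gaussian via sketched 1D projections. The snippet below is a deliberately simplified moment-matching stand-in for that idea, not the paper's actual statistic: it projects a batch of embeddings onto random unit directions and penalizes each projection's mean and variance for deviating from N(0, 1).

```python
import numpy as np

def sigreg_penalty(z, n_proj=64, rng=None):
    """Simplified SIGReg-style penalty (illustrative only): sketch the
    embedding distribution through random 1D projections and penalize
    deviation of the first two moments from a standard Gaussian."""
    rng = rng or np.random.default_rng(0)
    d = z.shape[1]
    dirs = rng.standard_normal((d, n_proj))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)  # unit directions
    proj = z @ dirs                       # (batch, n_proj) 1D sketches
    mean_term = (proj.mean(axis=0) ** 2).mean()
    var_term = ((proj.var(axis=0) - 1.0) ** 2).mean()
    return mean_term + var_term

rng = np.random.default_rng(1)
z_gauss = rng.standard_normal((512, 32))   # well-spread embedding batch
z_collapsed = np.full((512, 32), 0.1)      # collapsed embedding batch
print(sigreg_penalty(z_gauss) < sigreg_penalty(z_collapsed))  # True
```

The point of the comparison: a collapsed batch (every embedding identical) has zero variance along every projection and is penalized heavily, which is how this family of regularizers prevents representation collapse without negative pairs.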

Tags

Multimodal Learning · Self-Supervised Learning · Representation Learning

arXiv Categories

cs.CV