Multimodal Learning Relevance: 9/10

M-MiniGPT4: Multilingual VLLM Alignment via Translated Data

Seung Hun Han, Youssef Mohamed, Mohamed Elhoseiny
arXiv: 2603.29467v1 Published: 2026-03-31 Updated: 2026-03-31

AI Summary

M-MiniGPT4 improves multilingual vision-language understanding through mixed-data training and a multilingual alignment stage, reaching 36% accuracy on the multilingual MMMU benchmark.

Key Contributions

  • Proposes M-MiniGPT4, a multilingual vision large language model
  • Uses a mixture of multilingual data to improve VLU performance
  • Introduces a multilingual alignment training stage
  • Open-sources the models, code, and translated datasets

Methodology

The MiniGPT4 architecture is trained on a combination of native multilingual and translated data, followed by a multilingual alignment training stage that uses parallel text corpora.
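The paper states only that parallel text corpora are used for multilingual alignment, not the exact objective. As a purely illustrative sketch (an assumption, not the authors' method), one common choice is to pull the embeddings of parallel sentence pairs together, e.g. by minimizing their mean cosine distance:

```python
import numpy as np

def alignment_loss(src_emb: np.ndarray, tgt_emb: np.ndarray) -> float:
    """Mean cosine distance between L2-normalized embeddings of
    parallel sentences (e.g. an English sentence and its translation).

    Hypothetical objective for illustration only; shapes are
    (batch, dim). The loss is 0 when paired embeddings point in
    the same direction.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    cos = np.sum(src * tgt, axis=1)       # per-pair cosine similarity
    return float(np.mean(1.0 - cos))      # average cosine distance

# Toy check: identical embeddings are perfectly aligned.
a = np.array([[1.0, 0.0], [0.0, 2.0]])
print(alignment_loss(a, a))  # 0.0
```

In practice such a loss would be computed on sentence representations from the model's text encoder and added to the usual training objective during the alignment stage.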

Original Abstract

This paper presents a Multilingual Vision Large Language Model, named M-MiniGPT4. Our model exhibits strong vision-language understanding (VLU) capabilities across 11 languages. We utilize a mixture of native multilingual and translated data to push the multilingual VLU performance of the MiniGPT4 architecture. In addition, we propose a multilingual alignment training stage that uses parallel text corpora to further enhance the multilingual capabilities of our model. M-MiniGPT4 achieves 36% accuracy on the multilingual MMMU benchmark, outperforming state-of-the-art models in the same weight class, including foundation models released after the majority of this work was completed. We open-source our models, code, and translated datasets to facilitate future research in low-resource and multilingual settings.

Tags

Multilingual Vision-Language MLLM Transfer-Learning Alignment-Training

arXiv Categories

cs.CL cs.AI