Multimodal Learning (Relevance: 9/10)

Jagle: Building a Large-Scale Japanese Multimodal Post-Training Dataset for Vision-Language Models

Issa Sugiura, Keito Sasagawa, Keisuke Nakao, Koki Maeda, Ziqi Yin, Zhishen Yang, Shuhei Kurita, Yusuke Oda, Ryoko Tokuhisa, Daisuke Kawahara, Naoaki Okazaki
arXiv: 2604.02048v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

Introduces Jagle, a large-scale Japanese multimodal post-training dataset for improving VLM performance on Japanese tasks.

Main Contributions

  • Constructed Jagle, the largest Japanese multimodal post-training dataset to date
  • Proposed methods for generating VQA pairs from heterogeneous data sources, including VLM-based generation, translation, and text rendering
  • Demonstrated that a model trained with Jagle performs strongly on Japanese tasks without degrading English performance
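The translation strategy among the contributions above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `translate_to_ja` is a hypothetical stand-in for whatever machine-translation system the pipeline actually uses, and the field names are assumed.

```python
def translate_to_ja(text: str) -> str:
    """Hypothetical MT stub; a real pipeline would call a
    translation model or service here."""
    lookup = {
        "What is in the image?": "画像には何が写っていますか？",
        "A cat sitting on a sofa.": "ソファに座っている猫です。",
    }
    return lookup.get(text, text)

def translate_vqa_pair(pair: dict) -> dict:
    """Convert an English VQA pair into a Japanese one, keeping the
    image reference unchanged so the visual grounding is preserved."""
    return {
        "image": pair["image"],
        "question": translate_to_ja(pair["question"]),
        "answer": translate_to_ja(pair["answer"]),
    }

ja_pair = translate_vqa_pair({
    "image": "cat.jpg",
    "question": "What is in the image?",
    "answer": "A cat sitting on a sofa.",
})
```

Only the question and answer text are translated; the image stays shared across languages, which is what lets existing English VQA resources be reused for Japanese post-training.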

Methodology

Collects heterogeneous source data, including images, image-text pairs, and PDF documents, and constructs VQA pairs via strategies such as VLM-based generation and translation; the resulting pairs are used for VLM post-training.
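The routing from heterogeneous sources to generation strategies might look like the sketch below. This is an assumed structure, not the paper's code: the record fields, strategy names, and Japanese prompt templates are all illustrative, and the VLM and OCR/rendering steps are stubbed out.

```python
def build_vqa_pairs(source: dict) -> list[dict]:
    """Dispatch one heterogeneous source record to a VQA-pair
    generation strategy (illustrative sketch)."""
    kind = source["kind"]
    if kind == "image":
        # Bare image: a VLM would generate the QA pair (stubbed here).
        return [{"image": source["path"],
                 "question": "この画像を説明してください。",
                 "answer": "<VLM-generated answer>",
                 "strategy": "vlm_generation"}]
    if kind == "image_text":
        # Image-text pair: reuse the existing caption as a grounded answer.
        return [{"image": source["path"],
                 "question": "画像には何が写っていますか？",
                 "answer": source["caption"],
                 "strategy": "caption_reuse"}]
    if kind == "pdf":
        # PDF document: treat each rendered page as an image and ask
        # text-reading questions against it.
        return [{"image": f"{source['path']}#page={p}",
                 "question": "このページの本文を書き起こしてください。",
                 "answer": "<page text>",
                 "strategy": "text_rendering"}
                for p in range(1, source["num_pages"] + 1)]
    raise ValueError(f"unknown source kind: {kind}")
```

Each branch tags its output with the strategy that produced it, so downstream filtering or per-strategy quality checks remain possible after the sources are merged into one training set.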

Original Abstract

Developing vision-language models (VLMs) that generalize across diverse tasks requires large-scale training datasets with diverse content. In English, such datasets are typically constructed by aggregating and curating numerous existing visual question answering (VQA) resources. However, this strategy does not readily extend to other languages, where VQA datasets remain limited in both scale and domain coverage, posing a major obstacle to building high-quality multilingual and non-English VLMs. In this work, we introduce Jagle, the largest Japanese multimodal post-training dataset to date, comprising approximately 9.2 million instances across diverse tasks. Rather than relying on existing VQA datasets, we collect heterogeneous source data, including images, image-text pairs, and PDF documents, and generate VQA pairs through multiple strategies such as VLM-based QA generation, translation, and text rendering. Experiments demonstrate that a 2.2B model trained with Jagle achieves strong performance on Japanese tasks, surpassing InternVL3.5-2B in average score across ten Japanese evaluation tasks and approaching within five points of Qwen3-VL-2B-Instruct. Furthermore, combining Jagle with FineVision does not degrade English performance; instead, it improves English performance compared to training with FineVision alone. To facilitate reproducibility and future research, we release the dataset, trained models, and code.

Tags

Multimodal Learning · Vision-Language Models · Japanese · Dataset

arXiv Categories

cs.CV