AutoViVQA: A Large-Scale Automatically Constructed Dataset for Vietnamese Visual Question Answering
AI Summary
The paper presents a large-scale, automatically constructed Vietnamese Visual Question Answering (VQA) dataset and explores transformer-based architectures for the task.
Key Contributions
- Construction of a large-scale Vietnamese VQA dataset
- Exploration of Vietnamese VQA with transformer-based architectures
- Comparison of automatic evaluation metrics under multilingual settings
Methodology
The work leverages transformer architectures pre-trained on both text and images, and systematically compares automatic evaluation metrics under multilingual settings.
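To make the metric comparison concrete, below is a minimal, self-contained sketch of one of the answer-overlap metrics the paper evaluates (token-level precision, recall, and F1). The function name, the whitespace tokenization, and the example answers are illustrative assumptions, not the paper's actual evaluation code.

```python
from collections import Counter

def token_f1(prediction: str, reference: str):
    """Token-level precision, recall, and F1 between a predicted
    answer and a reference answer (simple whitespace tokenization).
    This is an illustrative sketch, not the paper's implementation."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    # Multiset intersection counts shared tokens with multiplicity.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical Vietnamese VQA answers: "the black cat" vs. a longer prediction.
p, r, f = token_f1("con mèo màu đen", "con mèo đen")
```

In this example the prediction contains all three reference tokens plus one extra, giving recall 1.0 and precision 0.75. BLEU, METEOR, and CIDEr, also used in the paper, extend this idea with n-gram matching, synonym handling, and TF-IDF weighting, respectively.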
Original Abstract
Visual Question Answering (VQA) is a fundamental multimodal task that requires models to jointly understand visual and textual information. Early VQA systems relied heavily on language biases, motivating subsequent work to emphasize visual grounding and balanced datasets. With the success of large-scale pre-trained transformers for both text and vision domains -- such as PhoBERT for Vietnamese language understanding and Vision Transformers (ViT) for image representation learning -- multimodal fusion has achieved remarkable progress. For Vietnamese VQA, several datasets have been introduced to promote research in low-resource multimodal learning, including ViVQA, OpenViVQA, and the recently proposed ViTextVQA. These resources enable benchmarking of models that integrate linguistic and visual features in the Vietnamese context. Evaluation of VQA systems often employs automatic metrics originally designed for image captioning or machine translation, such as BLEU, METEOR, CIDEr, Recall, Precision, and F1-score. However, recent research suggests that large language models can further improve the alignment between automatic evaluation and human judgment in VQA tasks. In this work, we explore Vietnamese Visual Question Answering using transformer-based architectures, leveraging both textual and visual pre-training while systematically comparing automatic evaluation metrics under multilingual settings.