FewMMBench: A Benchmark for Multimodal Few-Shot Learning
AI Summary
FewMMBench is a benchmark for evaluating the few-shot learning capabilities of multimodal large language models (MLLMs).
Main Contributions
- Proposes FewMMBench, a benchmark for evaluating the few-shot learning capabilities of MLLMs
- Covers a diverse set of multimodal understanding tasks, such as attribute recognition and temporal reasoning
- Evaluates 26 open-weight MLLMs under a range of settings
Methodology
Constructs a dataset spanning diverse multimodal tasks, then evaluates model performance under zero-shot, few-shot, and CoT-augmented few-shot settings.
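The three evaluation settings differ only in how the prompt is assembled: zero-shot uses the query alone, few-shot prepends interleaved image-text demonstrations, and the CoT variant adds a reasoning cue. A minimal sketch of such prompt assembly is below; the function name, the `<image>` placeholder convention, and the CoT cue are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of few-shot prompt assembly for MLLM evaluation.
# The "<image>" placeholder stands in for an image token; real MLLM APIs
# interleave actual image tensors or references at these positions.

def build_prompt(demos, query, use_cot=False):
    """Assemble an interleaved image-text prompt.

    demos: list of (question, answer) demonstration pairs, each assumed
           to be paired with one image; an empty list gives zero-shot.
    query: question for the test instance.
    use_cot: if True, append a chain-of-thought cue to the query.
    """
    parts = []
    for question, answer in demos:
        parts.append(f"<image>\nQ: {question}\nA: {answer}")
    cue = " Let's think step by step." if use_cot else ""
    # The final segment leaves the answer open for the model to complete.
    parts.append(f"<image>\nQ: {query}{cue}\nA:")
    return "\n\n".join(parts)

# Zero-shot: no demonstrations, just the query.
zero_shot = build_prompt([], "What color is the car?")

# 2-shot with CoT: two demonstrations plus a reasoning cue on the query.
few_shot_cot = build_prompt(
    [("How many dogs are shown?", "Two"),
     ("What is the weather?", "Sunny")],
    "What color is the car?",
    use_cot=True,
)
```

The same template covers all three settings, so performance differences can be attributed to the number of demonstrations and the presence of the CoT cue rather than to prompt formatting.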
Original Abstract
As multimodal large language models (MLLMs) advance in handling interleaved image-text data, assessing their few-shot learning capabilities remains an open challenge. In this paper, we introduce FewMMBench, a comprehensive benchmark designed to evaluate MLLMs under few-shot conditions, with a focus on In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting. Covering a diverse suite of multimodal understanding tasks, from attribute recognition to temporal reasoning, FewMMBench enables systematic analysis across task types, model families, and prompting strategies. We evaluate 26 open-weight MLLMs from six model families across zero-shot, few-shot, and CoT-augmented few-shot settings. Our findings reveal that instruction-tuned models exhibit strong zero-shot performance but benefit minimally, or even regress, with additional demonstrations or CoT reasoning. Retrieval-based demonstrations and increased context size also yield limited gains. These results highlight FewMMBench as a rigorous testbed for diagnosing and advancing few-shot capabilities in multimodal LLMs. The data is available at: https://huggingface.co/datasets/mustafaa/FewMMBench