Are Multimodal Large Language Models Good Annotators for Image Tagging?
AI Summary
This paper analyzes the potential of MLLMs for image tagging and proposes the TagLLM framework to improve annotation quality.
Main Contributions
- Analyzes the capabilities and limitations of MLLMs for image tagging
- Proposes the TagLLM framework, comprising a candidate-label generation module and a label disambiguation module
- Experimentally demonstrates that TagLLM effectively improves MLLM annotation quality and narrows the gap with human annotation
Methodology
Proposes the TagLLM framework, which uses structured group-wise prompting to generate candidate labels and then disambiguates them by interactively calibrating the semantic concepts of categories in the prompts.
Original Abstract
Image tagging, a fundamental vision task, traditionally relies on human-annotated datasets to train multi-label classifiers, which incurs significant labor and costs. While Multimodal Large Language Models (MLLMs) offer promising potential to automate annotation, their capability to replace human annotators remains underexplored. This paper aims to analyze the gap between MLLM-generated and human annotations and to propose an effective solution that enables MLLM-based annotation to replace manual labeling. Our analysis of MLLM annotations reveals that, under a conservative estimate, MLLMs can reduce annotation cost to as low as one-thousandth of the human cost, mainly accounting for GPU usage, which is nearly negligible compared to manual efforts. Their annotation quality reaches about 50% to 80% of human performance, while achieving over 90% performance on downstream training tasks. Motivated by these findings, we propose TagLLM, a novel framework for image tagging, which aims to narrow the gap between MLLM-generated and human annotations. TagLLM comprises two components: candidate generation, which employs structured group-wise prompting to efficiently produce a compact candidate set that covers as many true labels as possible while reducing subsequent annotation workload; and label disambiguation, which interactively calibrates the semantic concepts of categories in the prompts and effectively refines the candidate labels. Extensive experiments show that TagLLM substantially narrows the gap between MLLM-generated and human annotations, especially in downstream training performance, where it closes about 60% to 80% of the difference.