Multimodal Learning · Relevance: 9/10

Explicit Logic Channel for Validation and Enhancement of MLLMs on Zero-Shot Tasks

Mei Chee Leong, Ying Gu, Hui Li Tan, Liyuan Li, Nancy Chen
arXiv: 2603.11689v1 · Published: 2026-03-12 · Updated: 2026-03-12

AI Summary

Proposes an Explicit Logic Channel for validating and enhancing the performance of multimodal large language models on zero-shot tasks.

Key Contributions

  • Proposes an Explicit Logic Channel (ELC) for validating and enhancing MLLMs.
  • Proposes a Consistency Rate (CR) for cross-channel validation and model selection.
  • Experiments demonstrate the effectiveness of ELC and CR on VLC tasks.

Methodology

Constructs an Explicit Logic Channel in parallel with the MLLM, performing validation and enhancement via an LLM, a VFM, and logical reasoning.

Original Abstract

Frontier Multimodal Large Language Models (MLLMs) exhibit remarkable capabilities in Visual-Language Comprehension (VLC) tasks. However, they are often deployed as zero-shot solutions to new tasks in a black-box manner. Validating and understanding the behavior of these models becomes important for applying them to new tasks. We propose an Explicit Logic Channel, in parallel with the black-box model channel, to perform explicit logical reasoning for model validation, selection, and enhancement. The frontier MLLM, encapsulating latent vision-language knowledge, can be considered an Implicit Logic Channel. The proposed Explicit Logic Channel, mimicking human logical reasoning, incorporates an LLM, a VFM, and logical reasoning with probabilistic inference for factual, counterfactual, and relational reasoning over explicit visual evidence. A Consistency Rate (CR) is proposed for cross-channel validation and model selection, even without ground-truth annotations. Additionally, cross-channel integration further improves performance over MLLMs in zero-shot tasks, grounded in explicit visual evidence to enhance trustworthiness. Comprehensive experiments are conducted on two representative VLC tasks, i.e., MC-VQA and HC-REC, across three challenging benchmarks, with 11 recent open-source MLLMs from 4 frontier families. Our systematic evaluations demonstrate the effectiveness of the proposed ELC and CR for model validation, selection, and improvement of MLLMs with enhanced explainability and trustworthiness.
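The abstract uses the Consistency Rate for annotation-free model selection. A minimal sketch of how such a metric could work, assuming (since the paper's exact formula is not given in this summary) that CR is simply the fraction of samples on which the implicit channel (MLLM) and the explicit logic channel agree; all names and example answers below are hypothetical:

```python
def consistency_rate(mllm_preds, elc_preds):
    """Fraction of samples where both channels give the same answer.

    Assumed definition: agreement rate between the two channels'
    per-sample predictions; no ground-truth labels are needed.
    """
    if len(mllm_preds) != len(elc_preds):
        raise ValueError("channels must score the same samples")
    agree = sum(m == e for m, e in zip(mllm_preds, elc_preds))
    return agree / len(mllm_preds)


# Hypothetical model selection without ground truth: pick the MLLM
# whose answers agree most often with the explicit logic channel.
mllm_answers = {
    "model_a": ["B", "C", "A", "D"],  # MC-VQA-style choice letters
    "model_b": ["B", "A", "A", "C"],
}
elc_answers = ["B", "C", "A", "D"]

best = max(
    mllm_answers,
    key=lambda m: consistency_rate(mllm_answers[m], elc_answers),
)
```

In this toy example `model_a` matches the explicit channel on all four samples while `model_b` matches on only two, so the selection rule picks `model_a`; the actual paper's CR may weight or aggregate agreement differently.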

Tags

MLLM VLC Reasoning Explainability Trustworthiness

arXiv Categories

cs.AI