Video-Only ToM: Enhancing Theory of Mind in Multimodal Large Language Models
AI Summary
This paper proposes the VisionToM framework, which improves the video-understanding Theory of Mind (ToM) abilities of MLLMs by intervening on visual representations.
Key Contributions
- Proposes the VisionToM framework for improving the video-understanding ToM abilities of MLLMs
- Intervenes on visual representations to steer the model toward the correct semantic targets, reducing reliance on linguistic priors
- Validates the method on the EgoToM benchmark and further improves performance on an open-ended generation task
Methodology
Intervention vectors are computed to align visual representations with the correct semantic targets, steering the model's attention across different layers of visual features and thereby improving ToM reasoning.
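The paper does not spell out the vector computation here, but interventions of this kind are commonly built as difference-of-means steering vectors added to a layer's hidden states. The sketch below is a generic illustration under that assumption; the function names, the difference-of-means recipe, and the scaling factor `alpha` are hypothetical and not taken from the paper.

```python
import numpy as np

def compute_intervention_vector(aligned, misaligned):
    """Difference-of-means direction between hidden states of examples
    whose visual features match the correct semantic target ("aligned")
    and those that do not ("misaligned")."""
    return aligned.mean(axis=0) - misaligned.mean(axis=0)

def apply_intervention(hidden, vec, alpha=1.0):
    """Shift hidden states toward the target direction at inference time."""
    return hidden + alpha * vec

# Toy demo with random activations (8 examples, 16-dim hidden states)
rng = np.random.default_rng(0)
aligned = rng.normal(loc=1.0, size=(8, 16))
misaligned = rng.normal(loc=-1.0, size=(8, 16))
vec = compute_intervention_vector(aligned, misaligned)

hidden = rng.normal(size=(4, 16))          # states for 4 new tokens
steered = apply_intervention(hidden, vec, alpha=0.5)
```

In practice a vector like this would be injected per layer (e.g. via a forward hook on the vision-language projection or attention output), which matches the paper's description of steering attention through different layers of visual features.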
Original Abstract
As large language models (LLMs) continue to advance, there is increasing interest in their ability to infer human mental states and demonstrate a human-like Theory of Mind (ToM). Most existing ToM evaluations, however, are centered on text-based inputs, while scenarios relying solely on visual information receive far less attention. This leaves a gap, since real-world human-AI interaction typically requires multimodal understanding. In addition, many current methods regard the model as a black box and rarely probe how its internal attention behaves in multiple-choice question answering (QA). The impact of LLM hallucinations on such tasks is also underexplored from an interpretability perspective. To address these issues, we introduce VisionToM, a vision-oriented intervention framework designed to strengthen task-aware reasoning. The core idea is to compute intervention vectors that align visual representations with the correct semantic targets, thereby steering the model's attention through different layers of visual features. This guidance reduces the model's reliance on spurious linguistic priors, leading to more reliable multimodal language model (MLLM) outputs and better QA performance. Experiments on the EgoToM benchmark, an egocentric, real-world video dataset for ToM with three multiple-choice QA settings, demonstrate that our method substantially improves the ToM abilities of MLLMs. Furthermore, results on an additional open-ended generation task show that VisionToM enables MLLMs to produce free-form explanations that more accurately capture agents' mental states, pushing machine-human collaboration toward greater alignment.