Multimodal Learning Relevance: 9/10

Can Vision Replace Text in Working Memory? Evidence from Spatial n-Back in Vision-Language Models

Sichu Liang, Hongyu Zhu, Wenwen Wang, Deyu Zhou
arXiv: 2602.04355v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

This paper compares the working-memory performance of vision-language models on spatial n-back tasks presented in text versus image form, and finds that the text format yields reliably better performance.

Key Contributions

  • Evaluated the performance gap between visual and textual spatial n-back tasks in vision-language models
  • Analyzed the models' error patterns and sources of interference across task conditions
  • Revealed how task parameters shape the models' working-memory performance

Methodology

Experiments run Qwen2.5 and Qwen2.5-VL on spatial n-back tasks rendered as matched text and image grids, analyzing accuracy, d' (sensitivity), and trial-wise log-probability evidence.
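The d' metric used here is the standard signal-detection sensitivity index, z(hit rate) − z(false-alarm rate). A minimal sketch of how it can be computed (a generic illustration, not the paper's code; the log-linear correction for extreme rates is one common convention):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction keeps rates away from exactly 0 or 1,
    so the inverse-normal transform stays finite.
    """
    def rate(k, n):
        return (k + 0.5) / (n + 1)  # log-linear correction

    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 40 targets (32 hits), 80 non-targets (8 false alarms)
print(d_prime(32, 8, 8, 72))
```

Equal hit and false-alarm rates give d' = 0 (chance sensitivity); higher d' means the model better separates n-back matches from non-matches.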

Original Abstract

Working memory is a central component of intelligent behavior, providing a dynamic workspace for maintaining and updating task-relevant information. Recent work has used n-back tasks to probe working-memory-like behavior in large language models, but it is unclear whether the same probe elicits comparable computations when information is carried in a visual rather than textual code in vision-language models. We evaluate Qwen2.5 and Qwen2.5-VL on a controlled spatial n-back task presented as matched text-rendered or image-rendered grids. Across conditions, models show reliably higher accuracy and d' with text than with vision. To interpret these differences at the process level, we use trial-wise log-probability evidence and find that nominal 2/3-back often fails to reflect the instructed lag and instead aligns with a recency-locked comparison. We further show that grid size alters recent-repeat structure in the stimulus stream, thereby changing interference and error patterns. These results motivate computation-sensitive interpretations of multimodal working memory.
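The abstract's point that grid size alters recent-repeat structure can be illustrated with a small simulation (a hypothetical sketch, not the paper's stimulus generator): on a smaller grid, chance repeats at every lag are more frequent, so 1-back "recency" repeats — which act as lures under a nominal 2-back instruction — occur more often.

```python
import random

def nback_stream(grid_size, length, n, seed=0):
    """Random stream of cell positions on a grid_size x grid_size grid.

    Returns the stream plus the count of instructed n-back matches and
    the count of 1-back ("recency") repeats, which behave as lures
    whenever n > 1.
    """
    rng = random.Random(seed)
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    stream = [rng.choice(cells) for _ in range(length)]
    n_back_matches = sum(stream[i] == stream[i - n] for i in range(n, length))
    recency_repeats = sum(stream[i] == stream[i - 1] for i in range(1, length))
    return stream, n_back_matches, recency_repeats

# A 3x3 grid has a 1/9 chance repeat at any lag; a 5x5 grid only 1/25,
# so shrinking the grid inflates both matches and recency lures.
for g in (3, 5):
    _, matches, lures = nback_stream(g, 200, n=2, seed=1)
    print(f"grid {g}x{g}: 2-back matches={matches}, recency lures={lures}")
```

This is one way to see why a recency-locked comparison strategy, rather than the instructed lag, would interact with grid size to change interference and error patterns.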

Tags

working memory, vision-language models, n-back, multimodal, Qwen

arXiv Categories

cs.CL