Multimodal Learning · Relevance: 9/10

Is Information Density Uniform when Utterances are Grounded on Perception and Discourse?

Matteo Gay, Coleman Haley, Mario Giulianelli, Edoardo Ponti
arXiv: 2602.14653v1 · Published: 2026-02-16 · Updated: 2026-02-16

AI Summary

The study finds that visual and discourse grounding make the distribution of information more uniform, supporting a context-sensitive formulation of the UID hypothesis.

Key Contributions

  • First computational test of the UID hypothesis in visually grounded settings
  • Experiments across many languages using multilingual vision-and-language models
  • Finding that visual and discourse grounding increase information uniformity

Methodology

Surprisal is estimated with multilingual vision-and-language models over image-caption and visual storytelling data, and the resulting information density profiles are analysed.
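As a minimal sketch of the underlying quantities (not the paper's exact metrics): a token's surprisal is its negative log probability given the context, global uniformity can be measured as the variance of surprisal within an utterance, and local uniformity as the mean squared difference between successive surprisals. The per-token probabilities below are made-up illustrative values, not outputs of any actual model:

```python
import math

def surprisals(token_probs):
    # Surprisal of each token: -log2 p(token | context), in bits
    return [-math.log2(p) for p in token_probs]

def global_uniformity(s):
    # Variance of surprisal across the utterance (lower = more uniform)
    mu = sum(s) / len(s)
    return sum((x - mu) ** 2 for x in s) / len(s)

def local_uniformity(s):
    # Mean squared difference between successive surprisals (lower = more uniform)
    return sum((a - b) ** 2 for a, b in zip(s, s[1:])) / (len(s) - 1)

# Hypothetical probabilities: a text-only model vs. one conditioned on an image.
# Grounding raises the probability of otherwise hard-to-predict tokens,
# flattening the surprisal profile.
text_only = [0.9, 0.05, 0.8, 0.1]
grounded  = [0.5, 0.4, 0.5, 0.3]

s_text, s_grounded = surprisals(text_only), surprisals(grounded)
print(global_uniformity(s_grounded) < global_uniformity(s_text))  # → True
print(local_uniformity(s_grounded) < local_uniformity(s_text))    # → True
```

In this toy example the grounded utterance has both lower surprisal variance and smaller jumps between adjacent tokens, i.e. it is more uniform both globally and locally.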

Original Abstract

The Uniform Information Density (UID) hypothesis posits that speakers are subject to a communicative pressure to distribute information evenly within utterances, minimising surprisal variance. While this hypothesis has been tested empirically, prior studies are limited exclusively to text-only inputs, abstracting away from the perceptual context in which utterances are produced. In this work, we present the first computational study of UID in visually grounded settings. We estimate surprisal using multilingual vision-and-language models over image-caption data in 30 languages and visual storytelling data in 13 languages, together spanning 11 families. We find that grounding on perception consistently smooths the distribution of information, increasing both global and local uniformity across typologically diverse languages compared to text-only settings. In visual narratives, grounding in both image and discourse contexts has additional effects, with the strongest surprisal reductions occurring at the onset of discourse units. Overall, this study takes a first step towards modelling the temporal dynamics of information flow in ecologically plausible, multimodal language use, and finds that grounded language exhibits greater information uniformity, supporting a context-sensitive formulation of UID.

Tags

Uniform Information Density · Multimodal Learning · Vision-Language · Surprisal · Grounded Language

arXiv Categories

cs.CL