Asymmetric Idiosyncrasies in Multimodal Models
AI Summary
This paper studies the stylistic discrepancies between caption models and text-to-image models and proposes a new method for quantifying them.
Key Contributions
- Proposes a classification-based framework for quantifying the stylistic signatures of caption models.
- Finds that these stylistic signatures largely disappear in the generated images.
- Analyzes why the generated images fail to preserve key caption information, such as the level of detail, emphasis on color and texture, and the distribution of objects within a scene.
Methodology
Train neural networks to predict the originating caption model, compare classification accuracy on text versus on images, and analyze the resulting cross-modal discrepancy.
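The paper trains neural networks for this prediction task; as a minimal sketch of the text-side probe (not the authors' actual architecture), the snippet below fits a TF-IDF character n-gram classifier to predict the source caption model. The captions and model names are hypothetical placeholders.

```python
# Sketch of the text-side probe: predict which caption model wrote a caption.
# Data and model names are hypothetical placeholders; the paper itself
# trains neural networks rather than this linear stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = [
    ("A close-up photograph of a ripe red apple on a rustic wooden table.", "model_a"),
    ("An intricately detailed scene of a cat curled up on a sunlit windowsill.", "model_a"),
    ("A vibrant, richly textured painting of sailboats at golden hour.", "model_a"),
    ("red apple on wooden table", "model_b"),
    ("cat sleeping on windowsill", "model_b"),
    ("sailboats on water at sunset", "model_b"),
]
texts, labels = map(list, zip(*data))

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=1/3, random_state=0, stratify=labels
)

# Character n-grams pick up stylistic cues such as punctuation, phrasing,
# and verbosity -- the kind of signature the paper's classifier exploits.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("caption-source accuracy:", accuracy_score(y_test, preds))
```

Near-perfect accuracy on real caption data would mirror the paper's 99.70% text-classification result; the interesting comparison is how much lower the same labels can be predicted from the generated images.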
Original Abstract
In this work, we study idiosyncrasies in the caption models and their downstream impact on text-to-image models. We design a systematic analysis: given either a generated caption or the corresponding image, we train neural networks to predict the originating caption model. Our results show that text classification yields very high accuracy (99.70%), indicating that captioning models embed distinctive stylistic signatures. In contrast, these signatures largely disappear in the generated images, with classification accuracy dropping to at most 50% even for the state-of-the-art Flux model. To better understand this cross-modal discrepancy, we further analyze the data and find that the generated images fail to preserve key variations present in captions, such as differences in the level of detail, emphasis on color and texture, and the distribution of objects within a scene. Overall, our classification-based framework provides a novel methodology for quantifying both the stylistic idiosyncrasies of caption models and the prompt-following ability of text-to-image systems.
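The abstract does not detail the image-side classifier; a minimal sketch, assuming generated images are grouped into one folder per caption model (an `ImageFolder` layout invented here for illustration), is to fine-tune a pretrained ResNet-18 on the same source-model labels:

```python
# Sketch of the image-side probe: predict the source caption model from
# the generated image. The "generated_images/" layout (one subfolder per
# caption model) is a hypothetical assumption, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("generated_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained backbone with a fresh head sized to the number of caption models.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass shown; train for multiple epochs in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The gap between the text probe's accuracy (99.70%) and the image probe's ceiling (at most 50%) is exactly the cross-modal discrepancy the paper quantifies.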