Ego: Embedding-Guided Personalization of Vision-Language Models
AI Summary
Proposes Ego, an efficient method for personalizing vision-language models. It uses the model's internal attention mechanism to extract visual tokens that represent a target concept, enabling the model to memorize the concept and describe it in new images.
Main Contributions
- Proposes a personalization method based on visual tokens
- Requires no additional training, improving efficiency and generality
- Validates effectiveness across a variety of personalization settings
Methodology
The model's internal attention mechanism is used to extract visual tokens that predominantly represent the personalized concept. These tokens serve as a memory of the concept, guiding the model to recall and describe it when captioning test images.
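The core idea can be sketched in a few lines. This is a hypothetical simplification, not the paper's implementation: it assumes we already have a per-token attention score over a reference image's visual tokens (in practice these would be aggregated from the VLM's internal attention maps), selects the top-k most-attended tokens as the concept "memory", and checks for the concept in a test image via cosine similarity. The function names, shapes, and the `threshold` value are illustrative assumptions.

```python
import numpy as np

def extract_concept_tokens(attn, visual_feats, k=4):
    """Select the k visual tokens that receive the most attention.

    attn: (num_visual,) attention weights over the reference image's
          visual tokens (hypothetical; stands in for aggregated VLM
          attention maps).
    visual_feats: (num_visual, dim) visual token embeddings.
    Returns a (k, dim) array serving as the concept "memory".
    """
    top = np.argsort(attn)[-k:]   # indices of the most-attended tokens
    return visual_feats[top]

def concept_present(memory, test_feats, threshold=0.8):
    """Decide whether the stored concept appears in a test image by
    comparing memory tokens with the test image's visual tokens."""
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    t = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = m @ t.T                # (k, num_test) cosine similarities
    return float(sims.max()) >= threshold
```

At test time, a positive match would trigger the model to inject the concept's name into its description; the thresholded cosine check above is only a stand-in for however the model actually conditions on the memory tokens.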
Original Abstract
AI assistants that support humans in daily life are becoming increasingly feasible, driven by the rapid advancements in multimodal language models. A key challenge lies in overcoming the generic nature of these models to deliver personalized experiences. Existing approaches to personalizing large vision-language models often rely on additional training stages, which limit generality and scalability, or on engineered pipelines with external pre-trained modules, which hinder deployment efficiency. In this work, we propose an efficient personalization method that leverages the model's inherent ability to capture personalized concepts. Specifically, we extract visual tokens that predominantly represent the target concept by utilizing the model's internal attention mechanisms. These tokens serve as a memory of that specific concept, enabling the model to recall and describe it when it appears in test images. We conduct a comprehensive and unified evaluation of our approach and SOTA methods across various personalization settings, including single-concept, multi-concept, and video personalization, demonstrating strong performance gains with minimal personalization overhead.