SUPERGLASSES: Benchmarking Vision Language Models as Intelligent Agents for AI Smart Glasses
AI Summary
Introduces SUPERGLASSES, the first VQA benchmark for smart glasses, and builds SUPERLENS, a retrieval-augmented smart glasses agent.
Main Contributions
- Construct SUPERGLASSES, the first VQA benchmark dataset built on real-world smart glasses data
- Evaluate 26 VLMs on the benchmark, revealing the limitations of existing models
- Propose SUPERLENS, a smart glasses agent that improves VQA performance through retrieval augmentation
Methodology
Builds a real-world smart glasses VQA dataset and performs retrieval-augmented answer generation via automatic object detection, query decoupling, and multimodal web search.
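The three-stage pipeline described above (object detection, query decoupling, web search) can be sketched as follows. All function names, data structures, and the stubbed detector/search results are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of a retrieval-augmented smart-glasses VQA pipeline,
# loosely following the SUPERLENS stages described in the summary above.
# All names and stub outputs are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


def detect_objects(image_id: str) -> list[Detection]:
    # Stand-in for automatic object detection on the egocentric image.
    return [Detection("espresso machine", 0.92), Detection("mug", 0.71)]


def decouple_query(question: str, detections: list[Detection]) -> tuple[str, str]:
    # Split the user question into (visual target, external-knowledge query):
    # ground the deictic reference ("this") to the most confident detection.
    target = max(detections, key=lambda d: d.confidence).label
    knowledge_query = question.replace("this", target)
    return target, knowledge_query


def web_search(query: str) -> list[str]:
    # Stand-in for multimodal web search; returns retrieved text snippets.
    return [f"Search result for: {query}"]


def answer(question: str, image_id: str) -> str:
    detections = detect_objects(image_id)
    target, query = decouple_query(question, detections)
    evidence = web_search(query)
    # A real agent would feed (question, image, evidence) to a VLM here.
    return f"[{target}] answer grounded in {len(evidence)} retrieved snippet(s)"
```

The key design point mirrored here is that object identification precedes retrieval: the knowledge query is only formed after the object of interest is resolved, which is the challenge the benchmark emphasizes.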
Original Abstract
The rapid advancement of AI-powered smart glasses, one of the hottest wearable devices, has unlocked new frontiers for multimodal interaction, with Visual Question Answering (VQA) over external knowledge sources emerging as a core application. Existing Vision Language Models (VLMs) adapted to smart glasses are typically trained and evaluated on traditional multimodal datasets; however, these datasets lack the variety and realism needed to reflect smart glasses usage scenarios and diverge from their specific challenges, where accurately identifying the object of interest must precede any external knowledge retrieval. To bridge this gap, we introduce SUPERGLASSES, the first comprehensive VQA benchmark built on real-world data entirely collected by smart glasses devices. SUPERGLASSES comprises 2,422 egocentric image-question pairs spanning 14 image domains and 8 query categories, enriched with full search trajectories and reasoning annotations. We evaluate 26 representative VLMs on this benchmark, revealing significant performance gaps. To address the limitations of existing models, we further propose SUPERLENS, a multimodal smart glasses agent that enables retrieval-augmented answer generation by integrating automatic object detection, query decoupling, and multimodal web search. Our agent achieves state-of-the-art performance, surpassing GPT-4o by 2.19 percent, and highlights the need for task-specific solutions in smart glasses VQA scenarios.