Storing Less, Finding More: How Novelty Filtering Improves Cross-Modal Retrieval on Edge Cameras
AI Summary
Proposes a streaming cross-modal retrieval architecture for edge cameras that improves retrieval performance through novelty filtering.
Key Contributions
- An epsilon-net-based novelty filter for edge devices
- A cross-modal adapter and cloud-side re-ranker
- Validation of the architecture's effectiveness on real-world datasets
Methodology
An on-device novelty filter selects semantically novel key frames to build the embedding index; a cross-modal adapter and a cloud-side re-ranker then compensate for the compact encoder and improve retrieval accuracy.
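A minimal sketch of what such a single-pass epsilon-net filter might look like. This is an illustrative implementation, not the paper's code: the class name, the cosine-distance criterion, and the brute-force scan over retained embeddings are all assumptions.

```python
import numpy as np

class EpsilonNetFilter:
    """Hypothetical single-pass streaming novelty filter.

    A frame embedding is retained only if its cosine distance to every
    previously retained embedding exceeds epsilon, so the retained set
    approximates an epsilon-net over the embedding stream.
    """

    def __init__(self, epsilon: float):
        self.epsilon = epsilon
        self.kept: list[np.ndarray] = []  # unit-normalized retained embeddings

    def offer(self, emb: np.ndarray) -> bool:
        """Return True and retain emb if it is novel; else discard it."""
        emb = emb / np.linalg.norm(emb)
        for k in self.kept:
            # cosine distance = 1 - cosine similarity (unit vectors)
            if 1.0 - float(k @ emb) < self.epsilon:
                return False  # too close to an existing frame: redundant
        self.kept.append(emb)
        return True
```

Because each frame is compared only against already-retained frames and never revisited, the filter runs in one pass over the stream, which is what makes it feasible on an always-on edge device.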
原文摘要
Always-on edge cameras generate continuous video streams where redundant frames degrade cross-modal retrieval by crowding correct results out of top-k search. This paper presents a streaming retrieval architecture: an on-device epsilon-net filter retains only semantically novel frames, building a denoised embedding index; a cross-modal adapter and cloud re-ranker compensate for the compact encoder's weak alignment. A single-pass streaming filter outperforms offline alternatives (k-means, farthest-point, uniform, random) across eight vision-language models (8M-632M) on two egocentric datasets (AEA, EPIC-KITCHENS). Combined, the architecture reaches 45.6% Hit@5 on held-out data using an 8M on-device encoder at an estimated 2.7 mW.
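The two-stage retrieval path described above (cheap top-k search over the denoised on-device index, then a stronger cloud-side re-ranker over the shortlist) can be sketched as follows. The function name, the cosine first stage, and the `rerank_fn` callback are illustrative assumptions, not the paper's API.

```python
import numpy as np

def retrieve(query_emb, index_embs, rerank_fn, k=50, final_k=5):
    """Hypothetical two-stage cross-modal retrieval sketch.

    Stage 1: cosine top-k over the denoised edge index (cheap).
    Stage 2: a stronger cloud scorer re-ranks the shortlist (expensive,
    but only runs on k candidates, not the whole index).
    """
    q = query_emb / np.linalg.norm(query_emb)
    X = index_embs / np.linalg.norm(index_embs, axis=1, keepdims=True)
    candidates = np.argsort(-(X @ q))[:k]          # stage 1 shortlist
    scores = np.array([rerank_fn(i) for i in candidates])
    return candidates[np.argsort(-scores)[:final_k]]  # stage 2 order
```

The shortlist structure is why the novelty filter matters for Hit@5: if near-duplicate frames fill the stage-1 top-k, a correct result can be crowded out before the re-ranker ever sees it.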