Multimodal Models Meet Presentation Attack Detection on ID Documents
AI Summary
This work studies the application of multimodal models to presentation attack detection (PAD) on ID documents; however, the experimental results show that their performance falls short.
Key Contributions
- Explores the application of multimodal models to PAD for identity verification
- Uses the pre-trained models Paligemma, Llava, and Qwen
- Combines the visual and textual modalities
Methodology
Pre-trained multimodal models are used to fuse visual embeddings with textual metadata for PAD on ID documents.
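The fusion idea can be illustrated with a minimal late-fusion sketch: concatenate a visual embedding (as a backbone like Paligemma, Llava, or Qwen would produce) with an embedding of the textual metadata (document type, issuer, date), then score the fused vector with a classification head. Everything below is a toy illustration, not the paper's actual pipeline: the dimensions, the hash-based metadata embedding, and the untrained linear head are all hypothetical.

```python
# Toy late-fusion sketch for ID-document PAD (not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

VIS_DIM, TXT_DIM = 8, 4  # toy sizes; real backbones use hundreds of dims


def embed_metadata(doc_type: str, issuer: str, date: str) -> np.ndarray:
    """Map each metadata field into a fixed-size toy embedding via hashing."""
    vec = np.zeros(TXT_DIM)
    for i, field in enumerate((doc_type, issuer, date)):
        vec[i % TXT_DIM] += (hash(field) % 1000) / 1000.0
    return vec


def pad_score(visual_emb: np.ndarray, metadata_emb: np.ndarray,
              w: np.ndarray, b: float) -> float:
    """Late fusion: concatenate both modalities, apply linear head + sigmoid."""
    fused = np.concatenate([visual_emb, metadata_emb])
    return float(1.0 / (1.0 + np.exp(-(fused @ w + b))))


# Dummy inputs standing in for a real feature-extraction pipeline.
visual_emb = rng.standard_normal(VIS_DIM)            # stand-in image features
metadata_emb = embed_metadata("ID card", "DE", "2021-05-01")
w = rng.standard_normal(VIS_DIM + TXT_DIM) * 0.1     # untrained toy weights
score = pad_score(visual_emb, metadata_emb, w, 0.0)
print(f"attack probability: {score:.3f}")            # value in (0, 1)
```

In a real system, the linear head would be trained on labeled bona-fide/attack samples, and the embeddings would come from the multimodal model rather than random vectors and hashes.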
Original Abstract
The integration of multimodal models into Presentation Attack Detection (PAD) for ID Documents represents a significant advancement in biometric security. Traditional PAD systems rely solely on visual features, which often fail to detect sophisticated spoofing attacks. This study explores the combination of visual and textual modalities by utilizing pre-trained multimodal models, such as Paligemma, Llava, and Qwen, to enhance the detection of presentation attacks on ID Documents. This approach merges deep visual embeddings with contextual metadata (e.g., document type, issuer, and date). However, experimental results indicate that these models struggle to accurately detect PAD on ID Documents.