Mitigating the Likelihood Paradox in Flow-based OOD Detection via Entropy Manipulation
AI Summary
Mitigates the likelihood paradox in flow-based OOD detection via entropy manipulation, improving OOD detection performance.
Main Contributions
- Proposes an entropy manipulation method based on semantic similarity
- Provides a theoretical analysis showing that the method widens the likelihood gap between ID and OOD samples
- Validates the method's effectiveness on standard benchmarks
Methodology
The semantic similarity between an input and an in-distribution memory bank is computed, and entropy control is applied to the input in proportion to its dissimilarity, reducing the likelihood the flow assigns to OOD samples.
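Below is a minimal sketch of one way this pipeline could look, assuming cosine similarity to the nearest memory-bank feature and additive Gaussian noise as the entropy-increasing perturbation. The function names, noise schedule, and the toy feature extractor and flow in the usage example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def similarity_to_memory_bank(feat, memory_bank):
    """Cosine similarity between a feature vector and its nearest memory-bank entry."""
    feat = feat / (np.linalg.norm(feat) + 1e-8)
    bank = memory_bank / (np.linalg.norm(memory_bank, axis=1, keepdims=True) + 1e-8)
    return float(np.max(bank @ feat))

def entropy_scaled_input(x, similarity, max_noise=0.3, rng=None):
    """Perturb the input with noise whose scale grows as similarity drops.

    Additive Gaussian noise is used here purely as a stand-in for the
    paper's entropy manipulation; the actual perturbation may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise_scale = max_noise * (1.0 - np.clip(similarity, 0.0, 1.0))
    return x + rng.normal(scale=noise_scale, size=x.shape)

def ood_score(x, memory_bank, feature_fn, log_likelihood_fn):
    """Score an input by flow log-likelihood after similarity-guided perturbation."""
    sim = similarity_to_memory_bank(feature_fn(x), memory_bank)
    x_perturbed = entropy_scaled_input(x, sim)
    return log_likelihood_fn(x_perturbed)  # lower score suggests OOD

# Toy usage: flattened pixels as "features", a standard-normal log-density as the "flow".
rng = np.random.default_rng(0)
memory_bank = rng.normal(size=(100, 32 * 32))        # hypothetical ID feature bank
feature_fn = lambda x: x.reshape(-1)
log_likelihood_fn = lambda x: float(-0.5 * np.sum(x ** 2))
x_in = rng.normal(size=(32, 32))
print(ood_score(x_in, memory_bank, feature_fn, log_likelihood_fn))
```

Note that no part of this step retrains the density model; the perturbation only changes the input before its likelihood is evaluated, which is why the procedure can be applied to a frozen flow.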
Original Abstract
Deep generative models that can tractably compute input likelihoods, including normalizing flows, often assign unexpectedly high likelihoods to out-of-distribution (OOD) inputs. We mitigate this likelihood paradox by manipulating input entropy based on semantic similarity, applying stronger perturbations to inputs that are less similar to an in-distribution memory bank. We provide a theoretical analysis showing that entropy control increases the expected log-likelihood gap between in-distribution and OOD samples in favor of the in-distribution, and we explain why the procedure works without any additional training of the density model. We then evaluate our method against likelihood-based OOD detectors on standard benchmarks and find consistent AUROC improvements over baselines, supporting our explanation.
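The abstract reports AUROC improvements over likelihood-based baselines. As a hedged illustration of how that metric is typically computed from per-sample log-likelihoods (with in-distribution as the positive class), assuming scikit-learn and synthetic score distributions whose numbers are made up purely for demonstration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_from_log_likelihoods(ll_id, ll_ood):
    """AUROC with in-distribution as the positive class and log-likelihood as the score."""
    scores = np.concatenate([ll_id, ll_ood])
    labels = np.concatenate([np.ones_like(ll_id), np.zeros_like(ll_ood)])
    return roc_auc_score(labels, scores)

# Synthetic illustration: a wider ID/OOD log-likelihood gap yields a higher AUROC.
rng = np.random.default_rng(0)
ll_id = rng.normal(loc=-1000.0, scale=50.0, size=500)    # hypothetical ID log-likelihoods
ll_ood = rng.normal(loc=-1100.0, scale=80.0, size=500)   # hypothetical OOD log-likelihoods
print(auroc_from_log_likelihoods(ll_id, ll_ood))
```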