LLM Reasoning relevance: 6/10

Mitigating the Likelihood Paradox in Flow-based OOD Detection via Entropy Manipulation

Donghwan Kim, Hyunsoo Yoon
arXiv: 2602.09581v1 · Published: 2026-02-10 · Updated: 2026-02-10

AI Summary

Mitigates the likelihood paradox in flow-based OOD detection by manipulating input entropy, improving OOD detection performance.

Key Contributions

  • Proposes an entropy-manipulation method based on semantic similarity
  • Provides a theoretical analysis showing the method widens the likelihood gap between ID and OOD samples
  • Validates the method's effectiveness on standard benchmarks

Methodology

The input's semantic similarity to an in-distribution memory bank is computed, and entropy control is applied accordingly: inputs less similar to the memory bank receive stronger perturbations, which lowers the likelihood the flow model assigns to OOD samples.
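The procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the similarity measure (max cosine similarity to the bank), the Gaussian noise form, and the linear scaling rule `sigma = max_sigma * (1 - sim)` are all assumptions chosen to show the idea that lower similarity triggers stronger perturbation; the actual flow model's `log_prob` is omitted.

```python
import numpy as np

def similarity_to_bank(feat, memory_bank):
    """Semantic similarity of a feature vector to an ID memory bank.
    Illustrative choice: max cosine similarity over bank entries."""
    feat = feat / np.linalg.norm(feat)
    bank = memory_bank / np.linalg.norm(memory_bank, axis=1, keepdims=True)
    return float(np.max(bank @ feat))

def entropy_perturb(x, sim, rng, max_sigma=0.3):
    """Entropy manipulation via additive Gaussian noise (assumed form):
    noise scale grows as similarity falls, so inputs far from the
    ID memory bank (likely OOD) are perturbed more strongly."""
    sigma = max_sigma * (1.0 - np.clip(sim, 0.0, 1.0))
    return x + rng.normal(0.0, sigma, size=x.shape)

rng = np.random.default_rng(0)
bank = np.eye(3)                                  # toy ID memory bank
feat_id = np.array([1.0, 0.0, 0.0])               # matches a bank entry
feat_ood = np.array([1.0, 1.0, 1.0]) / np.sqrt(3) # far from all entries

s_id = similarity_to_bank(feat_id, bank)    # 1.0 -> zero perturbation
s_ood = similarity_to_bank(feat_ood, bank)  # ~0.577 -> nonzero noise

x = np.zeros(4)
x_id = entropy_perturb(x, s_id, rng)        # unchanged
x_ood = entropy_perturb(x, s_ood, rng)      # noisier input
```

The perturbed input would then be scored with the pretrained flow's log-likelihood; no retraining of the density model is needed, since only the input is modified.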

Original Abstract

Deep generative models that can tractably compute input likelihoods, including normalizing flows, often assign unexpectedly high likelihoods to out-of-distribution (OOD) inputs. We mitigate this likelihood paradox by manipulating input entropy based on semantic similarity, applying stronger perturbations to inputs that are less similar to an in-distribution memory bank. We provide a theoretical analysis showing that entropy control increases the expected log-likelihood gap between in-distribution and OOD samples in favor of the in-distribution, and we explain why the procedure works without any additional training of the density model. We then evaluate our method against likelihood-based OOD detectors on standard benchmarks and find consistent AUROC improvements over baselines, supporting our explanation.

Tags

OOD detection · Generative models · Normalizing flows · Entropy manipulation

arXiv Categories

cs.LG cs.AI