LLM Reasoning relevance: 6/10

FoMo-X: Modular Explainability Signals for Outlier Detection Foundation Models

Simon Klüttermann, Tim Katzke, Phuong Huong Nguyen, Emmanuel Müller
arXiv: 2603.17570v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

The FoMo-X framework improves the interpretability and reliability of outlier detection foundation models through modular explainability signals.

Key Contributions

  • Proposes FoMo-X, a modular framework that enhances the explainability of outlier detection foundation models.
  • Designs a Severity Head and an Uncertainty Head, providing risk tiers and confidence measures.
  • Validates the effectiveness and efficiency of FoMo-X on real-world and synthetic datasets.

Methodology

Auxiliary diagnostic heads trained offline convert the embeddings of a pretrained PFN model into interpretable risk tiers and uncertainty measures, enabling single-pass inference.
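The diagnostic-head idea can be sketched in a few lines: both heads are lightweight projections of the same frozen backbone embedding, evaluated in one deterministic pass. This is an illustrative toy, not the paper's implementation; all names (`SeverityHead`, `UncertaintyHead`), weights, and thresholds are invented for the example.

```python
import math

def linear(weights, bias, embedding):
    """Single linear projection of a frozen backbone embedding."""
    return sum(w * x for w, x in zip(weights, embedding)) + bias

class SeverityHead:
    """Discretizes a deviation score into interpretable risk tiers."""
    def __init__(self, weights, bias, thresholds=(0.5, 1.5)):
        self.weights, self.bias, self.thresholds = weights, bias, thresholds
    def __call__(self, embedding):
        score = linear(self.weights, self.bias, embedding)
        tier = sum(score > t for t in self.thresholds)  # 0, 1, or 2
        return ("low", "medium", "high")[tier]

class UncertaintyHead:
    """Maps an embedding to a confidence value in [0, 1] via a sigmoid."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias
    def __call__(self, embedding):
        return 1.0 / (1.0 + math.exp(-linear(self.weights, self.bias, embedding)))

# Single deterministic pass: both heads read the same frozen embedding.
emb = [0.8, -0.2, 1.1]
severity = SeverityHead([1.0, 0.5, 0.3], 0.0)
uncertainty = UncertaintyHead([0.2, -0.4, 0.1], 0.0)
print(severity(emb), round(uncertainty(emb), 3))  # → medium 0.587
```

Since the backbone stays frozen, adding a head costs only one extra matrix product at inference, which is why the paper can report negligible overhead.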

Original Abstract

Tabular foundation models, specifically Prior-Data Fitted Networks (PFNs), have revolutionized outlier detection (OD) by enabling unsupervised zero-shot adaptation to new datasets without training. However, despite their predictive power, these models typically function as opaque black boxes, outputting scalar outlier scores that lack the operational context required for safety-critical decision-making. Existing post-hoc explanation methods are often computationally prohibitive for real-time deployment or fail to capture the epistemic uncertainty inherent in zero-shot inference. In this work, we introduce FoMo-X, a modular framework that equips OD foundation models with intrinsic, lightweight diagnostic capabilities. We leverage the insight that the frozen embeddings of a pretrained PFN backbone already encode rich, context-conditioned relational information. FoMo-X attaches auxiliary diagnostic heads to these embeddings, trained offline using the same generative simulator prior as the backbone. This allows us to distill computationally expensive properties, such as Monte Carlo dropout based epistemic uncertainty, into a deterministic, single-pass inference. We instantiate FoMo-X with two novel heads: a Severity Head that discretizes deviations into interpretable risk tiers, and an Uncertainty Head that provides calibrated confidence measures. Extensive evaluation on synthetic and real-world benchmarks (ADBench) demonstrates that FoMo-X recovers ground-truth diagnostic signals with high fidelity and negligible inference overhead. By bridging the gap between foundation model performance and operational explainability, FoMo-X offers a scalable path toward trustworthy, zero-shot outlier detection.
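The distillation target mentioned in the abstract can also be made concrete: under Monte Carlo dropout, the spread of outlier scores across stochastic forward passes serves as an epistemic-uncertainty signal, which the Uncertainty Head then learns to predict in one deterministic pass. The sketch below is a stdlib-only toy; the function names and the linear "model" are assumptions, not the paper's code.

```python
import random
import statistics

def score_with_dropout(weights, embedding, p=0.5, rng=random):
    """One stochastic pass: each embedding dimension is dropped with prob p."""
    return sum(w * x for w, x in zip(weights, embedding) if rng.random() > p)

def mc_dropout_uncertainty(weights, embedding, n_samples=1000, seed=0):
    """Epistemic uncertainty proxy: std. dev. of scores over n stochastic passes.

    This is the expensive quantity the Uncertainty Head distills away:
    computing it needs n_samples forward passes, the head needs one.
    """
    rng = random.Random(seed)
    scores = [score_with_dropout(weights, embedding, rng=rng)
              for _ in range(n_samples)]
    return statistics.stdev(scores)

u = mc_dropout_uncertainty([1.0, 0.5], [2.0, 1.0])
print(u)  # positive spread, since dropout perturbs the score
```

Offline, such targets can be generated cheaply from the same simulator prior used to pretrain the backbone, so no labeled anomalies are needed to fit the heads.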

Tags

Outlier Detection · Explainable AI · Foundation Models · Zero-Shot Learning

arXiv Categories

cs.LG cs.AI