LLM Reasoning relevance: 6/10

A principled framework for uncertainty decomposition in TabPFN

Sandra Fortini, Kenyon Ng, Sonia Petrone, Judith Rousseau, Susan Wei
arXiv: 2602.04596v1 Published: 2026-02-04 Updated: 2026-02-04

AI Summary

This paper proposes an uncertainty decomposition framework for TabPFN and validates its effectiveness.

Main Contributions

  • Proposes an uncertainty decomposition method for TabPFN
  • Proves a predictive CLT in the supervised setting
  • Derives variance estimators that are fast to compute

Methodology

Casts uncertainty decomposition as a Bayesian predictive inference problem, uses a predictive CLT to derive variance estimators, and validates them experimentally.
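The abstract states that the variance estimators are "determined by the volatility of predictive updates along the context". A minimal sketch of that idea, assuming the estimator aggregates squared increments of the predictive mean as the context grows (the paper's exact estimator and scaling may differ; `predictive_volatility_variance` is a hypothetical name):

```python
import numpy as np

def predictive_volatility_variance(pred_means):
    """Variance proxy from the volatility of predictive updates.

    pred_means: the model's predictive mean p_k after conditioning on the
    first k context points. Under a (quasi-)martingale view, the squared
    update increments (p_{k+1} - p_k)^2 accumulate into a variance
    estimate for the final prediction. This is an illustrative sketch,
    not the paper's estimator.
    """
    p = np.asarray(pred_means, dtype=float)
    increments = np.diff(p)           # predictive updates along the context
    return float(np.sum(increments ** 2))

# Toy usage: a predictive mean that settles down as the context grows.
p_seq = [0.50, 0.62, 0.58, 0.60, 0.595]
var_hat = predictive_volatility_variance(p_seq)
band_half_width = 1.96 * np.sqrt(var_hat)  # approximate 95% credible band
```

Because only one pass over the stored predictions is needed, such an estimator is cheap to compute, consistent with the abstract's claim that the resulting credible bands are fast.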

Original Abstract

TabPFN is a transformer that achieves state-of-the-art performance on supervised tabular tasks by amortizing Bayesian prediction into a single forward pass. However, there is currently no method for uncertainty decomposition in TabPFN. Because it behaves, in an idealised limit, as a Bayesian in-context learner, we cast the decomposition challenge as a Bayesian predictive inference (BPI) problem. The main computational tool in BPI, predictive Monte Carlo, is challenging to apply here as it requires simulating unmodeled covariates. We therefore pursue the asymptotic alternative, filling a gap in the theory for supervised settings by proving a predictive CLT under quasi-martingale conditions. We derive variance estimators determined by the volatility of predictive updates along the context. The resulting credible bands are fast to compute, target epistemic uncertainty, and achieve near-nominal frequentist coverage. For classification, we further obtain an entropy-based uncertainty decomposition.
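For the classification case, the abstract's "entropy-based uncertainty decomposition" commonly takes the mutual-information form: total predictive entropy splits into an aleatoric term (mean entropy of individual predictive distributions) and an epistemic remainder. A hedged sketch under that assumption (the paper's decomposition may be defined differently):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats, with clipping for numerical safety."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

def entropy_decomposition(member_probs):
    """Split total predictive entropy into aleatoric + epistemic parts.

    member_probs: array of shape (M, C) holding M class-probability
    vectors (e.g. predictive distributions along the context).
      total     = H(mean predictive distribution)
      aleatoric = mean of member entropies
      epistemic = total - aleatoric  (a mutual-information term, >= 0
                  by Jensen's inequality)
    """
    probs = np.asarray(member_probs, dtype=float)
    total = entropy(probs.mean(axis=0))
    aleatoric = entropy(probs, axis=-1).mean()
    return total, aleatoric, total - aleatoric

# Toy usage with three binary predictive distributions.
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.75, 0.25]])
total, aleatoric, epistemic = entropy_decomposition(probs)
```

The epistemic term shrinks as the member distributions agree, matching the intuition that disagreement among predictive updates signals model (epistemic) rather than data (aleatoric) uncertainty.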

Tags

TabPFN  Uncertainty decomposition  Bayesian predictive inference  Predictive CLT

arXiv Categories

stat.ML cs.LG stat.ME