Relevance to LLM Reasoning: 7/10

Generalized Bayes for Causal Inference

Emil Javurek, Dennis Frauen, Yuxin Wang, Stefan Feuerriegel
arXiv: 2603.03035v1 · Published: 2026-03-03 · Updated: 2026-03-03

AI Summary

Proposes a generalized Bayesian framework for causal inference that provides reliable uncertainty quantification for causal effects.

Key Contributions

  • Proposes a generalized Bayesian framework for causal inference
  • Avoids explicit likelihood modeling by placing priors directly on the causal estimands
  • Turns loss-based causal estimators into estimators with full uncertainty quantification

Methodology

Priors are placed directly on the causal estimands and updated with an identification-driven loss function. This avoids explicit likelihood modeling and yields generalized posterior inference for causal effects.
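The update described above follows the standard generalized-Bayes (Gibbs posterior) recipe: the posterior is proportional to the prior times exp(−λ·n·loss). As a minimal illustrative sketch (not the paper's implementation), the snippet below places a Gaussian prior on the ATE of a simulated randomized experiment and updates it with a squared-error loss against inverse-propensity pseudo-outcomes; the learning rate λ is an assumed value that would in practice require calibration, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple randomized experiment (true ATE = 2.0).
n = 2000
a = rng.binomial(1, 0.5, n)               # treatment indicator
y = 1.0 + 2.0 * a + rng.normal(0, 1, n)   # outcome

# Identification-driven loss: squared error against inverse-propensity
# pseudo-outcomes (known propensity e = 0.5), whose population
# minimizer is the ATE.
e = 0.5
pseudo = a * y / e - (1 - a) * y / (1 - e)

def loss(theta):
    """Empirical squared-error loss; its minimizer identifies the ATE."""
    return np.mean((pseudo - theta) ** 2)

# Generalized (Gibbs) posterior on a grid:
#   pi_n(theta) ∝ prior(theta) * exp(-lam * n * loss(theta))
grid = np.linspace(0.0, 4.0, 401)
lam = 0.05                                  # assumed learning rate
log_prior = -0.5 * grid**2 / 10.0           # N(0, 10) prior on the ATE
log_post = log_prior - lam * n * np.array([loss(t) for t in grid])
log_post -= log_post.max()                  # stabilize before exponentiating
dx = grid[1] - grid[0]
post = np.exp(log_post)
post /= post.sum() * dx                     # normalize to a density

post_mean = np.sum(grid * post) * dx
print(f"generalized-posterior mean ATE ~ {post_mean:.2f}")
```

The grid approximation is only for transparency; in practice one would sample the generalized posterior with MCMC or a variational scheme, and the choice of λ governs the frequentist calibration of the resulting credible intervals.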

Original Abstract

Uncertainty quantification is central to many applications of causal machine learning, yet principled Bayesian inference for causal effects remains challenging. Standard Bayesian approaches typically require specifying a probabilistic model for the data-generating process, including high-dimensional nuisance components such as propensity scores and outcome regressions. Standard posteriors are thus vulnerable to strong modeling choices, including complex prior elicitation. In this paper, we propose a generalized Bayesian framework for causal inference. Our framework avoids explicit likelihood modeling; instead, we place priors directly on the causal estimands and update these using an identification-driven loss function, which yields generalized posteriors for causal effects. As a result, our framework turns existing loss-based causal estimators into estimators with full uncertainty quantification. Our framework is flexible and applicable to a broad range of causal estimands (e.g., ATE, CATE). Further, our framework can be applied on top of state-of-the-art causal machine learning pipelines (e.g., Neyman-orthogonal meta-learners). For Neyman-orthogonal losses, we show that the generalized posteriors converge to their oracle counterparts and remain robust to first-stage nuisance estimation error. With calibration, we thus obtain valid frequentist uncertainty even when nuisance estimators converge at slower-than-parametric rates. Empirically, we demonstrate that our proposed framework offers causal effect estimation with calibrated uncertainty across several causal inference settings. To the best of our knowledge, this is the first flexible framework for constructing generalized Bayesian posteriors for causal machine learning.

Tags

Causal Inference  Bayesian Inference  Uncertainty Quantification  Machine Learning

arXiv Categories

stat.ML cs.LG