AI Agents relevance: 7/10

What Do We Care About in Bandits with Noncompliance? BRACE: Bandits with Recommendations, Abstention, and Certified Effects

Nicolás Della Penna
arXiv: 2603.09532v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

Studies bandits with noncompliance and proposes the BRACE algorithm, trading off recommendation welfare against direct-control learning targets.

Main Contributions

  • Formalizes the objective-choice problem
  • Proposes the parameter-free BRACE algorithm
  • Derives an orthogonal score for semiparametric IV inference

Methodology

Proposes BRACE, a parameter-free phase-doubling algorithm that performs IV inversion only after matrix certification and otherwise returns full-range but honest structural intervals. For rich contexts, it also derives an orthogonal score for semiparametric IV inference.
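
As a rough illustration of the certify-then-invert idea described above, the sketch below checks whether an estimated compliance matrix is well conditioned before solving for structural effects, and otherwise abstains with full-range intervals. The function name, the singular-value threshold, and the interval half-width are hypothetical placeholders; the paper's actual certification rule, interval construction, and phase-doubling schedule are not reproduced here.

```python
import numpy as np

def brace_phase_intervals(P_hat, q_hat, sigma_min_threshold, half_width, y_lo, y_hi):
    """Hypothetical sketch of a certify-then-invert step in a BRACE-style phase.

    P_hat : estimated square compliance matrix (recommendation -> treatment probabilities).
    q_hat : estimated recommendation-conditional mean outcomes.
    Returns per-treatment structural intervals clipped to the known outcome range.
    """
    k = P_hat.shape[1]
    # Certification: invert only when the compliance matrix is well conditioned.
    sigma_min = np.linalg.svd(P_hat, compute_uv=False).min()
    if sigma_min >= sigma_min_threshold:
        # IV inversion: structural (per-treatment) mean-outcome estimates.
        theta_hat = np.linalg.solve(P_hat, q_hat)
        lo = np.clip(theta_hat - half_width, y_lo, y_hi)
        hi = np.clip(theta_hat + half_width, y_lo, y_hi)
        return np.column_stack([lo, hi])
    # Otherwise abstain: full-range but still valid structural intervals.
    return np.column_stack([np.full(k, y_lo), np.full(k, y_hi)])
```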

Original Abstract

Bandits with noncompliance separate the learner's recommendation from the treatment actually delivered, so the learning target itself must be chosen. A platform may care about recommendation welfare in the current mediated workflow, treatment learning for a future direct-control regime, or anytime-valid uncertainty for one of those targets. These objectives need not agree. We formalize this objective-choice problem, identify the direct-control regime in which recommendation and treatment objectives collapse, and show by example that recommendation welfare can strictly exceed every learner-measurable treatment policy when downstream actors use private information. For finite-context square-IV problems we propose BRACE, a parameter-free phase-doubling algorithm that performs IV inversion only after matrix certification and otherwise returns full-range but honest structural intervals. BRACE delivers simultaneous policy-value validity, fixed-gap identification of the operationally optimal recommendation policy, and fixed-gap identification of the structurally optimal treatment policy under contextual homogeneity and invertibility. We complement the theory with a finite-context empirical benchmark spanning direct control, mediated present-versus-future tradeoffs, weak identification, homogeneity failure, and rectangular overidentification. The experiments show that safety appears as regret on easy problems, as abstention and wide valid intervals under weak identification, as a reason to prefer recommendation welfare under homogeneity failure, and as tighter structural uncertainty when extra instruments are available. For rich contexts, we also derive an orthogonal score whose conditional bias factorizes into compliance-model and outcome-model errors, clarifying what must be stabilized for anytime-valid semiparametric IV inference.
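
For intuition on the factorized-bias property mentioned at the end of the abstract, the block below writes out a standard Neyman-orthogonal score for a partially linear IV model. This is a generic textbook-style score shown only as a sketch of the idea; it is not the specific orthogonal score derived in the paper.

```latex
% Partially linear IV model (illustration only, not the paper's model):
%   Y = \theta D + g(X) + \varepsilon, \qquad \mathbb{E}[\varepsilon \mid Z, X] = 0.
\[
  \psi(W;\theta,\eta)
  = \bigl(Y - \ell(X) - \theta\,(D - r(X))\bigr)\,\bigl(Z - m(X)\bigr),
  \qquad \eta = (\ell, r, m),
\]
\[
  \ell(X) = \mathbb{E}[Y \mid X], \quad
  r(X)    = \mathbb{E}[D \mid X], \quad
  m(X)    = \mathbb{E}[Z \mid X].
\]
% Neyman orthogonality makes the score first-order insensitive to nuisance
% errors, so the residual bias involves only products of nuisance-estimation
% errors, mirroring the compliance-model-times-outcome-model factorization
% described in the abstract.
```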

Tags

Bandit algorithms, Causal inference, Instrumental variables, Recommender systems, Noncompliance

arXiv Categories

stat.ML cs.LG