LLM Reasoning relevance: 9/10

Lifted Relational Probabilistic Inference via Implicit Learning

Luise Ge, Brendan Juba, Kris Nilsson, Alison Shao
arXiv: 2602.14890v1 Published: 2026-02-16 Updated: 2026-02-16

AI Summary

Proposes a first-order relational probabilistic inference framework based on implicit learning, answering probabilistic queries without ever constructing an explicit model.

Key Contributions

  • Proposes a first-order relational probabilistic inference method based on implicit learning
  • Implements two lifts: grounding-lift and world-lift
  • Gives the first polynomial-time framework for learning and inference in a first-order probabilistic logic

Methodology

Merges incomplete first-order axioms and partially observed samples into a bounded-degree fragment of the sum-of-squares (SOS) hierarchy, executes grounding-lift and world-lift in parallel, and obtains a global bound that holds across all consistent worlds.
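The grounding-lift step lets renaming-equivalent ground moments share a single variable. A toy sketch of that symmetry collapse, not the paper's algorithm: the `canonical` function and the `Friend` predicate below are illustrative assumptions, showing only how ground atoms that differ by a renaming of individuals map to one lifted variable.

```python
from itertools import product

def canonical(atom):
    """Renaming pattern of a ground atom: each constant is replaced by its
    first-occurrence index, so atoms differing only by a renaming of
    individuals get the same key (illustrative, not from the paper)."""
    pred, args = atom
    seen = {}
    pattern = tuple(seen.setdefault(a, len(seen)) for a in args)
    return (pred, pattern)

# All groundings of a binary predicate over a 3-element domain.
domain = ["a", "b", "c"]
atoms = [("Friend", pair) for pair in product(domain, repeat=2)]

# Group groundings into renaming-equivalence classes.
classes = {}
for atom in atoms:
    classes.setdefault(canonical(atom), []).append(atom)

# 9 ground atoms collapse to 2 lifted variables:
# the pattern Friend(x, x) and the pattern Friend(x, y) with x != y.
print(len(atoms), len(classes))  # 9 2
```

In the paper's setting the shared variables are moments in an SOS relaxation rather than raw atoms, but the collapse of the individual domain follows the same equivalence-by-renaming idea.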

Original Abstract

Reconciling the tension between inductive learning and deductive reasoning in first-order relational domains is a longstanding challenge in AI. We study the problem of answering queries in a first-order relational probabilistic logic through a joint effort of learning and reasoning, without ever constructing an explicit model. Traditional lifted inference assumes access to a complete model and exploits symmetry to evaluate probabilistic queries; however, learning such models from partial, noisy observations is intractable in general. We reconcile these two challenges through implicit learning to reason and first-order relational probabilistic inference techniques. More specifically, we merge incomplete first-order axioms with independently sampled, partially observed examples into a bounded-degree fragment of the sum-of-squares (SOS) hierarchy in polynomial time. Our algorithm performs two lifts simultaneously: (i) grounding-lift, where renaming-equivalent ground moments share one variable, collapsing the domain of individuals; and (ii) world-lift, where all pseudo-models (partial world assignments) are enforced in parallel, producing a global bound that holds across all worlds consistent with the learned constraints. These innovations yield the first polynomial-time framework that implicitly learns a first-order probabilistic logic and performs lifted inference over both individuals and worlds.
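The world-lift produces a bound that holds across all worlds consistent with the learned constraints. A brute-force sketch of what that guarantee means on a two-proposition toy example (the constraint `p -> q` and the enumeration are illustrative assumptions; the paper achieves such bounds in polynomial time without enumerating worlds):

```python
from itertools import product

# Two ground propositions p, q and one learned constraint: p implies q.
def constraint(p, q):
    return (not p) or q

# Enumerate every world (truth assignment) consistent with the constraint.
worlds = [(p, q) for p, q in product([False, True], repeat=2)
          if constraint(p, q)]

# Bound the query "q" across all consistent worlds. A world-lifted bound
# must hold simultaneously for every such world.
query_vals = [int(q) for _, q in worlds]
lower, upper = min(query_vals), max(query_vals)
print(lower, upper)  # 0 1
```

Here the consistent worlds are (F,F), (F,T), (T,T), so the query's value is only bounded within [0, 1]; adding a constraint that forces `p` would tighten the lower bound to 1.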

Tags

Probabilistic Logic  Relational Reasoning  Implicit Learning  Lifted Inference

arXiv Categories

cs.AI