LLM Reasoning relevance: 9/10

Statistical Parsing for Logical Information Retrieval

Greg Coppola
arXiv: 2602.12170v1  Published: 2026-02-12  Updated: 2026-02-12

AI Summary

The paper extends the QBBN model, combining an LLM with grammatical parsing to perform logical information retrieval over natural language and to improve reasoning capability.

Key Contributions

  • Extends the QBBN model with negation reasoning (see the sketch after this list)
  • Proposes a typed logical language and a grammatical parser
  • Combines an LLM with grammatical parsing to improve logical information retrieval
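
To make the negation contribution concrete, here is a minimal toy sketch in Python. It is not the paper's backward lambda-message implementation; it only shows, via Bayes' rule, how the NEG constraint P(x) + P(neg x) = 1 turns evidence against y into evidence against x under a rule x -> y (modus tollens). All names are illustrative.

    # Toy illustration (not the paper's factor-graph code): under a rule
    # x -> y, the NEG constraint P(x) + P(neg x) = 1 lets evidence
    # against y lower the belief in x (modus tollens).

    def p_x_given_not_y(p_x: float, p_y_given_x: float, p_y_given_not_x: float) -> float:
        """Bayes' rule in the contrapositive direction: P(x | not y)."""
        p_not_y_given_x = 1.0 - p_y_given_x          # NEG constraint on y, given x
        p_not_y_given_not_x = 1.0 - p_y_given_not_x  # NEG constraint on y, given not x
        p_not_y = p_not_y_given_x * p_x + p_not_y_given_not_x * (1.0 - p_x)
        return p_not_y_given_x * p_x / p_not_y

    # A near-hard rule P(y|x) = 0.99: observing "not y" collapses belief in x.
    print(p_x_given_not_y(p_x=0.5, p_y_given_x=0.99, p_y_given_not_x=0.2))  # ~0.012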

Methodology

A typed slot grammar compiles natural language into logical form; an LLM handles preprocessing and reranking, and the QBBN performs inference over the result.
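
The division of labor might be sketched as below. The llm, grammar, and qbbn objects are hypothetical stand-ins, not the API of the released code; the sketch only fixes the order of the four stages.

    # Hypothetical sketch of the four-stage pipeline; the interfaces in
    # the paper's repository may differ.

    def answer(question: str, llm, grammar, qbbn) -> str:
        cleaned = llm.preprocess(question)               # LLM: normalize and disambiguate (e.g., PP attachment)
        candidates = grammar.parse(cleaned)              # typed slot grammar: sentence -> candidate logical forms
        logical_form = llm.rerank(question, candidates)  # LLM: pick the best-fitting candidate
        return qbbn.infer(logical_form)                  # QBBN: probabilistic logical inference

Note that the grammar itself stays deterministic; the LLM appears exactly where statistical judgment is needed (disambiguation and reranking).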

Original Abstract

In previous work (Coppola, 2024) we introduced the Quantified Boolean Bayesian Network (QBBN), a logical graphical model that implements the forward fragment of natural deduction (Prawitz, 1965) as a probabilistic factor graph. That work left two gaps: no negation/backward reasoning, and no parser for natural language. This paper addresses both gaps across inference, semantics, and syntax. For inference, we extend the QBBN with NEG factors enforcing P(x) + P(neg x) = 1, enabling contrapositive reasoning (modus tollens) via backward lambda messages, completing Prawitz's simple elimination rules. The engine handles 44/44 test cases spanning 22 reasoning patterns. For semantics, we present a typed logical language with role-labeled predicates, modal quantifiers, and three tiers of expressiveness following Prawitz: first-order quantification, propositions as arguments, and predicate quantification via lambda abstraction. For syntax, we present a typed slot grammar that deterministically compiles sentences to logical form (33/33 correct, zero ambiguity). LLMs handle disambiguation (95% PP attachment accuracy) but cannot produce structured parses directly (12.4% UAS), confirming grammars are necessary. The architecture: LLM preprocesses, grammar parses, LLM reranks, QBBN infers. We argue this reconciles formal semantics with Sutton's "bitter lesson" (2019): LLMs eliminate the annotation bottleneck that killed formal NLP, serving as annotator while the QBBN serves as verifier. Code: https://github.com/gregorycoppola/world
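
As a concrete reading of the abstract's "role-labeled predicates," a sentence might compile to a predicate whose arguments carry explicit role labels. The rendering below is a hypothetical Python sketch; the paper's typed logical language is richer (modal quantifiers, propositions as arguments, predicate quantification via lambda abstraction), and its concrete syntax may differ.

    # Hypothetical data structure for a role-labeled predicate.
    from dataclasses import dataclass

    @dataclass
    class Predicate:
        head: str              # predicate symbol
        roles: dict[str, str]  # role label -> argument term

    # "Alice sent the book to Bob" might compile to something like:
    p = Predicate(head="send",
                  roles={"agent": "alice", "theme": "book", "recipient": "bob"})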

Tags

Logical Reasoning · Natural Language Processing · Syntactic Parsing · LLM

arXiv Category

cs.AI