LLM Reasoning relevance: 9/10

Learning to Generate Formally Verifiable Step-by-Step Logic Reasoning via Structured Formal Intermediaries

Luoxin Chen, Yichi Zhou, Huishuai Zhang
arXiv: 2603.29500v1 · Published: 2026-03-31 · Updated: 2026-03-31

AI Summary

Proposes PRoSFI, a method that improves the reliability of LLM reasoning by formally verifying intermediate steps, without sacrificing accuracy.

Key Contributions

  • Proposes the PRoSFI reward method, which focuses on the reliability of the reasoning process
  • Uses formal verification to guide the LLM toward generating verifiable reasoning steps
  • Improves the trustworthiness of LLMs on complex reasoning tasks

Methodology

The model emits structured intermediate steps; a formal verifier checks each step, and only reasoning chains that verify in full receive a high reward.
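The all-or-nothing reward described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`prosfi_reward`, `verify_step`) are hypothetical, and a trivial callback stands in for the formal prover.

```python
from typing import Callable, List

def prosfi_reward(steps: List[str],
                  verify_step: Callable[[str], bool]) -> float:
    """All-or-nothing process reward: the chain earns a high reward
    only when every intermediate step passes the verifier."""
    if not steps:
        return 0.0
    return 1.0 if all(verify_step(s) for s in steps) else 0.0

# Toy stand-in for a formal prover: accepts only steps tagged as proved.
def toy_verify(step: str) -> bool:
    return step.endswith("[proved]")

print(prosfi_reward(["a+b=b+a [proved]", "2+3=5 [proved]"], toy_verify))  # 1.0
print(prosfi_reward(["a+b=b+a [proved]", "2+3=6"], toy_verify))           # 0.0
```

A single unverified step zeroes out the whole chain's reward, which is what pushes the policy toward fully machine-checkable reasoning rather than merely correct final answers.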

Original Abstract

Large language models (LLMs) have recently demonstrated impressive performance on complex, multi-step reasoning tasks, especially when post-trained with outcome-rewarded reinforcement learning (Guo et al., 2025). However, it has been observed that outcome rewards often overlook flawed intermediate steps, leading to unreliable reasoning steps even when final answers are correct. To address this unreliable reasoning, we propose PRoSFI (Process Reward over Structured Formal Intermediates), a novel reward method that enhances reasoning reliability without compromising accuracy. Instead of generating formal proofs directly, which a modest-sized (7B) model can rarely accomplish, the model outputs structured intermediate steps aligned with its natural language reasoning. Each step is then verified by a formal prover. Only fully validated reasoning chains receive high rewards. The integration of formal verification guides the model towards generating step-by-step machine-checkable proofs, thereby yielding more credible final answers. PRoSFI offers a simple and effective approach to training trustworthy reasoning models.

Tags

LLM Reasoning · Formal Verification · Reinforcement Learning

arXiv Categories

cs.AI · cs.LG