LLM Reasoning relevance: 9/10

When and Why Does Unsupervised RL Succeed in Mathematical Reasoning? A Manifold Envelopment Perspective

Zelin Zhang, Fei Cheng, Chenhui Chu
arXiv: 2603.16578v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

This paper studies how unsupervised RL improves the mathematical reasoning capabilities of LLMs, and reveals the reasons behind its successes and failures.

Key Contributions

  • Designed and evaluated a suite of intrinsic rewards that promote concise and certain generation
  • Revealed how a base model's foundational logical prior dictates the success or failure of unsupervised RL
  • Introduced a geometric diagnostic method to explain stability and collapse during training

Methodology

The authors design intrinsic rewards, apply them to base models spanning a range of reasoning capabilities, and use geometric diagnostics to analyze the training process.
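To make the idea of an intrinsic reward concrete, here is a minimal sketch of a reward that favors concise and certain generations, the two properties the paper's rewards enforce. The function `intrinsic_reward` and its weights are hypothetical illustrations, not the paper's actual formulation: certainty is proxied by the mean log-probability the policy assigned to its own sampled tokens, and conciseness by a per-token length penalty.

```python
def intrinsic_reward(token_logprobs, length_weight=0.01, certainty_weight=1.0):
    """Hypothetical intrinsic reward sketch (not the paper's exact formula).

    token_logprobs: log-probabilities the policy assigned to each sampled token.
    Higher (closer to 0) mean log-prob = more certain; fewer tokens = more concise.
    """
    n = len(token_logprobs)
    if n == 0:
        return 0.0
    certainty = sum(token_logprobs) / n   # mean log-prob of the generation
    conciseness = -length_weight * n      # linear penalty on generation length
    return certainty_weight * certainty + conciseness

# A short, confident generation scores higher than a long, uncertain one:
short_certain = intrinsic_reward([-0.1, -0.2, -0.1])
long_uncertain = intrinsic_reward([-1.5] * 20)
```

Because such rewards never consult ground-truth answers, they are scalable, but they are also exactly what the abstract warns can be reward-hacked (e.g., by degenerate ultra-short, high-confidence outputs), which motivates the paper's diagnostic analysis.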

Original Abstract

Although outcome-based reinforcement learning (RL) significantly advances the mathematical reasoning capabilities of Large Language Models (LLMs), its reliance on computationally expensive ground-truth annotations imposes a severe scalability bottleneck. Unsupervised RL guided by intrinsic rewards offers a scalable alternative, yet it suffers from opaque training dynamics and catastrophic instability, such as policy collapse and reward hacking. In this paper, we first design and evaluate a suite of intrinsic rewards that explicitly enforce concise and certain generation. Second, to discover the boundaries of this approach, we test base models across a spectrum of intrinsic reasoning capabilities, revealing how a model's foundational logical prior dictates its success or failure. Finally, to demystify why certain configurations stabilize while others collapse, we introduce a novel geometric diagnostic lens, showing that successful cases are enveloped by manifolds. Ultimately, our work goes beyond merely demonstrating that enforcing concise and certain responses successfully boosts mathematical reasoning; we reveal when this unsupervised approach breaks down and geometrically diagnose why.

Tags

Unsupervised RL · LLM · Mathematical Reasoning · Intrinsic Rewards · Geometric Diagnostics

arXiv Categories

cs.LG cs.CL