LLM Reasoning relevance: 9/10

Reasoning over mathematical objects: on-policy reward modeling and test time aggregation

Pranjal Aggarwal, Marjan Ghazvininejad, Seungone Kim, Ilia Kulikov, Jack Lanchantin, Xian Li, Tianjian Li, Bo Liu, Graham Neubig, Anaelia Ovalle, Swarnadeep Saha, Sainbayar Sukhbaatar, Sean Welleck, Jason Weston, Chenxi Whitehouse, Adina Williams, Jing Xu, Ping Yu, Weizhe Yuan, Jingyu Zhang, Wenting Zhao
arXiv: 2603.18886v1 Published: 2026-03-19 Updated: 2026-03-19

AI Summary

The paper proposes a framework for reasoning over mathematical objects, comprising a dataset, training recipes, and test-time aggregation strategies, and reports significant gains in LLM performance on mathematical tasks.

Main Contributions

  • Builds and releases Principia, a suite of training data and benchmarks for reasoning over mathematical objects
  • Proposes training recipes that use LLM judges and verifiers, in particular on-policy judge training (see the sketch after this list)
  • Shows that on-policy training can also be used to scale test-time compute via aggregation
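
On-policy judge training here means labeling samples drawn from the current policy itself, so the judge learns to score the distribution it will actually see at test time. Below is a minimal sketch of that data-collection step; `policy_sample` and `verifier` are hypothetical stand-ins, since this summary does not specify the paper's actual interfaces.

```python
import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class JudgeExample:
    problem: str
    response: str
    label: int  # 1 if the verifier accepts the derived object, else 0


def collect_on_policy_judge_data(
    problems: List[str],
    policy_sample: Callable[[str], str],   # current policy: problem -> candidate derivation
    verifier: Callable[[str, str], bool],  # checks the final mathematical object
    samples_per_problem: int = 4,
) -> List[JudgeExample]:
    """Label samples drawn from the *current* policy, so the judge is
    trained on the same distribution it will score at test time."""
    data: List[JudgeExample] = []
    for problem in problems:
        for _ in range(samples_per_problem):
            response = policy_sample(problem)
            data.append(JudgeExample(problem, response, int(verifier(problem, response))))
    return data


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    problems = ["Differentiate x^2 with respect to x"]
    policy = lambda p: random.choice(["2x", "x^2/2", "2x"])
    verifier = lambda p, r: r == "2x"
    for ex in collect_on_policy_judge_data(problems, policy, verifier):
        print(ex)
```

The resulting labeled examples would then serve as training data for the judge; the key design choice is that the responses come from the policy being evaluated rather than from a fixed offline corpus.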

Methodology

After constructing the dataset, the authors train models on-policy with LLM judges and verifiers, and then apply test-time aggregation to further improve performance.
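
One common way to combine test-time samples with a trained judge is judge-weighted voting over candidate answers. The sketch below illustrates this pattern under stated assumptions: `policy_sample` and `judge_score` are hypothetical callables, and weighted voting is one plausible reading of "aggregation" here, not a confirmed detail of the paper.

```python
import random
from collections import defaultdict
from typing import Callable, Dict


def aggregate_with_judge(
    problem: str,
    policy_sample: Callable[[str], str],       # draws one candidate answer from the policy
    judge_score: Callable[[str, str], float],  # judge's confidence that the answer is correct
    n_samples: int = 8,
) -> str:
    """Judge-weighted voting: sample N candidates, sum judge scores per
    distinct answer, and return the answer with the highest total."""
    scores: Dict[str, float] = defaultdict(float)
    for _ in range(n_samples):
        candidate = policy_sample(problem)
        scores[candidate] += judge_score(problem, candidate)
    return max(scores, key=scores.get)


if __name__ == "__main__":
    # Toy stand-ins: a noisy policy and a judge that prefers the right answer.
    policy = lambda p: random.choice(["2x", "2x", "x"])
    judge = lambda p, a: 0.9 if a == "2x" else 0.4
    print(aggregate_with_judge("Differentiate x^2", policy, judge))
```

Increasing `n_samples` scales test-time compute, which is where a judge trained on-policy pays off: its scores are calibrated to the very samples being aggregated.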

Original Abstract

The ability to precisely derive mathematical objects is a core requirement for downstream STEM applications, including mathematics, physics, and chemistry, where reasoning must culminate in formally structured expressions. Yet, current LM evaluations of mathematical and scientific reasoning rely heavily on simplified answer formats such as numerical values or multiple choice options due to the convenience of automated assessment. In this paper we provide three contributions for improving reasoning over mathematical objects: (i) we build and release training data and benchmarks for deriving mathematical objects, the Principia suite; (ii) we provide training recipes with strong LLM-judges and verifiers, where we show that on-policy judge training boosts performance; (iii) we show how on-policy training can also be used to scale test-time compute via aggregation. We find that strong LMs such as Qwen3-235B and o3 struggle on Principia, whereas our training recipes bring significant improvements across different LLM backbones, while simultaneously improving results on existing numerical and MCQA tasks, demonstrating cross-format generalization of reasoning abilities.

Tags

mathematical reasoning · LLM · datasets · on-policy training

arXiv Categories

cs.AI cs.CL