HorizonMath: Measuring AI Progress Toward Mathematical Discovery with Automatic Verification
AI Summary
HorizonMath introduces a benchmark for measuring AI progress toward mathematical discovery with automatic verification, and it surfaces potential novel contributions from GPT 5.4 Pro.
Main Contributions
- Introduces the HorizonMath benchmark of over 100 predominantly unsolved problems spanning 8 domains of computational and applied mathematics
- Develops an open-source framework that automatically and efficiently verifies candidate solutions
- Identifies two problems on which GPT 5.4 Pro proposes solutions that improve on the best-known published results, representing potential novel contributions
Methodology
Build a benchmark of unsolved mathematics problems and evaluate model-generated solutions with an automated verification framework: the problems are chosen so that discovery requires genuine mathematical insight, while checking a proposed solution is computationally cheap.
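As a rough illustration of this "hard to discover, cheap to verify" evaluation loop, here is a minimal Python sketch. It is not the benchmark's actual API; all names here (`Problem`, `verify`, `best_known`, `evaluate`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    """One benchmark problem: hard to solve, cheap to check.

    `verify` scores a candidate solution (raising on invalid input),
    and `best_known` is the best published value, so any verified
    score beating it flags a potential novel result for expert review.
    """
    name: str
    verify: Callable[[str], float]
    best_known: float

def evaluate(problem: Problem, candidate: str) -> dict:
    """Run the cheap automated check on a model-proposed solution."""
    try:
        score = problem.verify(candidate)
    except Exception as exc:
        return {"problem": problem.name, "valid": False, "error": str(exc)}
    return {
        "problem": problem.name,
        "valid": True,
        "score": score,
        "improves_best_known": score > problem.best_known,
    }

# Toy usage: a hypothetical problem where a candidate is a
# comma-separated point set and the score is its size.
toy = Problem(
    name="toy-packing",
    verify=lambda s: float(len(s.split(","))),
    best_known=10.0,
)
print(evaluate(toy, "p1,p2,p3"))
```

Because the verifier only needs the candidate itself, this design makes scoring objective and contamination-resistant: there is no reference answer to leak into training data.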
Original Abstract
Can AI make progress on important, unsolved mathematical problems? Large language models are now capable of sophisticated mathematical and scientific reasoning, but whether they can perform novel research is still widely debated and underexplored. We introduce HorizonMath, a benchmark of over 100 predominantly unsolved problems spanning 8 domains in computational and applied mathematics, paired with an open-source evaluation framework for automated verification. Our benchmark targets a class of problems where discovery is hard, requiring meaningful mathematical insight, but verification is computationally efficient and simple. Because these solutions are unknown, HorizonMath is immune to data contamination, and most state-of-the-art models score near 0%. Existing research-level benchmarks instead rely on formal proof verification or manual review, both of which are expensive to scale. Using this platform, we find two problems for which GPT 5.4 Pro proposes solutions that improve on the best-known published results, representing potential novel contributions (pending expert review). We release HorizonMath as an open challenge and a growing community resource, where correct solutions to problems in the unsolved problem classes could constitute novel results in the mathematical literature.