Pipeline for Verifying LLM-Generated Mathematical Solutions
AI Summary
Proposes a pipeline for verifying LLMs' mathematical problem solving, supporting both automatic and interactive verification.
Main Contributions
- Proposes a verification pipeline for LLM-generated mathematical solutions
- Uses prompt engineering to elicit solutions in a specific, verifiable form
- Provides an open-source implementation with setup instructions
Methodology
Prompt engineering guides the LLM to produce solution steps in a specific form, which are then checked with a proof assistant.
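To make the idea concrete, a solution emitted in a machine-checkable form might look like the following Lean 4 snippet. This is a hypothetical illustration: the theorem name, the toy statement, and the proof style are assumptions, not the paper's actual prompt template.

```lean
-- Hypothetical example of a solution step rendered as a Lean 4 theorem.
-- A proof assistant accepts the file only if the proof is actually valid,
-- which is what makes this form easier to verify than a bare final answer.
theorem two_plus_two : 2 + 2 = 4 := rfl
```

If the LLM's output can be coerced into this shape, verification reduces to running the Lean checker on the generated file.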
Original Abstract
With the growing popularity of Large Reasoning Models and their results in solving mathematical problems, it becomes crucial to measure their capabilities. We introduce a pipeline for both automatic and interactive verification as a more accurate alternative to only checking the answer, which is currently the most popular approach for benchmarks. The pipeline can also be used as a generator of correct solutions in both formal and informal languages. Three AI agents, which can be selected for the benchmark as needed, are included in the structure. The key idea is the use of prompts to obtain the solution in a specific form, which allows for easier verification using proof assistants and the possible use of small models ($\le 8B$). Experiments on several datasets suggest a low probability of false positives. The open-source implementation with instructions on setting up a server is available at https://github.com/LogicEnj/lean4_verification_pipeline.
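The automatic-verification step described in the abstract can be sketched in Python: extract the formal solution from the model's response, then hand it to the Lean checker. This is a minimal sketch under stated assumptions, not the authors' implementation: the fenced-block response format, the helper names, and the presence of a `lean` binary on PATH are all assumptions (a real setup would use a Lake project with Mathlib, as in the linked repository).

```python
import re
import subprocess
import tempfile
from pathlib import Path


def extract_lean_block(llm_output: str):
    """Return the first ```lean fenced block in a model response, or None.

    The fenced-block response format is an assumption made for this sketch;
    the paper's actual prompt template is not reproduced here.
    """
    match = re.search(r"```lean\n(.*?)```", llm_output, re.DOTALL)
    return match.group(1).strip() if match else None


def check_with_lean(lean_source: str, lean_cmd: str = "lean") -> bool:
    """Write the candidate proof to a temp file and ask Lean to check it.

    Assumes a `lean` executable is available; exit code 0 means the proof
    was accepted, which is the automatic-verification signal.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "Candidate.lean"
        src.write_text(lean_source, encoding="utf-8")
        result = subprocess.run([lean_cmd, str(src)], capture_output=True)
        return result.returncode == 0
```

A benchmark harness would call `extract_lean_block` on each model response and count a solution as correct only when `check_with_lean` succeeds, which is what keeps the false-positive rate low compared to answer-only checking.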