APRES: An Agentic Paper Revision and Evaluation System
AI Summary
APRES uses LLMs and an evaluation rubric predictive of future citation counts to automatically revise papers, improving their quality and impact.
Key Contributions
- Proposes APRES, an agentic paper revision and evaluation system
- Automatically discovers an evaluation rubric that predicts citation counts
- Validates the effectiveness of APRES in improving paper quality and impact
Methodology
An LLM, guided by the automatically discovered evaluation rubric, revises papers to improve their readability and impact without altering the core scientific content.
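The summary above describes the pipeline only at a high level. Below is a minimal, hypothetical sketch of what such a rubric-guided revision loop might look like, assuming a generic chat-completion call (`complete`); the rubric items, prompts, scoring scale, and function names are illustrative assumptions, not APRES's actual implementation.

```python
# Hypothetical sketch of a rubric-guided revision loop in the spirit of
# APRES. The rubric items, prompts, scoring scale, and the `complete`
# client are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass


# APRES discovers its rubric automatically; these placeholders only
# show the shape such criteria might take.
RUBRIC = [
    "Clarity of the problem statement",
    "Readability of the abstract and introduction",
    "Explicitness of the stated contributions",
]


@dataclass
class Review:
    criterion: str
    score: int  # 1 (poor) to 5 (excellent)
    feedback: str


def complete(prompt: str) -> str:
    """Placeholder for any LLM chat-completion call."""
    raise NotImplementedError


def score_paper(text: str) -> list[Review]:
    """Grade the paper against each rubric criterion with the LLM."""
    reviews = []
    for criterion in RUBRIC:
        raw = complete(
            f"Score the paper below from 1-5 on: {criterion}.\n"
            f"Reply exactly as '<score>|<feedback>'.\n\n{text}"
        )
        score, feedback = raw.split("|", 1)
        reviews.append(Review(criterion, int(score), feedback.strip()))
    return reviews


def revise(text: str, max_rounds: int = 3, target: int = 4) -> str:
    """Iteratively rewrite the paper until all criteria meet the target,
    instructing the model to keep the core scientific content intact."""
    for _ in range(max_rounds):
        weak = [r for r in score_paper(text) if r.score < target]
        if not weak:
            break  # every criterion already meets the target score
        notes = "\n".join(f"- {r.criterion}: {r.feedback}" for r in weak)
        text = complete(
            "Revise the paper to address the feedback below WITHOUT "
            f"changing any scientific claims or results:\n{notes}\n\n{text}"
        )
    return text
```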
Original Abstract
Scientific discoveries must be communicated clearly to realize their full potential. Without effective communication, even the most groundbreaking findings risk being overlooked or misunderstood. The primary way scientists communicate their work and receive feedback from the community is through peer review. However, the current system often provides inconsistent feedback between reviewers, ultimately hindering the improvement of a manuscript and limiting its potential impact. In this paper, we introduce APRES, a novel method powered by Large Language Models (LLMs), to update a scientific paper's text based on an evaluation rubric. Our automated method discovers a rubric that is highly predictive of future citation counts and integrates it with APRES in an automated system that revises papers to enhance their quality and impact. Crucially, this objective should be met without altering the core scientific content. We demonstrate the success of APRES, which improves future citation prediction by 19.6% in mean absolute error over the next best baseline, and show that our paper revision process yields papers that are preferred over the originals by human expert evaluators 79% of the time. Our findings provide strong empirical support for using LLMs as a tool to help authors stress-test their manuscripts before submission. Ultimately, our work seeks to augment, not replace, the essential role of human expert reviewers, for it should be humans who discern which discoveries truly matter, guiding science toward advancing knowledge and enriching lives.