LLM Reasoning relevance: 8/10

From Isolated Scoring to Collaborative Ranking: A Comparison-Native Framework for LLM-Based Paper Evaluation

Pujun Zheng, Jiacheng Yao, Jinquan Zheng, Chenyang Gu, Guoxiu He, Jiawei Liu, Yong Huang, Tianrui Guo, Wei Lu
arXiv: 2603.17588v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

Proposes CNPE, an LLM-based paper evaluation framework that ranks paper quality through pairwise comparison, improving the robustness and generalization of evaluation.

Key Contributions

  • Proposes CNPE, a comparison-native framework for paper evaluation.
  • Designs a graph-based similarity ranking algorithm to sample more informative paper pairs from a collection.
  • Enhances relative quality judgment through supervised fine-tuning and reinforcement learning.

Methodology

Builds a graph-based similarity ranking to sample paper pairs, fine-tunes the LLM via supervised fine-tuning and reinforcement learning with comparison-based rewards, then performs pairwise comparisons and aggregates them into a global ranking.
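The exact pair-sampling algorithm is not detailed in this digest; as a rough illustration only, a similarity-driven sampler might pair each paper with its nearest neighbors in embedding space, on the assumption that similar papers make harder, more informative comparisons. All names below (`sample_similar_pairs`, the toy embeddings) are hypothetical:

```python
import numpy as np

def sample_similar_pairs(embeddings: np.ndarray, k: int = 2):
    """Pair each paper with its k most similar neighbors.

    A minimal sketch, assuming cosine similarity between paper
    embeddings as the pairing criterion; the paper's actual
    graph-based algorithm may differ.
    """
    # Normalize rows so dot products equal cosine similarities.
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs

    pairs = set()
    for i in range(len(embeddings)):
        # Take the k highest-similarity neighbors of paper i.
        for j in np.argsort(sim[i])[::-1][:k]:
            pairs.add((min(i, int(j)), max(i, int(j))))
    return sorted(pairs)

# Toy example: 4 papers, two tight similarity clusters.
emb = np.array([[1.0, 0.0, 0.0],
                [0.9, 0.1, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.9, 0.2]])
print(sample_similar_pairs(emb, k=1))  # → [(0, 1), (2, 3)]
```

Deduplicating with an unordered `(min, max)` key keeps each pair from being compared twice when both papers select each other as neighbors.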

Original Abstract

Large language models (LLMs) are currently applied to scientific paper evaluation by assigning an absolute score to each paper independently. However, since score scales vary across conferences, time periods, and evaluation criteria, models trained on absolute scores are prone to fitting narrow, context-specific rules rather than developing robust scholarly judgment. To overcome this limitation, we propose shifting paper evaluation from isolated scoring to collaborative ranking. In particular, we design the Comparison-Native framework for Paper Evaluation (CNPE), integrating comparison into both data construction and model learning. We first propose a graph-based similarity ranking algorithm to facilitate the sampling of more informative and discriminative paper pairs from a collection. We then enhance relative quality judgment through supervised fine-tuning and reinforcement learning with comparison-based rewards. At inference, the model performs pairwise comparisons over sampled paper pairs and aggregates these preference signals into a global relative quality ranking. Experimental results demonstrate that our framework achieves an average relative improvement of 21.8% over the strong baseline DeepReview-14B, while exhibiting robust generalization to five previously unseen datasets. Code: https://github.com/ECNU-Text-Computing/ComparisonReview
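The abstract says pairwise preference signals are aggregated into a global ranking but does not name the aggregation rule. As a hedged sketch only, a simple win-count (Copeland-style) aggregation could look like this; `rank_from_pairwise` and the toy preference list are illustrative assumptions, not the paper's method:

```python
from collections import defaultdict

def rank_from_pairwise(preferences):
    """Aggregate pairwise preferences into a global ranking.

    `preferences` is a list of (winner, loser) paper IDs, e.g. the
    outcomes of LLM pairwise comparisons. Papers are ranked by the
    number of comparisons they win; the actual CNPE aggregation
    rule may be more sophisticated (e.g. Bradley-Terry fitting).
    """
    wins = defaultdict(int)
    papers = set()
    for winner, loser in preferences:
        wins[winner] += 1
        papers.update((winner, loser))
    # Sort papers by descending win count.
    return sorted(papers, key=lambda p: wins[p], reverse=True)

# Toy example: A beats B and C; B beats C.
prefs = [("A", "B"), ("A", "C"), ("B", "C")]
print(rank_from_pairwise(prefs))  # → ['A', 'B', 'C']
```

Win counting is transitive-friendly but ties papers with equal wins; a probabilistic model such as Bradley-Terry would break such ties using comparison strength.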

Tags

LLM · Paper Evaluation · Ranking · Comparison Learning

arXiv Categories

cs.IR cs.CL