Agent Tuning & Optimization Relevance: 8/10

RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation

Sihong Wu, Yiling Ma, Yilun Zhao, Tiansheng Hu, Owen Jiang, Manasi Patwardhan, Arman Cohan
arXiv: 2603.09723v1 Published: 2026-03-10 Updated: 2026-03-10

AI Summary

RbtAct proposes a method that uses peer-review rebuttals to optimize LLMs for generating actionable review feedback, improving the quality of AI-generated reviews.

Main Contributions

  • Proposed the RbtAct framework, which uses rebuttals as an implicit supervision signal
  • Introduced the task of perspective-conditioned segment-level review feedback generation
  • Built RMR-75K, a large dataset of 75K review-rebuttal mappings (a record sketch follows this list)
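
Based on the dataset description above, one plausible per-example layout. This is a minimal sketch with hypothetical field names; the released dataset's actual schema and label vocabularies may differ.

```python
from dataclasses import dataclass

# Hypothetical record layout for RMR-75K (field names are assumptions,
# inferred from the abstract, not the paper's published schema).
@dataclass
class RMRRecord:
    review_segment: str    # one focused reviewer comment
    rebuttal_segment: str  # the author rebuttal segment that addresses it
    perspective: str       # e.g. "experiments" or "writing"
    impact_category: str   # degree of author uptake: concrete revision,
                           # specific plan, or defended only
```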

Methodology

Trains Llama-3.1-8B-Instruct on the RMR-75K dataset with supervised fine-tuning on review segments, followed by preference optimization on rebuttal-derived pairs, to improve the actionability of review feedback.
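
A minimal sketch of the second, preference-optimization stage, assuming Hugging Face TRL's DPOTrainer. The prompt template, pair texts, and hyperparameters below are placeholders, not the paper's actual configuration.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Rebuttal-derived preference pairs: a comment that led to a concrete
# revision or plan is preferred over one that was only defended.
# All texts here are illustrative placeholders.
pairs = Dataset.from_list([
    {
        "prompt": "Paper: <full text>\nPerspective: experiments\n"
                  "Write one focused, actionable review comment.",
        "chosen": "Report mean and variance over three seeds for Table 2; "
                  "single-run numbers cannot support the claimed gap.",
        "rejected": "The experimental section could be more thorough.",
    },
])

trainer = DPOTrainer(
    model=model,                 # in the paper's recipe, the SFT checkpoint
    args=DPOConfig(output_dir="rbtact-dpo", beta=0.1),
    train_dataset=pairs,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```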

Original Abstract

Large language models (LLMs) are increasingly used across the scientific workflow, including to draft peer-review reports. However, many AI-generated reviews are superficial and insufficiently actionable, leaving authors without concrete, implementable guidance and motivating the gap this work addresses. We propose RbtAct, which targets actionable review feedback generation and places existing peer review rebuttal at the center of learning. Rebuttals show which reviewer comments led to concrete revisions or specific plans, and which were only defended. Building on this insight, we leverage rebuttal as implicit supervision to directly optimize a feedback generator for actionability. To support this objective, we propose a new task called perspective-conditioned segment-level review feedback generation, in which the model is required to produce a single focused comment based on the complete paper and a specified perspective such as experiments and writing. We also build a large dataset named RMR-75K that maps review segments to the rebuttal segments that address them, with perspective labels and impact categories that order author uptake. We then train the Llama-3.1-8B-Instruct model with supervised fine-tuning on review segments followed by preference optimization using rebuttal derived pairs. Experiments with human experts and LLM-as-a-judge show consistent gains in actionability and specificity over strong baselines while maintaining grounding and relevance.
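
To make the perspective-conditioned task described above concrete, a minimal sketch of how such a prompt might be assembled. The template wording and perspective names are assumptions; the paper's actual prompt is not reproduced here.

```python
# Hypothetical prompt assembly for perspective-conditioned,
# segment-level review feedback generation.
def build_prompt(paper_text: str, perspective: str) -> str:
    return (
        f"You are reviewing a scientific paper from the '{perspective}' "
        "perspective.\n"
        "Write a single focused, actionable review comment.\n\n"
        f"Paper:\n{paper_text}"
    )

print(build_prompt("<full paper text>", "experiments"))
```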

Tags

LLM · Peer Review · Feedback Generation · Supervised Learning · Preference Optimization

arXiv Categories

cs.CL cs.AI