Who can we trust? LLM-as-a-jury for Comparative Assessment
AI Summary
This paper proposes the BT-sigma model, which improves the accuracy of LLM-based NLG quality evaluation by assessing the reliability of LLM judgements.
Key Contributions
- Proposes BT-sigma, a model for assessing the reliability of LLM judgements
- Demonstrates that LLM judgements are inconsistent, which limits the effectiveness of probability-based ranking
- Shows that BT-sigma outperforms averaging-based aggregation methods
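The inconsistency claim above can be made concrete with a simple transitivity check: under a Bradley-Terry model, pairwise win probabilities must compose additively in logit space, so the deviation from that identity over a three-item cycle is one way to quantify how inconsistent a judge's comparison probabilities are. A minimal sketch (illustrative; the paper's exact cycle-consistency measure may differ):

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def cycle_inconsistency(p_ab, p_bc, p_ac):
    """Under Bradley-Terry, logit(p_ab) + logit(p_bc) = logit(p_ac) holds
    exactly; the absolute deviation from this identity is a simple
    cycle-consistency score (0 = perfectly consistent)."""
    return abs(logit(p_ab) + logit(p_bc) - logit(p_ac))
```

For example, a judge reporting p_ab = 0.8 and p_bc = 0.7 but only p_ac = 0.5 violates transitivity badly, while a judge whose p_ac matches the logit sum scores near zero.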
Methodology
The proposed BT-sigma model is an extension of the Bradley-Terry model that introduces a discriminator parameter for each judge, jointly inferring item rankings and judge reliability from pairwise comparisons alone.
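A minimal sketch of how such a judge-aware model could be fit, assuming the common parameterisation P(i beats j | judge k) = sigmoid(sigma_k * (s_i - s_j)), where s are latent item scores and sigma_k is judge k's discriminator; the paper's exact formulation and optimiser may differ:

```python
import numpy as np

def sigmoid(x):
    # clipped for numerical stability
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def fit_bt_sigma(comparisons, n_items, n_judges, lr=0.1, n_iters=2000):
    """Fit a judge-aware Bradley-Terry model by gradient ascent on the
    log-likelihood. Each comparison is a (winner, loser, judge) triple."""
    s = np.zeros(n_items)            # latent item quality scores
    log_sigma = np.zeros(n_judges)   # log-discriminators (keeps sigma > 0)
    w = np.array([c[0] for c in comparisons])
    l = np.array([c[1] for c in comparisons])
    k = np.array([c[2] for c in comparisons])
    for _ in range(n_iters):
        sigma = np.exp(log_sigma)
        diff = s[w] - s[l]
        resid = 1.0 - sigmoid(sigma[k] * diff)   # per-comparison gradient weight
        grad_s = np.zeros(n_items)
        np.add.at(grad_s, w,  resid * sigma[k])  # winners pushed up
        np.add.at(grad_s, l, -resid * sigma[k])  # losers pushed down
        grad_ls = np.zeros(n_judges)
        np.add.at(grad_ls, k, resid * diff * sigma[k])  # chain rule through exp
        s += lr * grad_s
        log_sigma += lr * grad_ls
        s -= s.mean()   # remove translation invariance of BT scores
    return s, np.exp(log_sigma)
```

In this formulation a judge whose verdicts agree with the consensus ordering earns a large sigma (its comparisons count more), while a near-random judge's sigma shrinks toward zero, downweighting its votes; this is the sense in which the model acts as unsupervised calibration.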
Original Abstract
Large language models (LLMs) are increasingly applied as automatic evaluators for natural language generation assessment, often using pairwise comparative judgements. Existing approaches typically rely on single judges or aggregate multiple judges assuming equal reliability. In practice, LLM judges vary substantially in performance across tasks and aspects, and their judgement probabilities may be biased and inconsistent. Furthermore, human-labelled supervision for judge calibration may be unavailable. We first empirically demonstrate that inconsistencies in LLM comparison probabilities exist and show that they limit the effectiveness of direct probability-based ranking. To address this, we study the LLM-as-a-jury setting and propose BT-sigma, a judge-aware extension of the Bradley-Terry model that introduces a discriminator parameter for each judge to jointly infer item rankings and judge reliability from pairwise comparisons alone. Experiments on benchmark NLG evaluation datasets show that BT-sigma consistently outperforms averaging-based aggregation methods, and that the learned discriminator strongly correlates with independent measures of the cycle consistency of LLM judgements. Further analysis reveals that BT-sigma can be interpreted as an unsupervised calibration mechanism that improves aggregation by modelling judge reliability.