BLITZRANK: Principled Zero-shot Ranking Agents with Tournament Graphs
AI Summary
Proposes a tournament-graph framework for zero-shot LLM reranking that improves both efficiency and accuracy.
Main Contributions
- Proposes a tournament-graph reranking framework
- Designs a query-selection strategy that greedily maximizes information gain
- Handles non-transitive preferences, yielding principled tiered rankings
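The greedy query-selection idea above can be sketched as follows. This is a minimal illustration, not the paper's actual objective: as a simple proxy for information gain, it scores each candidate k-subset by how many of its pairwise orderings are still unresolved, and picks the subset with the most. The document ids and the `known` preference set are hypothetical.

```python
from itertools import combinations

def unknown_pairs(candidates, known):
    """Pairs among `candidates` whose relative order is not yet
    implied by the known preferences (in either direction)."""
    return [(a, b) for a, b in combinations(candidates, 2)
            if (a, b) not in known and (b, a) not in known]

def greedy_query(candidates, known, k):
    """Pick the k-subset with the most unresolved pairs: an upper
    bound on what a single k-wise LLM call can reveal (a simple
    proxy for the paper's information-gain objective)."""
    return max(combinations(candidates, k),
               key=lambda s: len(unknown_pairs(s, known)))

# With d1 > d2 already known, the next 3-wise query should favor
# documents whose mutual order is still entirely unknown.
best = greedy_query(["d1", "d2", "d3", "d4"], {("d1", "d2")}, 3)
print(best)  # → ('d1', 'd3', 'd4')
```

The paper targets identification of the top-m items specifically, so its true objective is presumably weighted toward resolving ranks near the top rather than counting all unresolved pairs equally.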
Methodology
Recasts k-wise reranking as a tournament graph: each k-document comparison contributes its pairwise preferences to a global preference graph, the transitive closure infers additional orderings at no extra cost, and the next query is chosen greedily.
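The aggregation-plus-closure step can be sketched with a few lines of plain Python. This is an illustrative sketch under simple assumptions (document ids `d1`–`d5` are hypothetical; each k-wise call is assumed to return a total order of its k documents), showing how one comparison yields C(k,2) edges and how a Warshall-style closure infers extra orderings without further model invocations.

```python
from itertools import combinations

def add_comparison(edges, ranking):
    """One k-wise LLM call returns a ranking of k doc ids; record
    all C(k,2) implied pairwise preferences as directed edges."""
    for winner, loser in combinations(ranking, 2):
        edges.add((winner, loser))

def transitive_closure(nodes, edges):
    """Warshall-style closure: infer a > c from a > b and b > c,
    with no additional model invocations."""
    reach = set(edges)
    for k in nodes:
        for i in nodes:
            if (i, k) in reach:
                for j in nodes:
                    if (k, j) in reach:
                        reach.add((i, j))
    return reach

# Two 3-wise comparisons over 5 candidate documents.
edges = set()
add_comparison(edges, ["d1", "d2", "d3"])   # call 1: d1 > d2 > d3
add_comparison(edges, ["d3", "d4", "d5"])   # call 2: d3 > d4 > d5
closure = transitive_closure(["d1", "d2", "d3", "d4", "d5"], edges)
print(("d1", "d5") in closure)  # → True, inferred for free
```

Two calls of 3 documents each thus certify all 10 pairwise orderings among 5 documents, which is the efficiency the framework exploits.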
Original Abstract
Large language models have emerged as powerful zero-shot rerankers for retrieval-augmented generation, offering strong generalization without task-specific training. However, existing LLM reranking methods either rely on heuristics that fail to fully exploit the information revealed by each ranking decision or are inefficient when they do. We introduce a tournament graph framework that provides a principled foundation for $k$-wise reranking. Our key observation is that each $k$-document comparison reveals a complete tournament of $\binom{k}{2}$ pairwise preferences. These tournaments are aggregated into a global preference graph, whose transitive closure yields many additional orderings without further model invocations. We formalize when a candidate's rank is certifiably determined and design a query schedule that greedily maximizes information gain towards identifying the top-$m$ items. Our framework also gracefully handles non-transitive preferences - cycles induced by LLM judgments - by collapsing them into equivalence classes that yield principled tiered rankings. Empirically, across 14 benchmarks and 5 LLMs, our method achieves Pareto dominance over existing methods: matching or exceeding accuracy while requiring 25-40% fewer tokens than comparable approaches, and 7$\times$ fewer than pairwise methods at near-identical quality.
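The abstract's handling of non-transitive preferences (cycles collapsed into equivalence classes) corresponds to condensing the preference graph by its strongly connected components. The sketch below, using Kosaraju's algorithm on a hypothetical example, shows how a judgment cycle a > b > c > a collapses into one tier; the tier order is the topological order of the condensation.

```python
from collections import defaultdict

def condense(nodes, edges):
    """Collapse preference cycles (strongly connected components)
    into equivalence classes via Kosaraju's algorithm; the resulting
    condensation is a DAG whose topological order gives tiers."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    order, seen = [], set()
    def dfs_forward(u):          # first pass: record finish order
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs_forward(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs_forward(u)

    comp = {}
    def dfs_reverse(u, c):       # second pass: label one SCC
        comp[u] = c
        for v in radj[u]:
            if v not in comp:
                dfs_reverse(v, c)
    n_tiers = 0
    for u in reversed(order):    # components emerge in tier order
        if u not in comp:
            dfs_reverse(u, n_tiers)
            n_tiers += 1

    groups = defaultdict(list)
    for u, c in comp.items():
        groups[c].append(u)
    return [sorted(groups[c]) for c in range(n_tiers)]

# Hypothetical judgments: a > b, b > c, c > a (a cycle), c > d, d > e.
tiers = condense(["a", "b", "c", "d", "e"],
                 [("a", "b"), ("b", "c"), ("c", "a"),
                  ("c", "d"), ("d", "e")])
print(tiers)  # → [['a', 'b', 'c'], ['d'], ['e']]
```

Documents inside a tier are mutually incomparable under the LLM's cyclic judgments, so reporting them as a tie is the principled outcome rather than forcing an arbitrary total order.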