LLM Reasoning relevance: 7/10

Towards Position-Robust Talent Recommendation via Large Language Models

Silin Du, Hongyan Liu
arXiv: 2604.02200v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

This paper proposes the L3TR framework, which improves LLM-based talent recommendation and mitigates position bias through block attention, local positional encoding, and ID sampling.

Key Contributions

  • Proposes L3TR, a listwise talent recommendation framework built on LLMs
  • Designs a block attention mechanism and a local positional encoding method to enhance inter-document processing and mitigate position bias and concurrent token bias
  • Introduces an ID sampling method to resolve the inconsistency between candidate-set sizes in the training and inference phases
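The ID sampling idea in the last bullet can be illustrated with a small sketch. This is a hypothetical rendering, not the paper's actual algorithm: the function name `sample_candidate_ids` and the uniform-size sampling strategy are assumptions, chosen only to show how varying the candidate-set size at training time keeps the model from overfitting to one fixed list length.

```python
import random

def sample_candidate_ids(pool_size: int, max_ids: int, rng: random.Random) -> list[int]:
    """Hypothetical ID-sampling step (illustrative, not the paper's method).

    Draw a random-size subset of candidate IDs for each training example,
    so the model is exposed to variable candidate-set sizes during training
    instead of a single fixed size that may differ at inference time.
    """
    # Sample a set size uniformly between 1 and the allowed maximum.
    k = rng.randint(1, min(max_ids, pool_size))
    # Sample k distinct candidate IDs without replacement.
    return sorted(rng.sample(range(pool_size), k))
```

At inference time the candidate set is used as-is; only training batches are subsampled this way.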

Methodology

Leverages the LLM's latent output for the recommendation task, and combines block attention, local positional encoding, and ID sampling to address position bias and the mismatch between training-time and inference-time candidate-set sizes.
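A minimal sketch of the block-attention and local-position ideas, under assumptions of my own (the paper's exact formulation is not given here): candidate documents attend causally within their own block and to a shared prefix such as the job description, but not across blocks, and position ids restart at each block so a candidate's encoding does not depend on where it sits in the list. All function names below are illustrative.

```python
import numpy as np

def block_attention_mask(block_lens: list[int], prefix_len: int) -> np.ndarray:
    """Boolean mask: True where a query token may attend to a key token.

    Shared prefix tokens use ordinary causal attention; each candidate
    block attends to the full prefix and causally within itself only,
    never to other candidate blocks.
    """
    total = prefix_len + sum(block_lens)
    mask = np.zeros((total, total), dtype=bool)
    for i in range(prefix_len):
        mask[i, : i + 1] = True                # causal over the prefix
    start = prefix_len
    for length in block_lens:
        for i in range(start, start + length):
            mask[i, :prefix_len] = True        # attend to the shared prefix
            mask[i, start : i + 1] = True      # causal within this block only
        start += length
    return mask

def local_position_ids(block_lens: list[int], prefix_len: int) -> list[int]:
    """Local positional encoding sketch: positions restart at every block,
    so identical candidates get identical position ids regardless of rank."""
    pos = list(range(prefix_len))
    for length in block_lens:
        pos.extend(range(prefix_len, prefix_len + length))
    return pos
```

For example, with a 2-token prefix and candidate blocks of lengths 2 and 3, tokens of the second block can see the prefix and themselves but none of the first block, and both blocks share the same local positions.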

Original Abstract

Talent recruitment is a critical yet costly process for many industries, with high recruitment costs and long hiring cycles. Existing talent recommendation systems increasingly adopt large language models (LLMs) due to their remarkable language understanding capabilities. However, most prior approaches follow a pointwise paradigm, which requires LLMs to repeatedly process the same text and fails to capture the relationships among candidates in the list, resulting in higher token consumption and suboptimal recommendations. Besides, LLMs exhibit position bias and the lost-in-the-middle issue when answering multiple-choice questions and processing multiple long documents. To address these issues, we introduce an implicit strategy to utilize the LLM's potential output for the recommendation task and propose L3TR, a novel framework for listwise talent recommendation with LLMs. In this framework, we propose a block attention mechanism and a local positional encoding method to enhance inter-document processing and mitigate the position bias and concurrent token bias issues. We also introduce an ID sampling method for resolving the inconsistency between candidate-set sizes in the training phase and the inference phase. We design evaluation methods to detect position bias and token bias, as well as training-free debiasing methods. Extensive experiments on two real-world datasets validated the effectiveness of L3TR, showing consistent improvements over existing baselines.

Tags

Talent Recommendation · LLM · Position Bias · Listwise Learning

arXiv Category

cs.CL