AI Agents relevance: 9/10

One Model Is Enough: Native Retrieval Embeddings from LLM Agent Hidden States

Bo Jiang
arXiv: 2603.08429v1 Published: 2026-03-09 Updated: 2026-03-09

AI Summary

Proposes a new retrieval method for LLM agents: a projection layer maps the LLM's hidden states directly into the embedding space, removing the need for a separate embedding model.

Main Contributions

  • Proposes a new native retrieval method
  • Reduces the number of models in the retrieval pipeline, lowering complexity and latency
  • Validates the method's effectiveness on the QReCC dataset

Methodology

A lightweight projection head maps the LLM's hidden states into the embedding space; the head is trained with a combination of alignment, contrastive, and rank distillation losses.
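The three training losses can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the projection head is reduced to a single linear layer, all tensor sizes and the temperature `tau` are made-up toy values, the teacher/document embeddings are random stand-ins for a real frozen embedding model, and the unweighted sum of losses is an assumption (the paper's actual loss weights are not given in this digest).

```python
import numpy as np

def l2norm(x):
    # Normalize rows to unit length so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
B, H, D = 4, 16, 8  # toy sizes: batch, LLM hidden size, embedding dim

hidden = rng.normal(size=(B, H))           # last-token hidden states from the LLM (stand-in)
W = rng.normal(size=(H, D)) * 0.1          # lightweight projection head (one linear layer)
teacher = l2norm(rng.normal(size=(B, D)))  # query embeddings from a frozen teacher model (stand-in)
docs = l2norm(rng.normal(size=(B, D)))     # document embeddings; positives on the diagonal

student = l2norm(hidden @ W)               # natively projected query embeddings

# 1) Alignment loss: pull each projected embedding toward the teacher's embedding.
align = np.mean(np.sum((student - teacher) ** 2, axis=-1))

# 2) Contrastive (InfoNCE) loss: the matching document should score highest in-batch.
tau = 0.05
logits = (student @ docs.T) / tau
logits -= logits.max(axis=1, keepdims=True)           # stabilize the softmax
logprobs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
contrastive = -np.mean(np.diag(logprobs))

# 3) Rank distillation: KL divergence from the teacher's score distribution
#    over documents to the student's, so the student mimics the teacher's ranking.
t_logits = (teacher @ docs.T) / tau
t_logits -= t_logits.max(axis=1, keepdims=True)
t_probs = np.exp(t_logits) / np.exp(t_logits).sum(axis=1, keepdims=True)
rank_kl = np.mean(np.sum(t_probs * (np.log(t_probs) - logprobs), axis=1))

total = align + contrastive + rank_kl  # equal weights assumed here for illustration
```

In a real setup the hidden states would come from the agent LLM's forward pass and the teacher/document vectors from the baseline embedding model, with each loss term weighted and ablated as the abstract describes.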

Original Abstract

LLM agents that retrieve external knowledge typically generate a search query as text, then run a separate embedding model to encode it into a vector. This two-model pipeline adds infrastructure complexity and latency, yet is redundant: the LLM already encodes the full conversational context in its hidden states. We propose equipping LLM agents with native retrieval capability by adding a lightweight projection head that maps hidden states directly into the embedding space, eliminating the need for a separate embedding model. Trained with a combination of alignment, contrastive, and rank distillation losses, our method retains 97% of baseline retrieval quality while enabling the LLM agent to search with its own representations. Experiments on the QReCC conversational search benchmark show competitive Recall@10 and MRR@10 compared to the standard generate-then-encode pipeline, with systematic ablations confirming the contribution of each loss component.

Tags

LLM Agent Retrieval Embedding Hidden State

arXiv Categories

cs.CL cs.AI cs.IR