Agent Tuning & Optimization · Relevance: 9/10

Position: Vector Prompt Interfaces Should Be Exposed to Enable Customization of Large Language Models

Liangwei Yang, Shiyu Wang, Haolin Chen, Rithesh Murthy, Ming Zhu, Jielin Qiu, Zixiang Chen, Juntao Tan, Jianguo Zhang, Zhiwei Liu, Wenting Zhao, Silvio Savarese, Caiming Xiong, Huan Wang, Shelby Heinecke
arXiv: 2603.04292v1 · Published: 2026-03-04 · Updated: 2026-03-04

AI Summary

The paper argues for exposing vector prompt interfaces to improve the customizability of LLMs, presents evidence that vector prompts outperform text prompts, and discusses security considerations and application prospects.

Key Contributions

  • Makes the case for exposing vector prompt interfaces
  • Presents evidence that vector prompts outperform text prompts
  • Discusses the importance of inference-time customization
  • Examines the security implications of exposing vector prompt interfaces

Methodology

Experiments compare the optimization behavior of vector prompts and text prompts, and analyze the internal mechanisms behind the difference.
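To make the distinction concrete, here is a minimal sketch of what a vector prompt interface could look like. It is illustrative only and not the paper's implementation: the frozen embedding table, the `build_inputs` helper, and all dimensions are assumptions. The key point it demonstrates is that a text prompt can only select rows of the embedding table, while a vector prompt is an arbitrary continuous sequence prepended to the embedding stream, tunable by gradient descent while the model stays frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 100, 8
# Frozen token-embedding table of a hypothetical model.
embedding_table = rng.normal(size=(VOCAB, DIM))

def build_inputs(token_ids, vector_prompt=None):
    """Map token ids to embeddings; optionally prepend continuous
    vector prompts that never pass through the tokenizer."""
    token_embs = embedding_table[np.asarray(token_ids)]   # (T, DIM)
    if vector_prompt is not None:
        return np.concatenate([vector_prompt, token_embs], axis=0)
    return token_embs

# A text prompt is restricted to rows of the embedding table...
text_inputs = build_inputs([3, 17, 42])                   # shape (3, 8)

# ...whereas a vector prompt can be any point in embedding space.
soft_prompt = rng.normal(size=(4, DIM)) * 0.02            # 4 learnable vectors
vec_inputs = build_inputs([3, 17, 42], vector_prompt=soft_prompt)  # shape (7, 8)
```

In vector prompt tuning, only `soft_prompt` receives gradient updates from supervision; the model and embedding table are untouched, which is what makes the interface inference-only from the provider's perspective.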

Original Abstract

As large language models (LLMs) transition from research prototypes to real-world systems, customization has emerged as a central bottleneck. While text prompts can already customize LLM behavior, we argue that text-only prompting does not constitute a suitable control interface for scalable, stable, and inference-only customization. This position paper argues that model providers should expose *vector prompt inputs* as part of the public interface for customizing LLMs. We support this position with diagnostic evidence showing that vector prompt tuning continues to improve with increasing supervision whereas text-based prompt optimization saturates early, and that vector prompts exhibit dense, global attention patterns indicative of a distinct control mechanism. We further discuss why inference-only customization is increasingly important under realistic deployment constraints, and why exposing vector prompts need not fundamentally increase model leakage risk under a standard black-box threat model. We conclude with a call to action for the community to rethink prompt interfaces as a core component of LLM customization.

Tags

LLM · Prompt Engineering · Customization · Vector Prompt · Inference

arXiv Categories

cs.CL