Agent Tuning & Optimization · Relevance: 8/10

PRISM-$\Delta$: Differential Subspace Steering for Prompt Highlighting in Large Language Models

Yuyao Ge, Shenghua Liu, Yiwei Wang, Tianyu Liu, Baolong Bi, Lingrui Mei, Jiayu Yao, Jiafeng Guo, Xueqi Cheng
arXiv: 2603.10705v1 · Published: 2026-03-11 · Updated: 2026-03-11

AI Summary

PRISM-$\Delta$ performs prompt highlighting via differential subspace steering, improving LLM generation quality while reducing the fluency cost of steering.

Key Contributions

  • Proposes PRISM-$\Delta$, which decomposes the difference between positive and negative cross-covariance matrices to extract steering directions
  • Assigns each attention head a continuous softplus importance weight to modulate its contribution
  • Extends the framework to Value representations, exploiting content-channel signal
  • Significantly improves performance on multiple benchmarks while reducing the fluency cost of steering

Methodology

PRISM-$\Delta$ decomposes the difference between the covariance matrices of positive and negative samples to find the most discriminative directions, then uses continuous softplus weights to modulate each attention head's contribution.
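As a rough illustration of the extraction step, the sketch below eigendecomposes the difference of two covariance matrices. This is a minimal NumPy sketch assuming plain per-head activation covariances stand in for the paper's cross-covariance matrices; every name in it (`differential_steering_directions`, `H_pos`, `H_neg`, `k`) is illustrative rather than from the paper.

```python
import numpy as np

def differential_steering_directions(H_pos, H_neg, k=1):
    """Sketch: steering directions from a difference of covariances.

    H_pos, H_neg: (n_samples, d) per-head activations collected on
    relevant vs. irrelevant contexts (hypothetical inputs).
    Returns the k most discriminative directions and their energies.
    """
    # Center each set so the covariances reflect variation, not means.
    Hp = H_pos - H_pos.mean(axis=0, keepdims=True)
    Hn = H_neg - H_neg.mean(axis=0, keepdims=True)

    # Covariance of each set; structure shared by both contexts shows
    # up in both matrices and cancels in the difference.
    C_pos = Hp.T @ Hp / len(Hp)
    C_neg = Hn.T @ Hn / len(Hn)
    C_diff = C_pos - C_neg

    # Symmetric eigendecomposition: eigenvectors with the largest
    # positive eigenvalues carry the most discriminative energy.
    eigvals, eigvecs = np.linalg.eigh(C_diff)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:k]], eigvals[order[:k]]
```

Subtracting before decomposing is what removes shared structure: a direction with equal energy in both covariances contributes zero to the difference matrix and cannot dominate its spectrum.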

Original Abstract

Prompt highlighting steers a large language model to prioritize user-specified text spans during generation. A key challenge is extracting steering directions that capture the difference between relevant and irrelevant contexts, rather than shared structural patterns common to both. We propose PRISM-$\Delta$ (Projection-based Relevance-Informed Steering Method), which decomposes the difference between positive and negative cross-covariance matrices to maximize discriminative energy while eliminating shared directions. Each attention head receives a continuous softplus importance weight, letting weak-but-useful heads contribute at reduced strength. The framework extends naturally to Value representations, capturing content-channel signal that Key-only methods leave unused. Across four benchmarks and five models, PRISM-$\Delta$ matches or exceeds the best existing method on 19 of 20 configurations, with relative gains up to +10.6%, while halving the fluency cost of steering. PRISM-$\Delta$ also scales to long-context retrieval, outperforming the best existing method by up to +4.8% relative gain. PRISM-$\Delta$ is compatible with FlashAttention and adds negligible memory overhead.
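To make the softplus head weighting concrete, here is a hedged PyTorch sketch of how a per-head positive weight could scale a steering direction added to Key or Value tensors at inference time. The additive update, the uniform application over all positions, and the names `head_logits` and `steer_heads` are assumptions for illustration; the actual method presumably restricts steering to the highlighted span.

```python
import torch
import torch.nn.functional as F

N_HEADS, D_HEAD = 32, 128

# Hypothetical learnable scalar per attention head; softplus keeps each
# weight positive, so weak-but-useful heads contribute at reduced
# strength instead of being hard-pruned.
head_logits = torch.zeros(N_HEADS, requires_grad=True)

def steer_heads(x, direction, alpha=1.0):
    """Add a weighted steering direction to per-head representations.

    x:         (batch, n_heads, seq_len, d_head) Key or Value tensor
    direction: (n_heads, d_head) steering direction for each head
    alpha:     global steering strength
    """
    w = F.softplus(head_logits)  # (n_heads,) continuous importance weights
    # Broadcast so each head's direction is scaled by its own weight.
    return x + alpha * w.view(1, -1, 1, 1) * direction[None, :, None, :]
```

Since this only adds a fixed vector to the K/V tensors and stores one scalar plus one direction per head, it would compose with fused attention kernels, which is consistent with the abstract's claims of FlashAttention compatibility and negligible memory overhead.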

Tags

Prompt Engineering · Large Language Models · Steering · Attention Mechanism

arXiv Category

cs.CL