LLM Reasoning relevance: 8/10

Do Emotions in Prompts Matter? Effects of Emotional Framing on Large Language Models

Minda Zhao, Yutong Yang, Chufei Peng, Rachel Gonsalves, Weiyue Li, Ruyi Yang, Zhixi Liu, Mengyu Wang
arXiv: 2604.02236v1 Published: 2026-04-02 Updated: 2026-04-02

AI Summary

This work studies how emotional prompting affects large language models, finding that its impact is small and task-dependent, and that adaptive emotional prompting is more effective.

Key Contributions

  • Evaluates the effect of emotional prompting on LLMs across multiple tasks
  • Proposes EmotionRL, an adaptive emotional prompting framework
  • Finds that the effects of emotional prompting are more pronounced in social tasks

Methodology

Using both static emotional prefixes and an adaptive emotional prompting method, the authors evaluate how different emotions affect LLM performance on multiple benchmark datasets.
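A minimal sketch of what per-query adaptive emotion selection could look like. The paper's actual EmotionRL design is not detailed in this summary, so the prefixes, the epsilon-greedy rule, and all names below are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical candidate prefixes; the paper's actual emotion set may differ.
EMOTION_PREFIXES = {
    "neutral": "",
    "anxious": "I'm really worried about getting this wrong. ",
    "excited": "I'm so excited to figure this out! ",
}

class EmotionBandit:
    """Illustrative epsilon-greedy selection of an emotional prefix per query."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}  # running mean reward per prefix

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore a random prefix
        return max(self.values, key=self.values.get)  # exploit best-known prefix

    def update(self, arm, reward):
        # Incremental mean update with the observed reward (e.g. task accuracy).
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def frame_query(query, emotion):
    """Prepend the chosen emotional framing to the user query."""
    return EMOTION_PREFIXES[emotion] + query

bandit = EmotionBandit(EMOTION_PREFIXES)
emotion = bandit.select()
prompt = frame_query("What is 17 * 24?", emotion)
# After scoring the model's answer, feed the reward back:
bandit.update(emotion, reward=1.0)
```

The key idea matching the abstract is that no single emotion wins everywhere, so the selector learns which framing pays off per input rather than fixing one prefix globally.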

Original Abstract

Emotional tone is pervasive in human communication, yet its influence on large language model (LLM) behaviour remains unclear. Here, we examine how first-person emotional framing in user-side queries affects LLM performance across six benchmark domains, including mathematical reasoning, medical question answering, reading comprehension, commonsense reasoning and social inference. Across models and tasks, static emotional prefixes usually produce only small changes in accuracy, suggesting that affective phrasing is typically a mild perturbation rather than a reliable general-purpose intervention. This stability is not uniform: effects are more variable in socially grounded tasks, where emotional context more plausibly interacts with interpersonal reasoning. Additional analyses show that stronger emotional wording induces only modest extra change, and that human-written prefixes reproduce the same qualitative pattern as LLM-generated ones. We then introduce EmotionRL, an adaptive emotional prompting framework that selects emotional framing adaptively for each query. Although no single emotion is consistently beneficial, adaptive selection yields more reliable gains than fixed emotional prompting. Together, these findings show that emotional tone is neither a dominant driver of LLM performance nor irrelevant noise, but a weak and input-dependent signal that can be exploited through adaptive control.

Tags

LLM Emotional Prompting Prompt Engineering Adaptive Learning

arXiv Categories

cs.AI