LLM Reasoning relevance: 9/10

EmoLLM: Appraisal-Grounded Cognitive-Emotional Co-Reasoning in Large Language Models

Yifei Zhang, Mingyang Li, Henry Gao, Liang Zhao
arXiv: 2603.16553v1 Published: 2026-03-17 Updated: 2026-03-17

AI Summary

EmoLLM improves the emotional intelligence of large language models in dialogue through appraisal-grounded cognitive-emotional co-reasoning.

Main Contributions

  • Proposes EmoLLM, a framework grounded in appraisal theory
  • Designs an explicit Appraisal Reasoning Graph (ARG) for intermediate reasoning
  • Trains the model with reinforcement learning in a multi-turn role-play environment

Methodology

Constructs an Appraisal Reasoning Graph, then trains the model with reinforcement learning guided by reverse-perspective reasoning, optimizing both emotional state outcomes and response quality.
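The summary does not spell out the ARG's schema; the abstract says it structures reasoning over contextual facts, inferred user needs, appraisal dimensions, emotional states, and response strategies. A minimal illustrative sketch of such a graph in Python, where all class names, node kinds, and the toy dialogue content are assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical node/graph types for an Appraisal Reasoning Graph (ARG);
# the paper's actual data structures may differ.
@dataclass
class ARGNode:
    kind: str                 # one of: fact, need, appraisal, emotion, strategy
    content: str
    parents: list = field(default_factory=list)  # nodes this one is inferred from

@dataclass
class AppraisalReasoningGraph:
    nodes: list = field(default_factory=list)

    def add(self, kind: str, content: str, parents=()) -> ARGNode:
        node = ARGNode(kind, content, list(parents))
        self.nodes.append(node)
        return node

# Build a toy ARG for one turn of an emotional-support dialogue.
arg = AppraisalReasoningGraph()
fact = arg.add("fact", "User's laptop crashed right before a deadline.")
need = arg.add("need", "Recover work quickly; feel reassured.", [fact])
appr = arg.add("appraisal", "High goal relevance, low perceived coping capacity.", [need])
emo = arg.add("emotion", "anxiety", [appr])
strat = arg.add("strategy", "Acknowledge the stress, then give concrete recovery steps.",
                [appr, emo])

print(strat.kind, "<-", [p.kind for p in strat.parents])
# -> strategy <- ['appraisal', 'emotion']
```

The point of the explicit graph is that the response strategy is derived from the appraisal and emotion nodes rather than generated directly from the raw context, which is what makes the intermediate reasoning inspectable.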

Original Abstract

Large language models (LLMs) demonstrate strong cognitive intelligence (IQ), yet many real-world interactions also require emotional intelligence (EQ) to produce responses that are both factually reliable and emotionally appropriate. In settings such as emotional support, technical assistance, and consultation, effective dialogue depends on how situations are appraised with respect to the user's needs, goals, and coping capacity. Inspired by appraisal theory, we propose EmoLLM, an appraisal-grounded framework for IQ/EQ co-reasoning in dialogue. EmoLLM uses an explicit Appraisal Reasoning Graph (ARG) to structure intermediate reasoning over contextual facts, inferred user needs, appraisal dimensions, emotional states, and response strategies before generating a reply. We train EmoLLM in a multi-turn role-play environment with reinforcement learning, where reverse-perspective reasoning provides reward signals based on predicted user-side consequences of responses. Across diverse dialogue settings, EmoLLM improves emotional state outcomes and response quality over strong baselines while preserving strong factual reliability.
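The abstract describes reverse-perspective reasoning as providing reward signals based on predicted user-side consequences of a response. A hedged sketch of that idea, where `predict_user_state`, the reward weights, and the keyword heuristic are all stand-in assumptions (a real system would use a learned user simulator in the role-play environment):

```python
# Sketch of reverse-perspective reward shaping: score a candidate reply by
# the user-side emotional state it is predicted to induce. The predictor
# below is a stub heuristic, not the paper's trained user model.

def predict_user_state(dialogue: list[str], reply: str) -> dict:
    # Stub: a real system would role-play the user and infer their next
    # emotional state; here we crudely check for supportive language.
    calming = any(w in reply.lower() for w in ("understand", "help", "step"))
    return {"valence": 0.6 if calming else -0.2, "resolved": calming}

def reward(dialogue: list[str], reply: str,
           w_emotion: float = 0.5, w_task: float = 0.5) -> float:
    # Combine predicted emotional outcome with task progress (both assumed terms).
    state = predict_user_state(dialogue, reply)
    return w_emotion * state["valence"] + w_task * float(state["resolved"])

history = ["User: My laptop crashed right before my deadline!"]
good = "I understand how stressful that is; let's recover your files step by step."
bad = "Laptops crash sometimes."
print(reward(history, good) > reward(history, bad))  # -> True
```

The design choice this illustrates: the reward is computed from the predicted consequence for the user, not from surface features of the reply itself, which is what lets the RL objective optimize emotional outcomes alongside factual quality.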

Tags

Emotional Intelligence, Appraisal, Reasoning, Reinforcement Learning, Dialogue Systems

arXiv Categories

cs.CL cs.AI