The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions
AI Summary
Examines how the personality expressions of LLM-powered conversational agents affect user perceptions and decisions, finding that a pessimistic personality had notably significant effects.
Key Contributions
- Analyzed how three dimensions of a conversational agent's linguistically expressed personality (attitude, authority, and reasoning style) affect user behavior.
- Revealed how personality-projecting conversational agents can subtly influence users' perceptions and emotional states.
- Highlighted the potential risk of conversational agents serving as instruments of manipulation, particularly in charitable-giving scenarios.
Methodology
In a crowdsourced experiment, 360 participants interacted with conversational agents configured with different personalities, and their donation behavior and emotional perceptions were observed.
Original Abstract
Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language, but how these projections affect users is unclear. We thus examine how CA personalities expressed linguistically affect user decisions and perceptions in the context of charitable giving. In a crowdsourced study, 360 participants interacted with one of eight CAs, each projecting a personality composed of three linguistic aspects: attitude (optimistic/pessimistic), authority (authoritative/submissive), and reasoning (emotional/rational). While the CA's composite personality did not affect participants' decisions, it did affect their perceptions and emotional responses. In particular, participants interacting with pessimistic CAs reported a lower emotional state and lower affinity toward the cause, perceived the CA as less trustworthy and less competent, and yet tended to donate more to the charity. Perceptions of trust, competence, and situational empathy significantly predicted donation decisions. Our findings emphasize the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.