Inherited Goal Drift: Contextual Pressure Can Undermine Agentic Goals
AI 摘要
研究表明,即使是最新的语言模型智能体仍然容易受到环境压力导致的目标漂移。
主要贡献
- 揭示了最先进的语言模型智能体在特定条件下会继承目标漂移
- 分析了不同模型家族在继承目标漂移方面的差异
- 提供了初步证据表明目标漂移现象具有跨环境的可迁移性
方法论
通过模拟股票交易和急诊室分诊环境,评估语言模型智能体在不同压力下的目标漂移情况。
原文摘要
The accelerating adoption of language models (LMs) as agents for deployment in long-context tasks motivates a thorough understanding of goal drift: agents' tendency to deviate from an original objective. While prior-generation language model agents have been shown to be susceptible to drift, the extent to which drift affects more recent models remains unclear. In this work, we provide an updated characterization of the extent and causes of goal drift. We investigate drift in state-of-the-art models within a simulated stock-trading environment (Arike et al., 2025). These models are largely shown to be robust even when subjected to adversarial pressure. We show, however, that this robustness is brittle: across multiple settings, the same models often inherit drift when conditioned on prefilled trajectories from weaker agents. The extent of conditioning-induced drift varies significantly by model family, with only GPT-5.1 maintaining consistent resilience among tested models. We find that drift behavior is inconsistent between prompt variations and correlates poorly with instruction hierarchy following behavior, with strong hierarchy following failing to reliably predict resistance to drift. Finally, we run analogous experiments in a new emergency room triage environment to show preliminary evidence for the transferability of our results across qualitatively different settings. Our findings underscore the continued vulnerability of modern LM agents to contextual pressures and the need for refined post-training techniques to mitigate this.