Are Dilemmas and Conflicts in LLM Alignment Solvable? A View from Priority Graph
AI Summary
The paper analyzes dilemmas in LLM alignment, proposes modeling them with a priority graph, and discusses adversarial attacks and runtime verification.
Key Contributions
- Summarizes and taxonomizes the conflicts and dilemmas faced by LLMs
- Proposes modeling an LLM's preferences as a priority graph
- Proposes a runtime verification mechanism to resist adversarial attacks
Methodology
Models an LLM's instruction and value preferences as a priority graph, and strengthens the model's robustness through a runtime verification mechanism.
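As a rough illustration of the idea (a minimal sketch with hypothetical names, not code from the paper), the priority graph can be represented as context-keyed directed edges between instruction/value nodes, where an edge (a, b) means "a takes priority over b" in that context. A cycle in a context's edges then signals an inconsistent preference order, which is what a "priority hacking" context can induce:

```python
class PriorityGraph:
    """Sketch of a priority graph: nodes are instructions/values,
    edges are context-specific priority relations (hypothetical API)."""

    def __init__(self):
        # context -> set of (higher_priority_node, lower_priority_node)
        self.edges = {}

    def add_priority(self, context, higher, lower):
        self.edges.setdefault(context, set()).add((higher, lower))

    def is_consistent(self, context):
        """A consistent preference order has no cycles
        (e.g. no 'a > b' together with 'b > a')."""
        adj = {}
        for high, low in self.edges.get(context, set()):
            adj.setdefault(high, set()).add(low)

        visited, on_path = set(), set()

        def acyclic_from(node):
            if node in on_path:      # back-edge: cycle found
                return False
            if node in visited:
                return True
            on_path.add(node)
            for nxt in adj.get(node, ()):
                if not acyclic_from(nxt):
                    return False
            on_path.remove(node)
            visited.add(node)
            return True

        return all(acyclic_from(n) for n in adj)


g = PriorityGraph()
g.add_priority("default", "safety", "helpfulness")
g.add_priority("default", "helpfulness", "verbosity")
print(g.is_consistent("default"))    # True: a clean priority order

# A deceptive context that flips an edge creates a cycle:
g.add_priority("deceptive", "helpfulness", "safety")
g.add_priority("deceptive", "safety", "helpfulness")
print(g.is_consistent("deceptive"))  # False: inconsistent priorities
```

This also hints at why a unified, stable alignment is hard: consistency must hold per context, and edges can differ or even reverse across contexts.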
Original Abstract
As Large Language Models (LLMs) become more powerful and autonomous, they increasingly face conflicts and dilemmas in many scenarios. We first summarize and taxonomize these diverse conflicts. Then, we model the LLM's preferences over different choices as a priority graph, where instructions and values are nodes, and the edges represent context-specific priorities determined by the model's output distribution. This graph reveals that a unified, stable LLM alignment is very challenging, because the graph is neither static nor necessarily consistent across contexts. It also reveals a potential vulnerability: priority hacking, where adversaries craft deceptive contexts to manipulate the graph and bypass safety alignments. To counter this, we propose a runtime verification mechanism, enabling LLMs to query external sources to ground their context and resist manipulation. While this approach enhances robustness, we also acknowledge that many ethical and value dilemmas are philosophically irreducible, posing a long-term, open challenge for the future of AI alignment.