Alignment Verifiability in Large Language Models: Normative Indistinguishability under Behavioral Evaluation
AI 摘要
探讨了有限行为评估下LLM对齐的可验证性问题,提出对齐检验应视为对不可区分类别的估计。
主要贡献
- 形式化了LLM对齐评估中的可识别性问题
- 引入了“规范不可区分性”的概念
- 证明了在有限行为评估下,观察到的行为一致性无法唯一识别潜在的对齐
方法论
通过形式化建模,将对齐评估转化为部分可观测下的可识别性问题,并进行理论分析。
原文摘要
Behavioral evaluation is the dominant paradigm for assessing alignment in large language models (LLMs). In practice, alignment is inferred from performance under finite evaluation protocols - benchmarks, red-teaming suites, or automated pipelines - and observed compliance is often treated as evidence of underlying alignment. This inference step, from behavioral evidence to claims about latent alignment properties, is typically implicit and rarely analyzed as an inference problem in its own right. We study this problem formally. We frame alignment evaluation as an identifiability question under partial observability and allow agent behavior to depend on information correlated with the evaluation regime. Within this setting, we introduce the Alignment Verifiability Problem and the notion of Normative Indistinguishability, capturing when distinct latent alignment hypotheses induce identical distributions over all evaluator-accessible signals. Our main result is a negative but sharply delimited identifiability theorem. Under finite behavioral evaluation and evaluation-aware agents, observed behavioral compliance does not uniquely identify latent alignment. That is, even idealized behavioral evaluation cannot, in general, certify alignment as a latent property. We further show that behavioral alignment tests should be interpreted as estimators of indistinguishability classes rather than verifiers of alignment. Passing increasingly stringent tests may reduce the space of compatible hypotheses, but cannot collapse it to a singleton under the stated conditions. This reframes alignment benchmarks as providing upper bounds on observable compliance within a regime, rather than guarantees of underlying alignment.