You Didn't Have to Say It Like That: Subliminal Learning from Faithful Paraphrases
AI Summary
Subliminal learning through faithful paraphrases: a teacher model's preferences transfer to the student model even when the paraphrased content explicitly contradicts them.
Key Contributions
- Demonstrates subliminal learning in language models trained on paraphrase data
- Shows that preferences transfer through paraphrases even when the content explicitly contradicts them
- Highlights a latent risk in pipelines that train on self-generated data
Methodology
A student model is trained on paraphrases generated by a teacher model, with aggressive filtering applied to ensure paraphrase fidelity; the student is then evaluated for how much of the teacher's preference it has acquired.
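The filtering step above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names (`passes_filter`, `build_training_set`) and the regex-based filter are hypothetical stand-ins for the teacher/student API calls and the paper's fidelity checks. The point it illustrates is that a content filter can guarantee the target animal never appears in the training data, so any transmitted preference cannot be explained by explicit mentions.

```python
import re

def passes_filter(paraphrase: str, target_animal: str) -> bool:
    """Reject paraphrases that mention the target animal (or a plural variant).

    Hypothetical stand-in for the paper's aggressive fidelity filtering.
    """
    pattern = rf"\b{re.escape(target_animal)}s?\b"
    return re.search(pattern, paraphrase, flags=re.IGNORECASE) is None

def build_training_set(paraphrases: list[str], target_animal: str) -> list[str]:
    """Keep only paraphrases that survive the content filter."""
    return [p for p in paraphrases if passes_filter(p, target_animal)]

# Example: with target "owl", the second candidate is dropped.
candidates = [
    "The meeting was moved to Thursday afternoon.",
    "Owls aside, the meeting was moved to Thursday.",
    "The report is due at the end of the month.",
]
kept = build_training_set(candidates, "owl")
```

Even with such a filter in place, the abstract below reports that the teacher's animal preference still transfers to the student, which is precisely what makes content-based inspection insufficient.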
原文摘要
When language models (students) are trained on synthetic data, they can covertly acquire behavioral traits from the data-generating model (the teacher). Subliminal learning refers to the transmission of traits from a teacher to a student model via training on data unrelated to those traits. Prior work demonstrated this in the training domains of number sequences, code, and math Chain-of-Thought traces, including transmission of misaligned behaviors. We investigate whether transmission occurs through natural language paraphrases with fixed semantic content, and whether content explicitly contradicting the teacher's preference can block it. We find that training on paraphrases from a teacher system-prompted to love a particular animal increases a student's preference for that animal by up to 19 percentage points. This occurs when paraphrased content is semantically unrelated to the animal, or even when it explicitly expresses dislike. The transmission succeeds despite aggressive filtering to ensure paraphrase fidelity. This raises concerns for pipelines where models generate their own training data: content-based inspection cannot detect such transmission, and even preference-contradicting content fails to prevent it.