Is my model perplexed for the right reason? Contrasting LLMs' Benchmark Behavior with Token-Level Perplexity
AI Summary
The paper proposes using token-level perplexity to analyze whether an LLM bases its predictions on the correct linguistic cues, revealing that models may rely on unintended heuristics.
Main Contributions
- Proposes an interpretability framework for LLMs based on token-level perplexity
- Contrasts minimal sentence pairs to measure how strongly models rely on the pivotal linguistic cue
- Experiments show that LLMs rely on heuristics other than the expected linguistic ones
Methodology
By comparing the perplexity distributions of minimal sentence pairs that differ only in a few pivotal tokens, the method analyzes how much an LLM's behavior depends on the targeted linguistic cue.
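The sketch below illustrates the general idea (it is not the authors' code): compute per-token surprisal for both members of a minimal pair with an open-weight causal LM, so that the contribution of the pivotal token can be inspected directly. The model name and the example pair are illustrative assumptions.

```python
# Minimal sketch, assuming a Hugging Face causal LM; model and sentences are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any open-weight LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def token_surprisals(sentence):
    """Return (token, surprisal) pairs, where surprisal = -log p(token | prefix)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict the token at position i+1, hence the shift.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    surprisal = -log_probs[torch.arange(targets.size(0)), targets]
    return list(zip(tokenizer.convert_ids_to_tokens(targets), surprisal.tolist()))

# Hypothetical minimal pair differing in one 'pivotal' token (agreement on the verb).
good = "The keys to the cabinet are on the table."
bad  = "The keys to the cabinet is on the table."

for sent in (good, bad):
    per_token = token_surprisals(sent)
    ppl = torch.tensor([s for _, s in per_token]).mean().exp().item()
    print(f"{sent}\n  perplexity = {ppl:.2f}")
    for tok, s in per_token:
        print(f"  {tok:>12s}  {s:6.2f}")
```

If the model uses the intended cue, most of the perplexity gap between the two sentences should be attributable to the pivotal token; the paper's finding is that in practice it never fully is.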
Original Abstract
Standard evaluations of large language models (LLMs) focus on task performance, offering limited insight into whether correct behavior reflects appropriate underlying mechanisms and risking confirmation bias. We introduce a simple, principled interpretability framework based on token-level perplexity to test whether models rely on linguistically relevant cues. By comparing perplexity distributions over minimal sentence pairs differing in one or a few `pivotal' tokens, our method enables precise, hypothesis-driven analysis without relying on unstable feature-attribution techniques. Experiments on controlled linguistic benchmarks with several open-weight LLMs show that, while linguistically important tokens influence model behavior, they never fully explain perplexity shifts, revealing that models rely on heuristics other than the expected linguistic ones.