LLM Reasoning relevance: 8/10

Step-resolved data attribution for looped transformers

Georgios Kaissis, David Mildenberger, Juan Felipe Gomez, Martin J. Menten, Eleni Triantafillou
arXiv: 2602.10097v1 Published: 2026-02-10 Updated: 2026-02-10

AI Summary

For looped transformers, the paper proposes Step-Decomposed Influence, a method for analyzing how training data influences the looped reasoning process.

Key Contributions

  • Proposes the Step-Decomposed Influence (SDI) method
  • Accelerates SDI computation with TensorSketch
  • Validates SDI's effectiveness on looped GPT models and algorithmic reasoning tasks

Methodology

SDI decomposes TracIn into a loop-length influence trajectory by unrolling the recurrent computation graph and attributing influence to specific loop iterations, with a TensorSketch implementation to accelerate the computation.
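As a toy illustration of the step decomposition, the sketch below unrolls a minimal shared-weight loop (h_{t+1} = tanh(W h_t), a stand-in for the shared transformer block) and splits the TracIn dot product into one score per loop iteration. The model, the loss, and the particular attribution rule (dotting each per-step training-gradient contribution against the full test gradient, so the trajectory sums to the ordinary TracIn score) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def loop_forward(W, h0, tau):
    """Unroll a shared-weight loop h_{t+1} = tanh(W h_t) for tau steps."""
    hs = [h0]
    for _ in range(tau):
        hs.append(np.tanh(W @ hs[-1]))
    return hs

def per_step_grads(W, h0, y, tau):
    """Per-iteration contributions g_1..g_tau to dL/dW for the shared W,
    where L = 0.5 * ||h_tau - y||^2; they sum to the full gradient."""
    hs = loop_forward(W, h0, tau)
    delta = hs[-1] - y                       # dL/dh_tau
    grads = []
    for t in reversed(range(tau)):           # backprop through the unrolled graph
        pre = W @ hs[t]                      # pre-activation of iteration t
        dpre = delta * (1.0 - np.tanh(pre) ** 2)
        grads.append(np.outer(dpre, hs[t]))  # this iteration's share of dL/dW
        delta = W.T @ dpre                   # propagate to h_t
    return grads[::-1]                       # ordered: iteration 1..tau

def sdi_trajectory(W, z_train, z_test, tau, lr=0.1):
    """Step-decomposed TracIn at one checkpoint: one score per loop step.
    Summing the trajectory recovers the aggregated TracIn score."""
    g_train = per_step_grads(W, *z_train, tau)
    g_test_full = sum(per_step_grads(W, *z_test, tau))
    return [lr * float(np.sum(gt * g_test_full)) for gt in g_train]
```

In a real looped transformer the per-step gradients would come from one backward pass over the unrolled graph (as here), not from tau separate passes.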

Original Abstract

We study how individual training examples shape the internal computation of looped transformers, where a shared block is applied for $τ$ recurrent iterations to enable latent reasoning. Existing training-data influence estimators such as TracIn yield a single scalar score that aggregates over all loop iterations, obscuring when during the recurrent computation a training example matters. We introduce \textit{Step-Decomposed Influence (SDI)}, which decomposes TracIn into a length-$τ$ influence trajectory by unrolling the recurrent computation graph and attributing influence to specific loop iterations. To make SDI practical at transformer scale, we propose a TensorSketch implementation that never materialises per-example gradients. Experiments on looped GPT-style models and algorithmic reasoning tasks show that SDI scales excellently, matches full-gradient baselines with low error and supports a broad range of data attribution and interpretability tasks with per-step insights into the latent reasoning process.
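The abstract's point about never materialising per-example gradients can be illustrated with the standard TensorSketch construction: the per-example gradient of a linear map is an outer product outer(δ, h), and its count sketch can be computed directly from count sketches of the two factors via an FFT circular convolution. Inner products between sketches then estimate inner products between the full gradients, which is all TracIn-style scores need. The snippet below follows the generic textbook definitions and is not the paper's implementation.

```python
import numpy as np

def count_sketch(x, h, s, D):
    """Count sketch of x into D buckets: coordinate i lands in
    bucket h[i] with random sign s[i]."""
    out = np.zeros(D)
    np.add.at(out, h, s * x)  # unbuffered scatter-add handles bucket collisions
    return out

def tensor_sketch(u, v, h1, s1, h2, s2, D):
    """Sketch of outer(u, v) without ever forming the outer product:
    circular convolution (via FFT) of the two factor count sketches."""
    cu = np.fft.rfft(count_sketch(u, h1, s1, D))
    cv = np.fft.rfft(count_sketch(v, h2, s2, D))
    return np.fft.irfft(cu * cv, n=D)

# <TS(u ⊗ v), TS(a ⊗ b)> is an unbiased estimate of (u·a)(v·b), so
# gradient dot products (as in TracIn/SDI) can be estimated from
# D-dimensional sketches instead of full per-example gradients.
```

The two hash/sign pairs must be shared across all examples so that sketches live in a common space and their inner products remain comparable.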

Tags

Transformer · Recurrent Neural Networks · Interpretability · Data Attribution · Influence Functions

arXiv Categories

cs.LG cs.AI