LLM Reasoning relevance: 7/10

Inner Loop Inference for Pretrained Transformers: Unlocking Latent Capabilities Without Training

Jonathan Lys, Vincent Gripon, Bastien Pasdeloup, Lukas Mauch, Fabien Cardinaux, Ghouthi Boukli Hacene
arXiv: 2602.14759v1 Published: 2026-02-16 Updated: 2026-02-16

AI Summary

Improves the performance of pretrained language models by cyclically reusing Transformer blocks at inference time.

Key Contributions

  • Proposes an inference-time inner-looping method
  • Improves performance without any training
  • Analyzes how latent representations evolve across the loop

Methodology

At inference time, a selected range of blocks in a pretrained Transformer is applied repeatedly, iteratively refining the propagated latent representation.
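The idea can be sketched with a minimal, dependency-free toy model. Each "block" below is a residual update h → h + f(h), mimicking the residual path of a Transformer block; after the last block of a selected range, that range is re-applied a few extra times. All names here (`make_block`, `forward_with_inner_loop`, `extra_loops`) are illustrative, not from the paper, and the toy blocks stand in for frozen pretrained layers.

```python
import math

def make_block(scale):
    # Toy residual block: h + scale * tanh(h), mimicking x + F(x).
    return lambda h: [x + scale * math.tanh(x) for x in h]

def forward_with_inner_loop(blocks, h, loop_start, loop_end, extra_loops):
    """Apply blocks in order; after block loop_end - 1, re-apply
    blocks[loop_start:loop_end] extra_loops more times (inner looping)."""
    for i, block in enumerate(blocks):
        h = block(h)
        if i == loop_end - 1:
            for _ in range(extra_loops):
                for b in blocks[loop_start:loop_end]:
                    h = b(h)
    return h

blocks = [make_block(0.1 * (i + 1)) for i in range(4)]
hidden = [0.5, -1.0, 2.0]

# extra_loops=0 is the ordinary forward pass; extra_loops=2 prolongs refinement.
baseline = forward_with_inner_loop(blocks, hidden, 1, 3, extra_loops=0)
looped = forward_with_inner_loop(blocks, hidden, 1, 3, extra_loops=2)
print(len(looped) == len(baseline))  # True: same dimensionality, deeper refinement
```

Because each update is residual, the latent state keeps its shape while the loop extends test-time computation, which is the sense in which the paper speaks of "prolonging refinement" in a frozen model.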

Original Abstract

Deep Learning architectures, and in particular Transformers, are conventionally viewed as a composition of layers. These layers are actually often obtained as the sum of two contributions: a residual path that copies the input and the output of a Transformer block. As a consequence, the inner representations (i.e. the input of these blocks) can be interpreted as iterative refinement of a propagated latent representation. Under this lens, many works suggest that the inner space is shared across layers, meaning that tokens can be decoded at early stages. Mechanistic interpretability even goes further by conjecturing that some layers act as refinement layers. Following this path, we propose inference-time inner looping, which prolongs refinement in pretrained off-the-shelf language models by repeatedly re-applying a selected block range. Across multiple benchmarks, inner looping yields modest but consistent accuracy improvements. Analyses of the resulting latent trajectories suggest more stable state evolution and continued semantic refinement. Overall, our results suggest that additional refinement can be obtained through simple test-time looping, extending computation in frozen pretrained models.

Tags

Transformer, Inference, Looping, Latent Representation

arXiv Categories

cs.LG cs.AI