LLM Reasoning relevance: 8/10

Kalman Linear Attention: Parallel Bayesian Filtering For Efficient Language Modelling and State Tracking

Vaisakh Shaj, Cameron Barker, Aidan Scannell, Andras Szecsenyi, Elliot J. Crowley, Amos Storkey
arXiv: 2602.10743v1 Published: 2026-02-11 Updated: 2026-02-11

AI Summary

Proposes Kalman Linear Attention (KLA), a parallel Bayesian filtering approach that improves the efficiency and expressivity of language modelling and state tracking.

Main Contributions

  • Introduces the KLA layer, a new neural sequence-modelling primitive
  • Reparameterises the Kalman filter in information form, enabling parallel computation via an associative scan
  • KLA matches or outperforms modern SSM and GLA models on language-modelling tasks

Methodology

Reframes sequence modelling as a probabilistic inference problem, using the information-form Kalman filter to perform parallel Bayesian filtering, from which the KLA layer is constructed.
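The key insight (why information form parallelises) can be sketched in a toy setting. Under simplifying assumptions of my own, not the paper's (a scalar state with identity dynamics, no process noise, and direct noisy observations), the Kalman update becomes purely additive in information space: each observation just adds a contribution to the precision and information vector. Addition is associative, so the whole filter is a prefix scan. This is only an illustration of the parallelisation trick, not the actual KLA parameterisation.

```python
from itertools import accumulate

# Toy information-form Kalman filter (assumptions mine: scalar state x,
# identity dynamics, no process noise, observations y_t = x + r_t with
# r_t ~ N(0, R)). In information form the belief is (precision lam,
# information eta = lam * mean), and each update is purely additive.

def info_updates(ys, R):
    # Each observation contributes (1/R, y/R) to (precision, information).
    return [(1.0 / R, y / R) for y in ys]

def combine(a, b):
    # Associative combination: information contributions simply add,
    # so this scan can run in parallel (e.g. a Blelloch-style scan).
    return (a[0] + b[0], a[1] + b[1])

def parallel_filter(ys, prior_mean=0.0, prior_var=10.0, R=0.5):
    # Prior belief in information form.
    lam0, eta0 = 1.0 / prior_var, prior_mean / prior_var
    # Prefix scan over per-step contributions; `accumulate` runs
    # sequentially here, but associativity licenses a parallel scan.
    states = accumulate(info_updates(ys, R), combine,
                        initial=(lam0, eta0))
    # Convert each belief back to (posterior mean, posterior variance).
    return [(eta / lam, 1.0 / lam) for lam, eta in states]
```

In a framework with a parallel associative scan (e.g. `jax.lax.associative_scan`), the same `combine` gives all T posteriors in O(log T) depth instead of O(T) sequential steps, which is what makes time-parallel training possible.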

Original Abstract

State-space language models such as Mamba and gated linear attention (GLA) offer efficient alternatives to transformers due to their linear complexity and parallel training, but often lack the expressivity and robust state-tracking needed for complex reasoning. We address these limitations by reframing sequence modelling through a probabilistic lens, using Bayesian filters as a core primitive. While classical filters such as Kalman filters provide principled state estimation and uncertainty tracking, they are typically viewed as inherently sequential. We show that reparameterising the Kalman filter in information form enables its updates to be computed via an associative scan, allowing efficient parallel training. Building on this insight, we introduce the Kalman Linear Attention (KLA) layer, a neural sequence-modelling primitive that performs time-parallel probabilistic inference while maintaining explicit belief-state uncertainty. KLA offers strictly more expressive nonlinear updates and gating than GLA variants while retaining their computational advantages. On language modelling tasks, KLA matches or outperforms modern SSMs and GLAs across representative discrete token-manipulation and state-tracking benchmarks.

Tags

Kalman Filter · Linear Attention · State-Space Model · Bayesian Filtering · Sequence Modeling

arXiv Category

cs.LG