LLM Reasoning relevance: 6/10

Accelerated Predictive Coding Networks via Direct Kolen-Pollack Feedback Alignment

Davide Casnici, Martin Lefebvre, Justin Dauwels, Charlotte Frenkel
arXiv: 2602.15571v1 Published: 2026-02-17 Updated: 2026-02-17

AI Summary

Proposes the DKP-PC algorithm, which accelerates the training of predictive coding networks via direct feedback alignment, improving efficiency and scalability.

Main Contributions

  • Proposes the DKP-PC algorithm, addressing the feedback-delay and exponential-decay problems in predictive coding
  • Introduces learnable feedback connections that deliver the output-layer error directly to all hidden layers
  • Demonstrates experimentally that DKP-PC matches or exceeds standard PC in accuracy while improving latency and computational efficiency

Methodology

Combines direct feedback alignment with the Kolen-Pollack algorithm to build learnable feedback connections that transmit error signals directly from the output layer to every hidden layer, reducing the time complexity of error propagation.
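The idea above can be sketched in a few lines of NumPy: a toy MLP where output error reaches every hidden layer through its own learnable feedback matrix (the O(1) pathway), rather than a layer-by-layer backward sweep. This is a minimal illustration under assumed update rules, not the authors' exact DKP-PC equations; the specific network sizes, learning rate, and the form of the feedback update are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-hidden-layer MLP on a linear-teacher task, trained with
# direct feedback alignment plus Kolen-Pollack-style *learnable*
# feedback matrices B1, B2 that carry the output error straight to
# each hidden layer. Sizes and update rules are illustrative.
d_in, d_h, d_out, n = 8, 16, 4, 64
X = rng.normal(size=(n, d_in))
Y = X @ rng.normal(size=(d_in, d_out))   # targets from a linear teacher

W1 = 0.1 * rng.normal(size=(d_in, d_h))
W2 = 0.1 * rng.normal(size=(d_h, d_h))
W3 = 0.1 * rng.normal(size=(d_h, d_out))
# Direct feedback matrices: output error -> hidden layers 1 and 2.
B1 = 0.1 * rng.normal(size=(d_out, d_h))
B2 = 0.1 * rng.normal(size=(d_out, d_h))

lr, wd = 1e-2, 1e-3                      # learning rate, weight decay

def relu(z):
    return np.maximum(z, 0.0)

for step in range(500):
    h1 = relu(X @ W1)
    h2 = relu(h1 @ W2)
    e = h2 @ W3 - Y                      # output error, shape (n, d_out)
    mse = float(np.mean(e ** 2))
    if step == 0:
        mse_init = mse

    # Direct error delivery: no layer-by-layer backward sweep.
    d2 = (e @ B2) * (h2 > 0)
    d1 = (e @ B1) * (h1 > 0)

    # Local forward-weight updates.
    W3 -= lr * h2.T @ e / n
    W2 -= lr * h1.T @ d2 / n
    W1 -= lr * X.T @ d1 / n

    # Kolen-Pollack-style feedback learning (assumed form): each B
    # receives the transpose of an error/activity product plus weight
    # decay, nudging it into alignment with the forward pathway.
    B2 -= lr * e.T @ h2 / n + wd * B2
    B1 -= lr * e.T @ h1 / n + wd * B1

mse_final = mse
print(f"MSE: {mse_init:.3f} -> {mse_final:.3f}")
```

Note that every delta `d1`, `d2` is computed from the output error `e` in a single step, independent of network depth, which is the source of the O(1) error-propagation claim.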

Original Abstract

Predictive coding (PC) is a biologically inspired algorithm for training neural networks that relies only on local updates, allowing parallel learning across layers. However, practical implementations face two key limitations: error signals must still propagate from the output to early layers through multiple inference-phase steps, and feedback decays exponentially during this process, leading to vanishing updates in early layers. We propose direct Kolen-Pollack predictive coding (DKP-PC), which simultaneously addresses both feedback delay and exponential decay, yielding a more efficient and scalable variant of PC while preserving update locality. Leveraging direct feedback alignment and direct Kolen-Pollack algorithms, DKP-PC introduces learnable feedback connections from the output layer to all hidden layers, establishing a direct pathway for error transmission. This yields an algorithm that reduces the theoretical error propagation time complexity from O(L), with L being the network depth, to O(1), removing depth-dependent delay in error signals. Moreover, empirical results demonstrate that DKP-PC achieves performance at least comparable to, and often exceeding, that of standard PC, while offering improved latency and computational performance, supporting its potential for custom hardware-efficient implementations.

Tags

Predictive coding  Feedback alignment  Neural networks  Deep learning  Optimization

arXiv Category

cs.LG