LLM Reasoning relevance: 7/10

Inhibitory normalization of error signals improves learning in neural circuits

Roy Henha Eyono, Daniel Levenstein, Arna Ghosh, Jonathan Cornford, Blake Richards
arXiv: 2603.17676v1 Published: 2026-03-18 Updated: 2026-03-18

AI Summary

Extending inhibition-mediated normalization to backpropagated error signals significantly improves neural network learning performance on an image recognition task.

Key Contributions

  • Characterizes the role of inhibition-mediated normalization in neural network learning.
  • Demonstrates that normalizing backpropagated errors significantly improves learning performance.
  • Highlights the importance of normalizing learning signals in biological neural circuits.

Methodology

Artificial neural networks with separate excitatory and inhibitory neuron populations were constructed, then trained and tested on an image recognition task under variable luminosity conditions.

Original Abstract

Normalization is a critical operation in neural circuits. In the brain, there is evidence that normalization is implemented via inhibitory interneurons and allows neural populations to adjust to changes in the distribution of their inputs. In artificial neural networks (ANNs), normalization is used to improve learning in tasks that involve complex input distributions. However, it is unclear whether inhibition-mediated normalization in biological neural circuits also improves learning. Here, we explore this possibility using ANNs with separate excitatory and inhibitory populations trained on an image recognition task with variable luminosity. We find that inhibition-mediated normalization does not improve learning if normalization is applied only during inference. However, when this normalization is extended to include back-propagated errors, performance improves significantly. These results suggest that if inhibition-mediated normalization improves learning in the brain, it additionally requires the normalization of learning signals.
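The core idea in the abstract — applying the same inhibition-mediated divisive factor to both forward activations and backpropagated errors — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the pooling formula (a constant plus mean absolute activity, standing in for pooled inhibitory-population input) and the `norm_error` helper are assumptions chosen to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def divisive_norm(x, sigma=1.0):
    """Forward pass: divide each unit by pooled activity, as an
    inhibitory population might supply (illustrative formulation)."""
    pool = sigma + np.abs(x).mean(axis=-1, keepdims=True)
    return x / pool, pool

def norm_error(grad_out, pool):
    """Hypothetical extension from the abstract: rescale the
    backpropagated error by the same divisive factor."""
    return grad_out / pool

# The same stimulus at two luminosity levels.
x = rng.normal(size=(4, 16))
y_dim, pool_dim = divisive_norm(x)
y_bright, pool_bright = divisive_norm(50.0 * x)

# Outputs are far less sensitive to luminosity than raw inputs are.
in_ratio = np.linalg.norm(50.0 * x) / np.linalg.norm(x)   # exactly 50
out_ratio = np.linalg.norm(y_bright) / np.linalg.norm(y_dim)
print(f"input norm ratio:  {in_ratio:.1f}")
print(f"output norm ratio: {out_ratio:.1f}")

# Errors arriving from downstream are rescaled the same way, so weight
# updates stay comparable across luminosity conditions.
grad = rng.normal(size=x.shape)
g_dim = norm_error(grad, pool_dim)
g_bright = norm_error(grad, pool_bright)
print(f"error norm ratio:  {np.linalg.norm(g_bright) / np.linalg.norm(g_dim):.2f}")
```

The point of the sketch is the paper's distinction: `divisive_norm` alone normalizes inference, while `norm_error` additionally normalizes the learning signal, which is the variant the abstract reports as improving performance.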

Tags

Neural Networks  Inhibitory Normalization  Error Backpropagation  Image Recognition

arXiv Categories

q-bio.NC cs.AI cs.LG