LLM Reasoning relevance: 6/10

Coalgebras for categorical deep learning: Representability and universal approximation

Dragan Mašulović
arXiv: 2603.03227v1 Published: 2026-03-03 Updated: 2026-03-03

AI Summary

The paper develops a coalgebraic foundation for categorical deep learning and establishes a universal approximation theorem in that setting.

Main Contributions

  • Develops a coalgebraic foundation for equivariant representations in deep learning
  • Proves a universal approximation theorem within the category-theoretic framework
  • Builds a bridge between abstract specifications of invariant behavior and their concrete realization in neural architectures

Methodology

The paper uses functors and coalgebras from category theory to formalize group actions and equivariant maps, and proves the approximation theorem on that basis.
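To make the coalgebraic reading of group actions concrete, here is a minimal sketch (not from the paper; the group Z/3, the rotation action, and all function names are illustrative assumptions). A G-set can be viewed as a coalgebra for the functor F(X) = X^G, and an equivariant map is then exactly a coalgebra homomorphism:

```python
from itertools import product

# A G-set as an F-coalgebra for F(X) = X^G: the structure map
# alpha : X -> (G -> X) sends each point to its orbit map.
# Illustrative choice: G = Z/3 acting on length-3 tuples by rotation.
G = [0, 1, 2]

def alpha(x):
    """Coalgebra structure map: g |-> (x rotated left by g positions)."""
    return lambda g: x[g:] + x[:g]

def is_homomorphism(f, alpha, beta, points, group):
    """Coalgebra homomorphism condition, i.e. equivariance of f:
    beta(f(x))(g) == f(alpha(x)(g)) for all points x and group elements g."""
    return all(beta(f(x))(g) == f(alpha(x)(g))
               for x in points for g in group)

# Pointwise doubling commutes with rotation, so it is equivariant.
double = lambda x: tuple(2 * v for v in x)
points = list(product([0, 1], repeat=3))

print(is_homomorphism(double, alpha, alpha, points, G))  # True
```

The point of the reformulation is that nothing in `is_homomorphism` mentions groups: the same condition makes sense for any endofunctor, which is the generalization the paper builds on.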

Original Abstract

Categorical deep learning (CDL) has recently emerged as a framework that leverages category theory to unify diverse neural architectures. While geometric deep learning (GDL) is grounded in the specific context of invariants of group actions, CDL aims to provide domain-independent abstractions for reasoning about models and their properties. In this paper, we contribute to this program by developing a coalgebraic foundation for equivariant representation in deep learning, as classical notions of group actions and equivariant maps are naturally generalized by the coalgebraic formalism. Our first main result demonstrates that, given an embedding of data sets formalized as a functor from SET to VECT, and given a notion of invariant behavior on data sets modeled by an endofunctor on SET, there is a corresponding endofunctor on VECT that is compatible with the embedding in the sense that this lifted functor recovers the analogous notion of invariant behavior on the embedded data. Building on this foundation, we then establish a universal approximation theorem for equivariant maps in this generalized setting. We show that continuous equivariant functions can be approximated within our coalgebraic framework for a broad class of symmetries. This work thus provides a categorical bridge between the abstract specification of invariant behavior and its concrete realization in neural architectures.
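The first main result stated in the abstract (lifting a notion of invariant behavior along an embedding of SET into VECT) can be sketched in the simplest case, the one-hot embedding into a free vector space; the set X, the permutation sigma, and all names below are illustrative assumptions, not the paper's construction:

```python
# Sketch: embed a finite set X into the free vector space R^X via
# one-hot vectors, and lift a set-level symmetry to a linear map so
# that lifting commutes with the embedding.
X = ["a", "b", "c"]
idx = {x: i for i, x in enumerate(X)}

def embed(x):
    """One-hot embedding SET -> VECT (basis vector of the free space on X)."""
    return [1.0 if i == idx[x] else 0.0 for i in range(len(X))]

# A set-level symmetry: a cyclic permutation of X.
sigma = {"a": "b", "b": "c", "c": "a"}

def lift(v):
    """Lifted linear action: apply the permutation matrix of sigma
    to a vector v in R^X."""
    w = [0.0] * len(X)
    for x in X:
        w[idx[sigma[x]]] += v[idx[x]]
    return w

# Compatibility with the embedding: lift(embed(x)) == embed(sigma(x)).
print(all(lift(embed(x)) == embed(sigma[x]) for x in X))  # True
```

The compatibility check printed at the end is the toy analogue of the abstract's requirement that the lifted endofunctor on VECT recover the original invariant behavior on the embedded data.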

Tags

Categorical Deep Learning, Coalgebra, Universal Approximation, Equivariant Representation

arXiv Categories

cs.LG