LLM Reasoning relevance: 8/10

DeepDFA: Injecting Temporal Logic in Deep Learning for Sequential Subsymbolic Applications

Elena Umili, Francesco Argenziano, Roberto Capobianco
arXiv: 2602.03486v1 published: 2026-02-03 updated: 2026-02-03

AI Summary

DeepDFA improves performance in sequential subsymbolic applications by injecting temporal logic into deep learning.

Key Contributions

  • Proposes DeepDFA, a neurosymbolic framework
  • Models temporal logic (DFAs) as differentiable layers
  • Validates effectiveness on image sequence classification and policy learning in non-Markovian environments

Methodology

DeepDFA integrates deterministic finite automata into neural network architectures as continuous, differentiable layers, enabling symbolic temporal knowledge to be injected and trained end to end with the rest of the network.
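To make the idea concrete, here is a minimal sketch of how a DFA can be relaxed into a differentiable layer: transitions become row-stochastic matrices (one per input symbol), and the current state becomes a probability distribution that is updated by matrix products. This is an illustrative reconstruction under our own assumptions, not the authors' actual implementation; the class name `ProbabilisticDFALayer` and its interface are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ProbabilisticDFALayer:
    """Continuous relaxation of a DFA (illustrative sketch, not the
    paper's code): the state is a probability distribution over DFA
    states, and each symbol indexes a learnable row-stochastic
    transition matrix, so the whole unrolled computation is
    differentiable in the transition logits."""

    def __init__(self, n_states, n_symbols, seed=0):
        rng = np.random.default_rng(seed)
        # Unnormalized transition logits: one (n_states x n_states)
        # matrix per input symbol.
        self.logits = rng.normal(size=(n_symbols, n_states, n_states))

    def forward(self, symbol_probs):
        """symbol_probs: (T, n_symbols) array of soft symbols, e.g.
        produced by a perception network over subsymbolic inputs.
        Returns the state distribution after consuming the sequence."""
        trans = softmax(self.logits, axis=-1)          # (S, Q, Q), rows sum to 1
        state = np.zeros(trans.shape[1])
        state[0] = 1.0                                  # start in state 0
        for p in symbol_probs:
            # Expected transition matrix under the soft symbol,
            # then one probabilistic automaton step.
            step = np.einsum('s,sij->ij', p, trans)
            state = state @ step
        return state
```

With one-hot symbol inputs this reduces exactly to running a (soft-parameterized) DFA; with soft inputs it stays differentiable, which is what allows gradient-based training alongside a neural perception module.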

Original Abstract

Integrating logical knowledge into deep neural network training is still a hard challenge, especially for sequential or temporally extended domains involving subsymbolic observations. To address this problem, we propose DeepDFA, a neurosymbolic framework that integrates high-level temporal logic - expressed as Deterministic Finite Automata (DFA) or Moore Machines - into neural architectures. DeepDFA models temporal rules as continuous, differentiable layers, enabling symbolic knowledge injection into subsymbolic domains. We demonstrate how DeepDFA can be used in two key settings: (i) static image sequence classification, and (ii) policy learning in interactive non-Markovian environments. Across extensive experiments, DeepDFA outperforms traditional deep learning models (e.g., LSTMs, GRUs, Transformers) and novel neuro-symbolic systems, achieving state-of-the-art results in temporal knowledge integration. These results highlight the potential of DeepDFA to bridge subsymbolic learning and symbolic reasoning in sequential tasks.

Tags

Neurosymbolic Learning · Temporal Logic · Deep Learning · Sequence Modeling

arXiv Categories

cs.LG cs.AI