Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic
AI Summary
Distilling formal logic into neural spaces: a kernel-alignment approach for efficiently learning continuous neural representations of Signal Temporal Logic.
Key Contributions
- Proposes a method for distilling a symbolic robustness kernel into a Transformer encoder
- Introduces a continuous, kernel-weighted geometric alignment objective
- Enables efficient, scalable neuro-symbolic reasoning and formula reconstruction
Methodology
A teacher-student setup distills the symbolic robustness kernel into a Transformer encoder, supervised by a continuous, kernel-weighted geometric alignment objective.
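The kernel-weighted alignment idea can be illustrated with a toy loss: push the student encoder's pairwise similarities towards the teacher kernel, weighting each pairwise error by the kernel value so that semantically close formula pairs are penalised more for misalignment. This is a minimal sketch under stated assumptions (cosine similarity for the student geometry, absolute kernel values as weights), not the paper's actual objective.

```python
import numpy as np

def kernel_weighted_alignment_loss(Z, K, eps=1e-8):
    """Toy kernel-weighted geometric alignment loss (illustrative sketch).

    Z : (n, d) array of student embeddings for n formulae.
    K : (n, n) symmetric teacher kernel (e.g. an STL robustness kernel).

    Assumption: the student geometry is measured by cosine similarity and
    the weights are |K|; the paper's exact weighting scheme may differ.
    """
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + eps)
    S = Zn @ Zn.T                        # student similarity matrix
    W = np.abs(K)                        # kernel-derived weights (assumption)
    return float(np.sum(W * (S - K) ** 2) / np.sum(W))
```

With `K = I` and orthonormal embeddings the loss vanishes, while a kernel that declares all formulae equivalent (`K` all ones) yields a large penalty for orthogonal embeddings, matching the intuition that errors on semantically close pairs cost more.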
Original Abstract
We introduce a framework for learning continuous neural representations of formal specifications by distilling the geometry of their semantics into a latent space. Existing approaches rely either on symbolic kernels -- which preserve behavioural semantics but are computationally prohibitive, anchor-dependent, and non-invertible -- or on syntax-based neural embeddings that fail to capture underlying structures. Our method bridges this gap: using a teacher-student setup, we distill a symbolic robustness kernel into a Transformer encoder. Unlike standard contrastive methods, we supervise the model with a continuous, kernel-weighted geometric alignment objective that penalizes errors in proportion to their semantic discrepancies. Once trained, the encoder produces embeddings in a single forward pass, effectively mimicking the kernel's logic at a fraction of its computational cost. We apply our framework to Signal Temporal Logic (STL), demonstrating that the resulting neural representations faithfully preserve the semantic similarity of STL formulae, accurately predict robustness and constraint satisfaction, and remain intrinsically invertible. Our proposed approach enables highly efficient, scalable neuro-symbolic reasoning and formula reconstruction without repeated kernel computation at runtime.
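For readers unfamiliar with STL's quantitative semantics, the "robustness" the abstract refers to is a real-valued margin of satisfaction: positive when a signal satisfies a formula, negative when it violates it. The sketch below implements the standard discrete-time robustness of a predicate and the bounded `always` (G) and `eventually` (F) operators; it is independent of the paper's learned encoder and kernel.

```python
import numpy as np

# Standard STL quantitative semantics on a discrete signal (sketch).
def rob_pred(signal, c):
    """Robustness of the predicate x > c at each time step: x(t) - c."""
    return signal - c

def rob_always(rho, window):
    """Robustness of G_[0,window](phi): worst case over the window."""
    return np.array([rho[t:t + window + 1].min()
                     for t in range(len(rho) - window)])

def rob_eventually(rho, window):
    """Robustness of F_[0,window](phi): best case over the window."""
    return np.array([rho[t:t + window + 1].max()
                     for t in range(len(rho) - window)])

signal = np.array([0.5, 1.2, 0.8, 1.5, 0.3])
rho = rob_pred(signal, 1.0)          # predicate x > 1.0
g = rob_always(rho, 2)               # G_[0,2](x > 1.0)
f = rob_eventually(rho, 2)           # F_[0,2](x > 1.0)
```

A symbolic kernel over STL formulae is typically built by comparing such robustness traces across a bank of signals, which is exactly the computation the learned encoder is meant to amortise into a single forward pass.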