AI Agents relevance: 8/10

Differentiable Modal Logic for Multi-Agent Diagnosis, Orchestration and Communication

Antonin Sulc
arXiv: 2602.12083v1 Published: 2026-02-12 Updated: 2026-02-12

AI Summary

Proposes differentiable modal logic for diagnosis, orchestration, and communication in multi-agent systems, enabling neurosymbolic debugging.

Key Contributions

  • Interpretable learned structures in which trust and causality are explicit parameters
  • Knowledge injection via differentiable axioms to guide learning from sparse data
  • Compositional multi-modal reasoning combining epistemic, temporal, and deontic constraints
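The second contribution, axioms as differentiable objectives, can be sketched in miniature. The snippet below (a hypothetical illustration, not the paper's implementation; the trust values are made up) encodes a soft trust-transitivity axiom, "if i trusts k and k trusts j, then i should trust j," as a hinge penalty that could be added to a data-fitting loss so that gradient descent nudges the learned trust matrix toward axiom-consistent values:

```python
import numpy as np

# Hypothetical trust matrix (entries in [0, 1]) learned from behavioral data.
T = np.array([
    [1.0, 0.9, 0.2],
    [0.1, 1.0, 0.8],
    [0.0, 0.3, 1.0],
])

def transitivity_penalty(T):
    """Soft trust-transitivity axiom as a differentiable penalty.
    Violation per path (i, k, j): relu(min(T[i,k], T[k,j]) - T[i,j])."""
    chained = np.minimum(T[:, :, None], T[None, :, :])   # chained[i, k, j]
    violation = np.maximum(chained - T[:, None, :], 0.0)  # hinge on each path
    return violation.sum()

print(round(transitivity_penalty(T), 3))  # → 0.7
```

Here the nonzero penalty comes from the paths 0→1→2 and 2→1→0, where chained trust exceeds direct trust; because the penalty is built from min/relu over trust entries, it is (sub)differentiable and can guide learning even when direct behavioral evidence is sparse.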

Methodology

Implements differentiable modal logic (DML) via Modal Logical Neural Networks (MLNNs), learning trust networks, causal chains, and regulatory boundaries from behavioral data.
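The paper's MLNN implementation lives in its notebooks; purely as an illustration of the core idea (a hypothetical toy setup, not the authors' code), the sketch below makes pairwise trust an explicit learnable parameter, defines a soft epistemic "knows" operator as a trust-weighted average of agent reports, and turns the contradiction between derived knowledge and observed outcomes into a loss minimized by gradient descent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy scenario (hypothetical): three agents each report a truth value in
# [0, 1] for one proposition; agent 1 contradicts the other two.
reports = np.array([0.9, 0.1, 0.95])
observed = 0.9  # behavioral ground truth for the proposition

# Learnable trust logits: trust[i, j] = how much agent i trusts agent j.
logits = np.random.default_rng(0).normal(size=(3, 3))

def knows(logits):
    """Soft epistemic operator: K_i(phi) = trust-weighted average of reports."""
    trust = sigmoid(logits)
    trust = trust / trust.sum(axis=1, keepdims=True)  # rows sum to 1
    return trust @ reports

def loss(logits):
    """Contradiction between derived knowledge and observation, as MSE."""
    return np.mean((knows(logits) - observed) ** 2)

# Finite-difference gradient descent (a dependency-free stand-in for autodiff).
eps, lr = 1e-5, 3.0
for _ in range(500):
    grad = np.zeros_like(logits)
    base = loss(logits)
    for idx in np.ndindex(*logits.shape):
        bumped = logits.copy()
        bumped[idx] += eps
        grad[idx] = (loss(bumped) - base) / eps
    logits -= lr * grad

trust = sigmoid(logits)
trust = trust / trust.sum(axis=1, keepdims=True)
# Trust in the dissenting agent (column 1) drops in every row, and the
# result is readable directly: trust is an explicit parameter, not an
# opaque embedding.
print(trust.round(2))
```

The design point this toy shares with the paper's framing is that the logical contradiction itself is the optimization objective, so the learned trust structure stays interpretable after training.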

Original Abstract

As multi-agent AI systems evolve from simple chatbots to autonomous swarms, debugging semantic failures requires reasoning about knowledge, belief, causality, and obligation, precisely what modal logic was designed to formalize. However, traditional modal logic requires manual specification of relationship structures that are unknown or dynamic in real systems. This tutorial demonstrates differentiable modal logic (DML), implemented via Modal Logical Neural Networks (MLNNs), enabling systems to learn trust networks, causal chains, and regulatory boundaries from behavioral data alone. We present a unified neurosymbolic debugging framework through four modalities: epistemic (who to trust), temporal (when events cause failures), deontic (what actions are permitted), and doxastic (how to interpret agent confidence). Each modality is demonstrated on concrete multi-agent scenarios, from discovering deceptive alliances in diplomacy games to detecting LLM hallucinations, with complete implementations showing how logical contradictions become learnable optimization objectives. Key contributions for the neurosymbolic community: (1) interpretable learned structures where trust and causality are explicit parameters, not opaque embeddings; (2) knowledge injection via differentiable axioms that guide learning with sparse data; (3) compositional multi-modal reasoning that combines epistemic, temporal, and deontic constraints; and (4) practical deployment patterns for monitoring, active control and communication of multi-agent systems. All code provided as executable Jupyter notebooks.

Tags

modal logic, multi-agent system, neurosymbolic AI

arXiv Categories

cs.AI cs.LO