Multimodal Learning (Relevance: 8/10)

A Neuro-Symbolic System for Interpretable Multimodal Physiological Signals Integration in Human Fatigue Detection

Mohammadreza Jamalifard, Yaxiong Lei, Parasto Azizinezhad, Javier Fumanal-Idocin, Javier Andreu-Perez
arXiv: 2603.24358v1 Published: 2026-03-25 Updated: 2026-03-25

AI Summary

Proposes a neuro-symbolic system for interpretable fusion of multimodal physiological signals to detect human fatigue.

Key Contributions

  • Proposes a neuro-symbolic architecture that learns interpretable physiological concepts
  • Combines the concepts with differentiable approximate reasoning rules
  • Introduces a concept-fidelity metric for model evaluation

Methodology

Attention-based encoders learn physiological concepts from eye-tracking and fNIRS signals, which are then fused through differentiable rules.
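The fusion step can be illustrated as differentiable fuzzy-rule evaluation: each concept activation passes through a soft (sigmoid) threshold, and the resulting truth values are combined with a t-norm. This is a minimal sketch under assumed forms; the thresholds, the `beta` sharpness parameter, the example activations, and the specific rule are illustrative, not the paper's actual values.

```python
import numpy as np

def soft_threshold(x, tau, beta=10.0):
    """Sigmoid soft threshold: maps a concept activation x to a truth
    value in [0, 1] that rises smoothly around the learned threshold tau."""
    return 1.0 / (1.0 + np.exp(-beta * (x - tau)))

def product_and(a, b):
    """Product t-norm: a fully differentiable fuzzy conjunction."""
    return a * b

def lukasiewicz_and(a, b):
    """Lukasiewicz t-norm: max(0, a + b - 1), differentiable almost everywhere."""
    return max(0.0, a + b - 1.0)

# Hypothetical rule: "fatigued IF oculomotor slowing AND gaze instability".
# The activation values (0.8, 0.7) and thresholds are made-up examples.
oculo = soft_threshold(0.8, tau=0.5)
gaze = soft_threshold(0.7, tau=0.6)
firing_product = product_and(oculo, gaze)
firing_luk = lukasiewicz_and(oculo, gaze)
```

Since `(1 - a) * (1 - b) >= 0` for truth values in [0, 1], the product t-norm is always at least as large as the Lukasiewicz one; swapping between them changes only the conjunction operator, which is the kind of ablation the abstract reports.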

Original Abstract

We propose a neuro-symbolic architecture that learns four interpretable physiological concepts (oculomotor dynamics, gaze stability, prefrontal hemodynamics, and multimodal) from eye-tracking and neural hemodynamics (functional near-infrared spectroscopy, fNIRS) windows using attention-based encoders, and combines them with differentiable approximate reasoning rules using learned weights and soft thresholds, to address both rigid hand-crafted rules and the lack of subject-level alignment diagnostics. We apply this system to fatigue classification from multimodal physiological signals, a domain that requires models that are accurate and interpretable, with internal reasoning that can be inspected for safety-critical use. In leave-one-subject-out evaluation on 18 participants (560 samples), the method achieves 72.1% +/- 12.3% accuracy, comparable to tuned baselines while exposing concept activations and rule firing strengths. Ablations indicate gains from participant-specific calibration (+5.2 pp), a modest drop without the fNIRS concept (-1.2 pp), and slightly better performance with Lukasiewicz operators than product (+0.9 pp). We also introduce concept fidelity, an offline per-subject audit metric from held-out labels, which correlates strongly with per-subject accuracy (r=0.843, p < 0.0001).
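One plausible reading of the concept-fidelity audit is a per-subject correlation between a concept's activations and that subject's held-out labels. The sketch below uses a Pearson correlation; this is an assumed formulation for illustration, not the paper's exact definition, and the toy data are invented.

```python
import numpy as np

def concept_fidelity(activations, labels):
    """Per-subject Pearson correlation between a concept's activations
    and the subject's held-out fatigue labels (assumed formulation)."""
    a = np.asarray(activations, dtype=float)
    y = np.asarray(labels, dtype=float)
    a, y = a - a.mean(), y - y.mean()  # center both series
    return float(a @ y / (np.linalg.norm(a) * np.linalg.norm(y)))

# Toy subject: activations that track the labels give fidelity near 1,
# so a low score would flag a subject whose concepts are misaligned.
r = concept_fidelity([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```

Computed offline per subject, such a score gives exactly the kind of subject-level alignment diagnostic the abstract says is otherwise missing, and the paper reports it correlates strongly with per-subject accuracy.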

Tags

Neuro-Symbolic Systems, Multimodal Learning, Fatigue Detection, Physiological Signals

arXiv Categories

cs.HC cs.LG