LLM Reasoning relevance: 6/10

Explainability for Fault Detection System in Chemical Processes

Georgios Gravanis, Dimitrios Kyriakou, Spyros Voutetakis, Simira Papadopoulou, Konstantinos Diamantaras
arXiv: 2602.16341v1 Published: 2026-02-18 Updated: 2026-02-18

AI Summary

The paper compares two XAI methods, Integrated Gradients (IG) and SHAP, applied to an LSTM classifier for fault detection in chemical processes, and analyzes their effectiveness.

Key Contributions

  • Compares the performance of IG and SHAP for fault diagnosis in chemical processes
  • Uses XAI methods to localize the subsystem in which a fault occurred
  • Validates the transferability of the model-agnostic methods to similar problems
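To make the SHAP side of the comparison concrete, here is a minimal, self-contained sketch (not from the paper) that computes exact Shapley values for a toy three-feature "fault score" model by enumerating feature coalitions, with absent features fixed at a baseline. The model, feature values, and baseline are illustrative assumptions; practical SHAP libraries approximate this computation for real classifiers.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Toy fault-score model over three sensor features (illustrative only);
    # the x[0]*x[1] term adds a feature interaction so attributions are non-trivial.
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2] + 0.3 * x[0] * x[1]

def shapley_values(x, baseline):
    # Exact Shapley values: the weighted average marginal contribution of each
    # feature over all coalitions; "absent" features are fixed at the baseline.
    n = len(x)

    def value(subset):
        point = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(point)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += w * (value(set(S) | {i}) - value(set(S)))
        phis.append(phi)
    return phis

x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
phis = shapley_values(x, baseline)
# Efficiency axiom: Shapley values sum to model(x) - model(baseline)
print(phis, sum(phis), model(x) - model(baseline))
```

The efficiency check at the end is what makes Shapley-based attributions attractive for fault diagnosis: the per-feature scores exactly decompose the change in the classifier's output relative to a baseline, so large attributions point at the sensors driving the fault decision.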

Methodology

Two XAI methods, IG and SHAP, are used to explain the decisions of the LSTM classifier, and the approach is validated on the Tennessee Eastman Process (TEP) benchmark.

Original Abstract

In this work, we apply and compare two state-of-the-art eXplainable Artificial Intelligence (XAI) methods, the Integrated Gradients (IG) and the SHapley Additive exPlanations (SHAP), that explain the fault diagnosis decisions of a highly accurate Long Short-Term Memory (LSTM) classifier. The classifier is trained to detect faults in a benchmark non-linear chemical process, the Tennessee Eastman Process (TEP). It is highlighted how XAI methods can help identify the subsystem of the process where the fault occurred. Using our knowledge of the process, we note that in most cases the same features are indicated as the most important for the decision, while in some cases the SHAP method seems to be more informative and closer to the root cause of the fault. Finally, since the used XAI methods are model-agnostic, the proposed approach is not limited to the specific process and can also be used in similar problems.

Tags

XAI LSTM Fault Detection Chemical Processes SHAP Integrated Gradients

arXiv Category

cs.LG