Concept-based explanations of Segmentation and Detection models in Natural Disaster Management
AI Summary
Proposes a concept-based explanation framework for segmentation and detection models in natural disaster management, improving model transparency and trustworthiness.
Main Contributions
- Extends LRP explanations to PIDNet's fusion layers
- Applies PCX to provide local and global explanations at the concept level
- Near real-time inference on resource-constrained platforms
Methodology
Explains PIDNet using the extended LRP and applies PCX to interpret flood segmentation and vehicle detection results, providing concept-level visual explanations.
Original Abstract
Deep learning models for flood and wildfire segmentation and object detection enable precise, real-time disaster localization when deployed on embedded drone platforms. However, in natural disaster management, the lack of transparency in their decision-making process hinders human trust required for emergency response. To address this, we present an explainability framework for understanding flood segmentation and car detection predictions on the widely used PIDNet and YOLO architectures. More specifically, we introduce a novel redistribution strategy that extends Layer-wise Relevance Propagation (LRP) explanations for sigmoid-gated element-wise fusion layers. This extension allows LRP relevances to flow through the fusion modules of PIDNet, covering the entire computation graph back to the input image. Furthermore, we apply Prototypical Concept-based Explanations (PCX) to provide both local and global explanations at the concept level, revealing which learned features drive the segmentation and detection of specific disaster semantic classes. Experiments on a publicly available flood dataset show that our framework provides reliable and interpretable explanations while maintaining near real-time inference capabilities, rendering it suitable for deployment on resource-constrained platforms, such as Unmanned Aerial Vehicles (UAVs).
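The abstract's core technical idea is redistributing LRP relevance through a sigmoid-gated element-wise fusion, out = s * a + (1 - s) * b with s = sigmoid(gate). The sketch below illustrates one plausible epsilon-stabilized proportional redistribution rule in NumPy; the function name, signature, and the specific rule are illustrative assumptions, not the paper's exact strategy.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lrp_fuse_backward(R_out, a, b, gate_logits, eps=1e-9):
    """Redistribute output relevance R_out through a sigmoid-gated
    element-wise fusion: out = s * a + (1 - s) * b, s = sigmoid(gate_logits).

    Sketch of a plausible LRP-epsilon-style rule (an assumption, not the
    paper's exact redistribution strategy): each branch receives relevance
    in proportion to its gated contribution to the fused output.
    """
    s = sigmoid(gate_logits)
    za = s * a               # gated contribution of branch a
    zb = (1.0 - s) * b       # gated contribution of branch b
    z = za + zb              # fused output (pre any further activation)
    # Epsilon stabilization avoids division by near-zero denominators
    norm = R_out / (z + eps * np.sign(z))
    return za * norm, zb * norm
```

A key property to check is conservation: the relevance assigned to the two branches sums (up to the epsilon term) to the relevance arriving at the fusion output, which is what allows relevances to keep flowing through PIDNet's fusion modules back to the input image.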