SHAPCA: Consistent and Interpretable Explanations for Machine Learning Models on Spectroscopy Data
AI Summary
SHAPCA combines PCA dimensionality reduction with SHAP to explain machine learning models on spectroscopy data, providing consistent and interpretable feature-importance analysis.
Key Contributions
- Proposes the SHAPCA framework for explaining machine learning models on spectroscopy data
- Combines PCA and SHAP to provide explanations in the original input space
- Improves the consistency of feature-importance explanations
Methodology
PCA is used for dimensionality reduction, followed by SHAP values for post hoc explanation, enabling analysis of both the global and local behaviour of the model.
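The pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it fits PCA on synthetic collinear "spectra", trains a linear model on the PC scores, computes exact SHAP values in PC space (for a linear model with centred features, phi_j = w_j * z_j), and then redistributes each component's attribution across the original channels. The squared-loadings redistribution used here is one plausible mapping chosen because PCA loading rows are unit-norm, so it preserves SHAP's efficiency property; the paper's exact back-projection may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic "spectra": 200 samples x 50 strongly collinear channels.
base = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 50))
X = base @ mixing + 0.05 * rng.normal(size=(200, 50))
y = X[:, 10] - 0.5 * X[:, 30] + 0.1 * rng.normal(size=200)

# Step 1: PCA for dimensionality reduction.
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)  # PC scores are centred on the training data

# Step 2: train a model on the PC scores.
model = LinearRegression().fit(Z, y)

# Step 3: exact SHAP values for a linear model with centred inputs:
# phi_j = w_j * (z_j - E[z_j]) and E[z_j] = 0 for PCA scores.
phi_pc = model.coef_ * Z  # shape (n_samples, n_components)

# Step 4: redistribute each PC's attribution over the original channels
# in proportion to its squared loadings. Rows of pca.components_ have
# unit norm, so each sample's attributions still sum to the same total.
phi_orig = phi_pc @ (pca.components_ ** 2)  # (n_samples, n_channels)

# Efficiency check: attributions sum to prediction minus the base value.
pred = model.predict(Z)
base_value = pred.mean()
assert np.allclose(phi_pc.sum(axis=1), pred - base_value)
assert np.allclose(phi_orig.sum(axis=1), pred - base_value)

# Global importance per spectral channel (mean absolute attribution).
global_importance = np.abs(phi_orig).mean(axis=0)
```

For nonlinear models the analytic step 3 would be replaced by a model-agnostic explainer (e.g. the `shap` package) applied to the PC scores, with the same back-projection to the wavelength axis.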
Original Abstract
In recent years, machine learning models have been increasingly applied to spectroscopic datasets for chemical and biomedical analysis. For their successful adoption, particularly in clinical and safety-critical settings, professionals and researchers must be able to understand and trust the reasoning behind model predictions. However, the inherently high dimensionality and strong collinearity of spectroscopy data pose a fundamental challenge to model explainability. These properties not only complicate model training but also undermine the stability and consistency of explanations, leading to fluctuations in feature importance across repeated training runs. Feature extraction techniques have been used to reduce the input dimensionality; however, the resulting features obscure the connection between the prediction and the original signal. This study proposes SHAPCA, an explainable machine learning pipeline that combines Principal Component Analysis (for dimensionality reduction) and SHapley Additive exPlanations (for post hoc explanation) to provide explanations in the original input space, which a practitioner can interpret and link back to the biological components. The proposed framework enables analysis from both global and local perspectives, revealing the spectral bands that drive overall model behaviour as well as the instance-specific features that influence individual predictions. Numerical analysis demonstrated the interpretability of the results and greater consistency across different runs.