The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models
AI Summary
Investigates how different number representations affect the resilience of embedded deep learning models against electromagnetic fault injection (EMFI) attacks.
Key Contributions
- First comprehensive evaluation of how number representations influence EMFI attacks
- Compared the attack resistance of floating-point and integer representations
- Showed that integer representations resist attacks better than floating-point representations
Methodology
Deploy image classification models on an embedded chip, inject faults with an EMFI platform, and measure the resulting drop in model accuracy.
Original Abstract
Fault injection attacks on embedded neural network models have been shown to be a potent threat. Numerous works have studied the resilience of models from various points of view. As of now, there is no comprehensive study evaluating the influence of the number representations used for model parameters against electromagnetic fault injection (EMFI) attacks. In this paper, we investigate how four different number representations influence the success of an EMFI attack on embedded neural network models. We chose two common floating-point representations (32-bit and 16-bit) and two integer representations (8-bit and 4-bit). We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip and utilized a low-cost EMFI platform to trigger faults. Our results show that while floating-point representations exhibit an almost complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, integer representations offer better resistance overall. In particular, with the 8-bit representation on a relatively large network (VGG-11), Top-1 accuracy stays at around 70% and Top-5 at around 90%.
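The gap between floating-point and integer representations can be illustrated with simple bit-flip arithmetic (this is an illustrative sketch, not the paper's fault model): in IEEE-754 binary32, a single flip in a high exponent bit can turn a small weight into an astronomically large one, whereas in a signed 8-bit integer any single flip changes the value by at most 128. The helper names below are hypothetical.

```python
import struct

def flip_bit_float32(value, bit):
    """Flip one bit (0..31) in the IEEE-754 binary32 encoding of `value`."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

def flip_bit_int8(value, bit):
    """Flip one bit (0..7) in a signed 8-bit (two's complement) integer."""
    bits = (value & 0xFF) ^ (1 << bit)
    return bits - 256 if bits >= 128 else bits

# Flipping the top exponent bit (bit 30) of a float32 weight of 0.5
# yields 2**127, about 1.7e38 -- a catastrophic change in magnitude.
corrupted_float = flip_bit_float32(0.5, 30)

# Flipping the top magnitude bit (bit 6) of an int8 weight of 64
# yields 0 -- the error is bounded by the representation's range.
corrupted_int = flip_bit_int8(64, 6)
```

This bounded-error property of fixed-point/integer encodings is one plausible intuition for why the 8-bit models in the study degrade far more gracefully than the floating-point ones.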