On the Sensitivity of Firing Rate-Based Federated Spiking Neural Networks to Differential Privacy
AI Summary
Studies how differential privacy affects neuron firing rates in federated learning built on spiking neural networks.
Key Contributions
- Analyzes how differential privacy mechanisms affect SNN firing rates.
- Reveals the impact of privacy budgets and gradient clipping on federated learning.
- Provides practical guidance for privacy-preserving federated learning.
Methodology
Ablation experiments on a speech recognition task analyze how privacy mechanisms affect firing-rate statistics.
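The firing-rate statistics at the center of this analysis are simple to compute: a neuron's rate is the fraction of timesteps in which it spikes. A minimal sketch (function name and spike-tensor layout are illustrative assumptions, not from the paper):

```python
import numpy as np

def firing_rates(spikes: np.ndarray) -> np.ndarray:
    """Per-neuron firing rate from a binary spike train.

    spikes: array of shape (timesteps, neurons) with 0/1 entries.
    Returns the fraction of timesteps each neuron fired.
    """
    return spikes.mean(axis=0)

# Toy spike train: 4 timesteps, 2 neurons.
spikes = np.array([[1, 0],
                   [0, 0],
                   [1, 1],
                   [1, 0]])
firing_rates(spikes)  # → array([0.75, 0.25])
```

Shifts in these per-neuron rates under different privacy budgets are what the ablations track.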
Original Abstract
Federated Neuromorphic Learning (FNL) enables energy-efficient and privacy-preserving learning on devices without centralizing data. However, real-world deployments require additional privacy mechanisms that can significantly alter training signals. This paper analyzes how Differential Privacy (DP) mechanisms, specifically gradient clipping and noise injection, perturb firing-rate statistics in Spiking Neural Networks (SNNs) and how these perturbations propagate to rate-based FNL coordination. On a speech recognition task under non-IID settings, ablations across privacy budgets and clipping bounds reveal systematic rate shifts, attenuated aggregation, and ranking instability during client selection. Moreover, we relate these shifts to sparsity and memory indicators. Our findings provide actionable guidance for privacy-preserving FNL, specifically regarding the balance between privacy strength and rate-dependent coordination.
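The two DP mechanisms the abstract names, gradient clipping and noise injection, follow the standard DP-SGD recipe: clip each client update to a fixed norm, average, then add calibrated Gaussian noise. A minimal sketch (hyperparameter values and the function name are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def dp_clip_and_noise(client_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip each client gradient to `clip_norm`, average, add Gaussian noise.

    `clip_norm` and `noise_mult` are illustrative defaults; the paper's
    ablations sweep these (clipping bounds and privacy budgets).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in client_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    agg = np.mean(clipped, axis=0)
    # Noise std scales with clip_norm and shrinks with the number of clients.
    sigma = noise_mult * clip_norm / len(client_grads)
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```

Both steps bias the effective update toward zero, which is one plausible route by which the rate shifts and attenuated aggregation reported above arise.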