Evaluating randomized smoothing as a defense against adversarial attacks in trajectory prediction
AI Summary
Proposes a defense mechanism based on randomized smoothing that improves the robustness of trajectory prediction models against adversarial attacks without sacrificing accuracy on clean inputs.
Key Contributions
- Proposes a new defense mechanism to address the vulnerability of trajectory prediction models to adversarial attacks.
- Applies randomized smoothing to effectively improve the robustness of trajectory prediction models.
- Shows experimentally that the method improves robustness while preserving accuracy in non-adversarial settings.
Methodology
Applies randomized smoothing: random perturbations are added to the model input and the resulting predictions are aggregated, making the model more robust to adversarial attacks. Effectiveness is validated across multiple datasets.
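The core idea can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the function names (`smoothed_predict`, `constant_velocity`), the Gaussian noise choice, the mean aggregation, and all parameter values are assumptions for demonstration only.

```python
import numpy as np

def smoothed_predict(base_predict, history, sigma=0.05, n_samples=32, seed=0):
    """Illustrative randomized smoothing for trajectory prediction.

    base_predict: maps an observed history array (T, 2) to a predicted
                  future trajectory array (H, 2).
    history:      observed agent positions, shape (T, 2).
    sigma:        std-dev of Gaussian noise added to the input (assumed choice).
    n_samples:    number of noisy input copies to average over.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        # Perturb the input history with independent Gaussian noise.
        noisy = history + rng.normal(0.0, sigma, size=history.shape)
        preds.append(base_predict(noisy))
    # Aggregate the per-sample predictions; the mean is one simple choice.
    return np.mean(preds, axis=0)

# Toy stand-in for a trained predictor: constant-velocity extrapolation.
def constant_velocity(history, horizon=5):
    v = history[-1] - history[-2]
    return np.array([history[-1] + (k + 1) * v for k in range(horizon)])

hist = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = smoothed_predict(constant_velocity, hist)  # shape (5, 2)
```

Because the noise is drawn independently of any attacker's perturbation, small adversarial changes to the history tend to be averaged out across the noisy samples, which is what makes the smoothed predictor more robust.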
Original Abstract
Accurate and robust trajectory prediction is essential for safe and efficient autonomous driving, yet recent work has shown that even state-of-the-art prediction models are highly vulnerable to inputs being mildly perturbed by adversarial attacks. Although model vulnerabilities to such attacks have been studied, work on effective countermeasures remains limited. In this work, we develop and evaluate a new defense mechanism for trajectory prediction models based on randomized smoothing -- an approach previously applied successfully in other domains. We evaluate its ability to improve model robustness through a series of experiments that test different strategies of randomized smoothing. We show that our approach can consistently improve prediction robustness of multiple base trajectory prediction models in various datasets without compromising accuracy in non-adversarial settings. Our results demonstrate that randomized smoothing offers a simple and computationally inexpensive technique for mitigating adversarial attacks in trajectory prediction.