Beyond Rewards in Reinforcement Learning for Cyber Defence
AI Summary
Studies how reward function structure affects the performance of reinforcement learning agents for cyber defence, finding that sparse rewards are more effective.
Key Contributions
- Proposes a novel approach for evaluating the effectiveness of reward functions
- Evaluates the impact of sparse and dense rewards in cyber defence scenarios
- Demonstrates that carefully designed sparse rewards improve agent reliability and safety
Methodology
Reinforcement learning agents are trained in cyber defence environments with a variety of sparse and dense reward functions and then comparatively evaluated; a minimal sketch of the two reward styles is given below.
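The contrast between the two reward styles can be made concrete with a short sketch. This is a hypothetical illustration, not code from the paper: the state fields (`compromised_hosts`, `restored_hosts`, `action_cost`, `goal_reached`) and the penalty weights are assumptions chosen for clarity.

```python
def dense_reward(state: dict) -> float:
    """Dense, engineered reward: many penalties and incentives combined per step."""
    reward = 0.0
    reward -= 1.0 * len(state["compromised_hosts"])  # penalty per compromised host
    reward -= 0.5 * state["action_cost"]             # penalty for costly defensive actions
    reward += 0.1 * len(state["restored_hosts"])     # incentive for recovering hosts
    return reward


def sparse_reward(state: dict) -> float:
    """Sparse, goal-aligned reward: non-zero only when the defender's goal holds."""
    # e.g. "no critical host is compromised at this point in the episode"
    return 1.0 if state["goal_reached"] else 0.0
```

Dense shaping gives the agent a learning signal at every step but encodes the designer's guesses about which states and actions matter, which is where the risk of biased, suboptimal policies enters; the sparse variant rewards only the actual defender goal, which the paper argues yields lower-risk policies provided the goal can be encountered frequently enough during exploration.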
Original Abstract
Recent years have seen an explosion of interest in autonomous cyber defence agents trained to defend computer networks using deep reinforcement learning. These agents are typically trained in cyber gym environments using dense, highly engineered reward functions which combine many penalties and incentives for a range of (un)desirable states and costly actions. Dense rewards help alleviate the challenge of exploring complex environments but risk biasing agents towards suboptimal and potentially riskier solutions, a critical issue in complex cyber environments. We thoroughly evaluate the impact of reward function structure on learning and policy behavioural characteristics using a variety of sparse and dense reward functions, two well-established cyber gyms, a range of network sizes, and both policy gradient and value-based RL algorithms. Our evaluation is enabled by a novel ground truth evaluation approach which allows directly comparing between different reward functions, illuminating the nuanced inter-relationships between rewards, action space and the risks of suboptimal policies in cyber environments. Our results show that sparse rewards, provided they are goal aligned and can be encountered frequently, uniquely offer both enhanced training reliability and more effective cyber defence agents with lower-risk policies. Surprisingly, sparse rewards can also yield policies that are better aligned with cyber defender goals and make sparing use of costly defensive actions without explicit reward-based numerical penalties.
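The "ground truth evaluation approach" mentioned in the abstract can be paraphrased in code: rather than comparing training returns, which are incommensurable across different reward functions, every trained policy is scored on the same reward-independent defender-goal metrics. The sketch below is an assumed illustration only; the gym-style environment API (`reset`/`step`) and the metric names (`network_defended`, `action_cost`) are hypothetical, not taken from the paper.

```python
def evaluate_policy(env, policy, episodes: int = 100) -> dict:
    """Score a policy on reward-independent metrics, ignoring its training reward."""
    goal_successes, costly_actions = 0, 0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = policy(obs)
            obs, _reward, done, info = env.step(action)  # training reward discarded
            costly_actions += info.get("action_cost", 0)
        goal_successes += int(info["network_defended"])   # did the defence goal hold?
    return {
        "goal_success_rate": goal_successes / episodes,
        "mean_costly_actions": costly_actions / episodes,
    }

# Policies trained under different reward functions become directly comparable:
#   results = {name: evaluate_policy(env, pi) for name, pi in policies.items()}
```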