AI Agents relevance: 6/10

RNM-TD3: N:M Semi-structured Sparse Reinforcement Learning From Scratch

Isam Vrce, Andreas Kassler, Gökçe Aydos
arXiv: 2602.14578v1 Published: 2026-02-16 Updated: 2026-02-16

AI Summary

Proposes the RNM-TD3 algorithm, which introduces N:M structured sparsity into TD3, improving hardware-acceleration potential while preserving performance.

Key Contributions

  • First study of N:M structured sparsity in RL
  • Proposes the RNM-TD3 algorithm, which performs strongly on continuous-control tasks
  • Experiments show it remains competitive even at high sparsity levels

Methodology

Enforces row-wise N:M structured sparsity throughout training in all of TD3's neural networks and validates the approach experimentally.
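The core constraint can be sketched as follows: in every contiguous block of M weights within a row, at most N entries stay nonzero. This is a minimal magnitude-based illustration in NumPy, not the paper's actual pruning rule; the function name `nm_sparsify` and the keep-largest-magnitude criterion are assumptions for illustration.

```python
import numpy as np

def nm_sparsify(weights, n=2, m=4):
    """Zero all but the n largest-magnitude entries in each
    contiguous block of m weights, independently per row.
    Assumes each row's length is divisible by m."""
    rows, cols = weights.shape
    blocks = weights.reshape(rows, cols // m, m)
    # Indices of the (m - n) smallest-magnitude entries per block.
    drop = np.argsort(np.abs(blocks), axis=-1)[..., : m - n]
    mask = np.ones_like(blocks, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=-1)
    return (blocks * mask).reshape(rows, cols)

W = np.array([[0.1, -0.9, 0.3, 0.05, 0.7, -0.2, 0.0, 0.8]])
print(nm_sparsify(W, n=2, m=4))
# Each 4-block keeps only its two largest-magnitude weights:
# [[ 0.  -0.9  0.3  0.   0.7  0.   0.   0.8]]
```

At 2:4 the mask yields 50% sparsity; 1:4 and 1:8 give the 75% and 87.5% levels mentioned in the abstract. This regular block pattern is what lets accelerators with N:M sparse matrix support skip the zeroed entries.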

Original Abstract

Sparsity is a well-studied technique for compressing deep neural networks (DNNs) without compromising performance. In deep reinforcement learning (DRL), neural networks with up to 5% of their original weights can still be trained with minimal performance loss compared to their dense counterparts. However, most existing methods rely on unstructured fine-grained sparsity, which limits hardware acceleration opportunities due to irregular computation patterns. Structured coarse-grained sparsity enables hardware acceleration, yet typically degrades performance and increases pruning complexity. In this work, we present, to the best of our knowledge, the first study on N:M structured sparsity in RL, which balances compression, performance, and hardware efficiency. Our framework enforces row-wise N:M sparsity throughout training for all networks in off-policy RL (TD3), maintaining compatibility with accelerators that support N:M sparse matrix operations. Experiments on continuous-control benchmarks show that RNM-TD3, our N:M sparse agent, outperforms its dense counterpart at 50%-75% sparsity (e.g., 2:4 and 1:4), achieving up to a 14% increase in performance at 2:4 sparsity on the Ant environment. RNM-TD3 remains competitive even at 87.5% sparsity (1:8), while enabling potential training speedups.

Tags

Reinforcement Learning · Sparse Neural Networks · TD3 · N:M Structured Sparsity

arXiv Categories

cs.LG cs.AR