Momentum LMS Theory beyond Stationarity: Stability, Tracking, and Regret
AI Summary
The paper analyzes the stability and tracking performance of the Momentum Least Mean Squares (MLMS) algorithm in nonstationary environments and establishes theoretical bounds.
Key Contributions
- Derives tracking performance and regret bounds for the MLMS algorithm in time-varying stochastic linear systems
- Formulates the stability of MLMS as a second-order time-varying random vector difference equation
- Experimentally validates the rapid adaptation and robust tracking of MLMS in nonstationary environments
Methodology
The stability of the MLMS algorithm is cast as a second-order random vector difference equation, and its performance in nonstationary environments is studied through theoretical analysis and experimental validation.
Original Abstract
In large-scale data processing scenarios, data often arrive in sequential streams generated by complex systems that exhibit drifting distributions and time-varying system parameters. This nonstationarity challenges theoretical analysis, as it violates classical assumptions of i.i.d. (independent and identically distributed) samples, necessitating algorithms capable of real-time updates without expensive retraining. An effective approach should process each sample in a single pass while maintaining computational and memory complexities independent of the data stream length. Motivated by these challenges, this paper investigates the Momentum Least Mean Squares (MLMS) algorithm as an adaptive identification tool, leveraging its computational simplicity and online processing capabilities. Theoretically, we derive tracking performance and regret bounds for MLMS in time-varying stochastic linear systems under various practical conditions. Unlike classical LMS, whose stability can be characterized by first-order random vector difference equations, MLMS introduces an additional dynamical state due to momentum, leading to second-order time-varying random vector difference equations whose stability analysis hinges on more complicated products of random matrices, posing a substantially more challenging problem. Experiments on synthetic and real-world data streams demonstrate that MLMS achieves rapid adaptation and robust tracking, in agreement with our theoretical results, especially in nonstationary settings, highlighting its promise for modern streaming and online learning applications.
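To make the abstract's setup concrete, the following is a minimal sketch of a momentum LMS recursion tracking a drifting linear system. The step size `mu`, momentum factor `beta`, and the heavy-ball update form are illustrative assumptions, not values or the exact formulation from the paper; the momentum state `v` is the extra dynamical state that makes the recursion second-order.

```python
import numpy as np

def mlms_track(xs, ys, mu=0.05, beta=0.5):
    """Single-pass momentum-LMS sketch: O(d) memory, independent of stream length.

    `mu` (step size) and `beta` (momentum factor) are illustrative choices.
    """
    d = xs.shape[1]
    theta = np.zeros(d)   # current parameter estimate
    v = np.zeros(d)       # momentum state: the added dynamical state
    estimates = []
    for x, y in zip(xs, ys):
        e = y - x @ theta          # instantaneous prediction error
        v = beta * v + mu * e * x  # momentum accumulates the LMS step
        theta = theta + v          # together with theta: a second-order recursion
        estimates.append(theta.copy())
    return np.array(estimates)

# Synthetic nonstationary stream: the true parameter drifts as a random walk.
rng = np.random.default_rng(0)
T, d = 2000, 3
theta_true = 1.0 + np.cumsum(0.001 * rng.standard_normal((T, d)), axis=0)
xs = rng.standard_normal((T, d))
ys = np.einsum("td,td->t", xs, theta_true) + 0.01 * rng.standard_normal(T)

est = mlms_track(xs, ys)
# Average tracking error over the last quarter of the stream, despite the drift.
tail_err = np.mean(np.linalg.norm(est[-500:] - theta_true[-500:], axis=1))
```

Each sample is touched once and only two length-`d` vectors are stored, matching the abstract's requirement of complexity independent of the stream length.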