Cheng Yujia, Tao Wei, Liu Yuxiang, Tao Qing. Optimal Individual Convergence Rate of the Heavy-Ball-Based Momentum Methods[J]. Journal of Computer Research and Development, 2019, 56(8): 1686-1694. DOI: 10.7544/issn1000-1239.2019.20190167

Optimal Individual Convergence Rate of the Heavy-Ball-Based Momentum Methods

More Information
  • Published Date: July 31, 2019
  • Abstract: The momentum method is widely used as an acceleration technique to improve the convergence of first-order gradient algorithms. So far, the momentum methods discussed in most of the literature are limited to the accelerated method proposed by Nesterov, while the Heavy-ball momentum method proposed by Polyak is seldom studied. In particular, for non-smooth objective functions, the optimal individual convergence of Nesterov's accelerated methods has been established, and these methods perform well on sparse optimization problems. In contrast, while the Heavy-ball momentum method has been proved to attain the optimal convergence rate, this holds only for the averaged outputs; to the best of our knowledge, whether it achieves optimal individual convergence remains unknown. In this paper, we focus on non-smooth optimization. We prove that the Heavy-ball momentum method achieves the optimal individual convergence rate by skillfully selecting a time-varying step size, which indicates that Heavy-ball momentum is an efficient acceleration strategy for the individual convergence of projected subgradient methods. As an application, we consider hinge loss optimization problems constrained to an l1-norm ball. Experiments comparing against other optimization algorithms confirm the theoretical analysis and demonstrate the ability of the proposed algorithm to preserve sparsity. (An illustrative sketch of the projected Heavy-ball update is given after the citation list below.)
  • Cited by

    Periodical citations (10)

    1. Tao Wei, Long Sheng, Liu Xin, Hu Yahao, Huang Jincai. A survey of step-size adaptive momentum optimization methods for deep learning. Journal of Chinese Computer Systems, 2025(02): 257-265.
    2. Zhang Zedong, Long Sheng, Bao Lei, Tao Qing. AdaBelief-based Heavy-Ball momentum method. Pattern Recognition and Artificial Intelligence, 2022(02): 106-115.
    3. Long Sheng, Tao Wei, Zhang Zedong, Tao Qing. AdaGrad-based adaptive NAG method and its optimal individual convergence. Journal of Software, 2022(04): 1231-1243.
    4. Qu Junyi. A survey of momentum methods based on dual averaging. Computer & Digital Engineering, 2022(11): 2443-2448.
    5. Qu Junyi, Bao Lei, Tao Qing. Individual convergence of projection-type dual averaging optimization methods for non-smooth convex problems. Pattern Recognition and Artificial Intelligence, 2021(01): 25-32.
    6. Huang Jianzhi, Long Sheng, Tao Qing. Optimal individual convergence rate of the Heavy-Ball momentum method under an adaptive strategy. Pattern Recognition and Artificial Intelligence, 2021(02): 137-145.
    7. Li Xingyi, Yue Yang. A survey of gradient descent algorithms. Software Engineering, 2020(02): 1-4.
    8. Ding Chengcheng, Tao Wei, Tao Qing. A unified momentum method with three parameters and its optimal convergence rate. Journal of Computer Research and Development, 2020(08): 1571-1580.
    9. Lu Shuxia, Cai Lianxiang, Zhang Luohuan. Robust support vector machine based on momentum-accelerated zeroth-order variance reduction. Computer Engineering, 2020(12): 88-95+104.
    10. Huang Jianzhi, Ding Chengcheng, Tao Wei, Tao Qing. Optimal individual convergence rate of Adam-type algorithms in the non-smooth convex case. CAAI Transactions on Intelligent Systems, 2020(06): 1140-1146.

    Other citation types (4)
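
To make the update structure described in the abstract concrete, the following minimal Python/NumPy sketch applies Heavy-ball momentum to a projected subgradient step for the hinge loss over an l1-norm ball. It is an illustration under assumptions, not the paper's exact algorithm: the step-size schedule alpha_t, the momentum weight beta_t, and the helper names (project_l1_ball, hinge_subgradient, heavy_ball_projected_subgradient) are placeholders introduced here.

    # Illustrative sketch only: Heavy-ball momentum combined with a projected
    # subgradient step, w_{t+1} = P_Q[w_t - alpha_t*g_t + beta_t*(w_t - w_{t-1})],
    # for the hinge loss constrained to an l1-norm ball. The schedules below are
    # placeholders, not necessarily the time-varying choices analyzed in the paper.
    import numpy as np

    def project_l1_ball(v, radius=1.0):
        """Euclidean projection of v onto {w : ||w||_1 <= radius}."""
        if np.abs(v).sum() <= radius:
            return v
        u = np.sort(np.abs(v))[::-1]                 # magnitudes, descending
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
        theta = (css[rho] - radius) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    def hinge_subgradient(w, X, y):
        """A subgradient of the averaged hinge loss (1/n) * sum max(0, 1 - y_i <w, x_i>)."""
        active = (1.0 - y * (X @ w)) > 0             # samples with positive loss
        return -(X[active] * y[active, None]).sum(axis=0) / len(y)

    def heavy_ball_projected_subgradient(X, y, radius=1.0, T=1000, c=1.0):
        """Returns the last (individual) iterate rather than an average of iterates."""
        w_prev = np.zeros(X.shape[1])
        w = np.zeros(X.shape[1])
        for t in range(1, T + 1):
            g = hinge_subgradient(w, X, y)
            alpha_t = c / np.sqrt(t)                 # placeholder time-varying step size
            beta_t = t / (t + 2.0)                   # placeholder momentum weight
            w_next = project_l1_ball(w - alpha_t * g + beta_t * (w - w_prev), radius)
            w_prev, w = w, w_next
        return w

On synthetic data such as X = np.random.randn(200, 50) and y = np.sign(np.random.randn(200)), the returned last iterate stays inside the l1 ball, and the projection tends to zero out many coordinates, which corresponds to the sparsity-preserving behavior discussed in the abstract.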
