    Li Wenbin, Xiong Yakun, Fan Zhichen, Deng Bo, Cao Fuyuan, Gao Yang. Advances and Trends of Continual Learning[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202220820

    Advances and Trends of Continual Learning


      Abstract: With the development and successful application of deep learning, and with the growing need to learn quickly from sequential tasks and data in resource-limited and data-security scenarios, continual learning has become a new hot topic in machine learning. Unlike humans, who can continually learn and transfer knowledge, existing deep learning models are prone to catastrophic forgetting during sequential learning. The core problem of continual learning is therefore how to continually acquire new knowledge while retaining old knowledge on dynamic, non-stationary sequential tasks and streaming data. First, based on a survey of recent continual learning work at home and abroad, we divide continual learning methods into three categories: replay-based, constraint-based, and architecture-based, and further subdivide each. Specifically, replay-based methods are subdivided by the source of the replayed samples into sample replay, generative replay, and pseudo-sample replay; constraint-based methods are subdivided by the source of the training constraint into parameter constraints, gradient constraints, and data constraints; and architecture-based methods are subdivided by how the model structure is used into parameter isolation and model expansion. By comparing the innovations of related works, we summarize the advantages and disadvantages of each category of methods. Second, we analyze the current state of research in China and abroad. Finally, we discuss future directions for combining continual learning with other fields.
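    The sample-replay family mentioned in the abstract keeps a small memory of past examples and mixes them into training on new tasks to counter catastrophic forgetting. The following is a minimal illustrative sketch of one common way to maintain such a memory (reservoir sampling over a task stream); the class name and interface are hypothetical and not taken from the paper.

```python
import random

class ReservoirReplayBuffer:
    """Fixed-size memory for sample replay: keeps a uniform random
    subset of all examples seen so far via reservoir sampling, so
    earlier tasks stay represented as new tasks stream in."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []       # stored examples, at most `capacity`
        self.n_seen = 0        # total examples observed in the stream
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a stored example with probability capacity / n_seen,
            # which keeps every seen example equally likely to be retained.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        # Replayed examples would be mixed into each new-task minibatch.
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))


# Stream two "tasks" through the buffer; the memory stays bounded.
buf = ReservoirReplayBuffer(capacity=20, seed=42)
for x in range(100):           # task 1 examples
    buf.add(("task1", x))
for x in range(100):           # task 2 examples
    buf.add(("task2", x))
print(len(buf.buffer))         # 20
```

    Generative and pseudo-sample replay follow the same training loop but draw the replayed examples from a learned generator or from the model itself instead of a stored subset.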

       
