    Citation: Liu Zhuang, Dong Zichen, Dong Yilin, Shang Jiaming, Zhang Fan, Chen Yuran, Lou Peiyan, Sun Xinran, Wang Yu, Zhao Jun, Wayne Lin. Lifelong Graph Learning: A Comprehensive Review[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440204


    Lifelong Graph Learning: A Comprehensive Review

Abstract: Lifelong Graph Learning (LGL) is an emerging field that aims to achieve continual learning on graph-structured data, addressing catastrophic forgetting on existing tasks while enabling sequentially updated models to adapt to newly emerging graph tasks. Although LGL has demonstrated strong learning capabilities, continuously improving its performance remains a crucial challenge. To fill this gap in existing research, we provide a comprehensive survey and summary of recent developments in LGL. First, we reclassify existing LGL methods, focusing in particular on approaches that overcome catastrophic forgetting. Second, we systematically analyze the strengths and weaknesses of these methods and discuss potential solutions for achieving sustained performance improvement. Our study emphasizes how to avoid forgetting old tasks during continual learning while adapting quickly to new tasks. Finally, we discuss future directions for LGL, covering its potential impact on application domains and open issues, and analyze how these directions may affect sustained performance improvement. These discussions will help guide future LGL research and promote further development and application in this field.

       
