Abstract:
Lifelong Graph Learning (LGL) is an emerging field that aims to achieve continual learning on graph-structured data, mitigating catastrophic forgetting on previously learned tasks while enabling models to adapt to newly emerging graph tasks in a sequential manner. Although LGL methods have demonstrated strong learning capabilities, improving their sustained performance remains a crucial challenge. To address the gaps in existing research, we provide a comprehensive survey of recent developments in LGL. First, we reclassify existing LGL methods, focusing in particular on approaches for overcoming catastrophic forgetting. Second, we systematically analyze the strengths and weaknesses of these methods and discuss potential solutions for achieving sustained performance improvement. Our study emphasizes how to avoid forgetting old tasks during continual learning while adapting swiftly to the challenges of new tasks. Finally, we discuss future directions for LGL, covering potential impacts on application domains and open issues, and specifically analyze their potential effects on sustained performance improvement. These discussions will help guide future research in LGL, promoting further development and application in this field.