Liu Zhuang, Dong Zichen, Dong Yilin, Shang Jiaming, Zhang Fan, Chen Yuran, Lou Peiyan, Sun Xinran, Wang Yu, Zhao Jun, Wayne Lin. Lifelong Graph Learning: A Comprehensive Review[J]. Journal of Computer Research and Development, 2024, 61(8): 2067-2096. DOI: 10.7544/issn1000-1239.202440204

Lifelong Graph Learning: A Comprehensive Review

Funds: This work was supported by the National Natural Science Foundation of China (72272028).
  • Author Bio:

    Liu Zhuang: born in 1982. PhD, associate professor, master supervisor. His main research interests include natural language processing, fintech, graph neural networks, and large language models (LLMs)

    Dong Zichen: born in 2003. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Dong Yilin: born in 2004. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Shang Jiaming: born in 2003. Undergraduate. His main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Zhang Fan: born in 2002. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Chen Yuran: born in 2005. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Lou Peiyan: born in 2004. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Sun Xinran: born in 2004. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Wang Yu: born in 2003. Undergraduate. Her main research interests include natural language processing, fintech, machine learning, and graph neural networks

    Zhao Jun: born in 1990. PhD. His main research interests include natural language processing, machine translation, and graph neural networks

    Wayne Lin: born in 1991. PhD candidate. Her main research interests include natural language processing, question answering, and large language models (LLMs)

  • Received Date: March 14, 2024
  • Revised Date: April 22, 2024
  • Available Online: May 16, 2024
  • Lifelong graph learning (LGL) is an emerging field that aims to achieve continual learning on graph-structured data: overcoming catastrophic forgetting on existing tasks while adapting sequentially to newly emerging graph tasks. Although LGL models have demonstrated strong learning capabilities, sustaining their performance over time remains a crucial challenge. To address gaps in existing research, we provide a comprehensive survey of recent developments in LGL. First, we reclassify existing LGL methods, focusing in particular on approaches for overcoming catastrophic forgetting. Second, we systematically analyze the strengths and weaknesses of these methods and discuss potential solutions for sustained performance improvement. Our study emphasizes how a model can avoid forgetting old tasks during continual learning while adapting swiftly to new ones. Finally, we discuss future directions for LGL, covering potential application domains and open issues, and analyze their possible effects on sustained performance improvement. These discussions should help guide future research in LGL and promote the further development and application of the field.
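
    To make the replay idea mentioned in the abstract concrete, below is a minimal, self-contained PyTorch sketch of replay-based continual learning on graphs. It is illustrative only, not code from the paper or any surveyed work: SimpleGCN, normalize_adj, train_task, the identity-adjacency replay of buffered nodes, and the buffer policy are all assumptions made for this sketch.

    ```python
    # A minimal sketch (illustrative assumptions throughout) of replay-based
    # lifelong graph learning: a two-layer dense-adjacency GCN is trained on a
    # sequence of node-classification tasks, and a small buffer of nodes from
    # earlier tasks is replayed with each new task to mitigate forgetting.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleGCN(nn.Module):
        """Two-layer GCN operating on a normalized dense adjacency matrix."""
        def __init__(self, in_dim, hid_dim, n_classes):
            super().__init__()
            self.lin1 = nn.Linear(in_dim, hid_dim)
            self.lin2 = nn.Linear(hid_dim, n_classes)

        def forward(self, x, adj):
            h = F.relu(adj @ self.lin1(x))   # first neighborhood aggregation
            return adj @ self.lin2(h)        # second aggregation -> node logits

    def normalize_adj(a):
        """Symmetric normalization D^{-1/2}(A + I)D^{-1/2} with self-loops."""
        a = a + torch.eye(a.size(0))
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

    def train_task(model, opt, x, adj, y, mask, buffer, replay_w=1.0, epochs=100):
        """Train on the current task; replay buffered nodes from earlier tasks."""
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(model(x, adj)[mask], y[mask])
            if buffer is not None:
                bx, by = buffer
                # Replay old nodes as isolated nodes (identity adjacency):
                # a crude feature-replay approximation of experience replay.
                eye = torch.eye(bx.size(0))
                loss = loss + replay_w * F.cross_entropy(model(bx, eye), by)
            loss.backward()
            opt.step()
        # Store a few labeled nodes from this task for future replay.
        idx = mask.nonzero(as_tuple=True)[0][:10]
        new_x, new_y = x[idx].detach(), y[idx]
        if buffer is None:
            return new_x, new_y
        bx, by = buffer
        return torch.cat([bx, new_x]), torch.cat([by, new_y])

    # Toy usage: two sequential node-classification tasks on random graphs.
    torch.manual_seed(0)
    model = SimpleGCN(in_dim=8, hid_dim=16, n_classes=4)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    buffer = None
    for _ in range(2):
        n = 30
        x = torch.randn(n, 8)
        adj = normalize_adj((torch.rand(n, n) < 0.1).float())
        y = torch.randint(0, 4, (n,))
        mask = torch.zeros(n, dtype=torch.bool)
        mask[:20] = True
        buffer = train_task(model, opt, x, adj, y, mask, buffer)
    ```

    Replaying buffered nodes with an identity adjacency deliberately ignores their original neighborhoods; replay methods in the surveyed literature typically store subgraphs or embeddings instead, which preserves more structure at higher memory cost.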

