Towards Robust Graph Incremental Learning on Evolving Graphs

Incremental learning is a machine learning paradigm in which a model is trained on a sequence of tasks rather than on all tasks at once. The ability to learn incrementally from a stream of tasks is crucial for many real-world applications. However, incremental learning is challenging on graph-structured data: many graph-related problems involve prediction tasks for individual nodes, a setting known as Node-wise Graph Incremental Learning (NGIL). Because nodes are connected, the sample-generation process is neither independent nor identically distributed, which makes it difficult to maintain model performance as new tasks arrive. In this paper, we focus on the inductive NGIL problem, which accounts for the evolution of the graph structure (structural shift) induced by emerging tasks. We provide a formal formulation and analysis of the problem, showing that the structural shift induces a shift in the input distribution of existing tasks and thereby increases the risk of catastrophic forgetting. Building on this analysis, we propose a novel regularization-based technique, Structural-Shift-Risk-Mitigation (SSRM), to mitigate the impact of the structural shift on catastrophic forgetting in inductive NGIL. Through comprehensive empirical studies on several benchmark datasets, we demonstrate that SSRM is flexible, easy to adopt, and improves the performance of state-of-the-art GNN incremental-learning frameworks in the inductive setting.

Department of Computer Science, University of Hong Kong; Department of Computer Science, University of Wu Han. Correspondence to: Junwei Su <junweisu@connect.hku.hk>.
Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).
Implementation available at: https://github.com/littleTown93/NGIL_Evolve
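For readers who want a concrete picture of how a regularizer of this kind could plug into an existing GNN incremental-learning loop, the sketch below illustrates one plausible realization: an MMD-style penalty that aligns the representations of previous-task nodes computed on the graph before and after it evolves. This is a minimal illustration under stated assumptions, not the released implementation; the GNN interface, the node-index tensors, and the weight beta are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code) of a structural-shift-mitigation
# regularizer in a GNN incremental-learning step. Assumes a PyG-style model
# gnn(features, edge_index) that returns per-node logits; all other inputs
# are hypothetical and would be supplied by the training framework.
import torch
import torch.nn.functional as F


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Simple (biased) MMD^2 estimate between two embedding sets, RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b, p=2).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


def ssrm_step(gnn, optimizer, feats, old_edge_index, new_edge_index,
              old_task_nodes, new_task_nodes, new_task_labels, beta=0.1):
    """One training step on a new task with a structural-shift penalty (sketch)."""
    optimizer.zero_grad()

    # Supervised loss on the newly arrived task, using the evolved graph.
    out_new_graph = gnn(feats, new_edge_index)
    task_loss = F.cross_entropy(out_new_graph[new_task_nodes], new_task_labels)

    # Regularizer: keep old-task node representations close to what the
    # pre-evolution structure would have produced, reducing the
    # input-distribution shift that drives forgetting. The pre-evolution
    # representations are treated as a fixed target (no gradient).
    with torch.no_grad():
        out_old_graph = gnn(feats, old_edge_index)
    shift_penalty = rbf_mmd(out_new_graph[old_task_nodes],
                            out_old_graph[old_task_nodes])

    loss = task_loss + beta * shift_penalty
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a scheme, the penalty would be computed only over nodes of previously learned tasks that remain in the evolved graph, and the weight beta trades off plasticity on the new task against stability of the representations the old tasks depend on.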
