Continual Causality: A Retrospective of the Inaugural AAAI-23 Bridge Program

The fields of continual learning and causality both investigate complementary aspects of human cognition and are fundamental components of artificial intelligence if it is to reason and generalize in complex environments. Despite burgeoning interest in the intersection of the two fields, it remains unclear how causal models can describe continuous streams of data and, vice versa, how continual learning can exploit learned causal structure. We proposed to bridge this gap through the inaugural AAAI-23 “Continual Causality” bridge program, which aimed to take the initial steps toward a unified treatment of these fields by providing a space for learning and discussion and by building a diverse community of researchers. The activities ranged from traditional tutorials and software labs to invited vision talks, contributed talks based on submitted position papers, a panel, and breakout discussions. The materials are publicly disseminated as a foundation for the community at https://www.continualcausality.org, and the ideas, challenges, and prospects discussed beyond the inaugural bridge are summarized in this retrospective paper.
