Toward a More Neurally Plausible Neural Network Model of Latent Cause Inference

Humans spontaneously perceive a continuous stream of experience as a sequence of discrete events. It has been hypothesized that this ability is supported by latent cause inference (LCI). We implemented this hypothesis in LCNet, a neural network model of LCI. LCNet interacts with a Bayesian LCI mechanism that activates a unique context vector for each inferred latent cause (LC). LCNet can also recall episodic memories of previously inferred LCs, eliminating the need to perform LCI on every new observation. These mechanisms make LCNet more neurally plausible and more efficient than existing models. Across three simulations, we found that LCNet could 1) extract shared structure across LCs while avoiding catastrophic interference, 2) capture human data on curriculum effects in schema learning, and 3) infer the underlying event structure when processing naturalistic videos of daily activities. Our work provides a neurally plausible computational model that can operate in both laboratory and naturalistic settings, opening up the possibility of a unified model of event cognition.
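The Bayesian LCI mechanism described above can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's actual implementation: it uses a Chinese-restaurant-process (CRP) prior over latent causes combined with a Gaussian likelihood, assigns each observation to the MAP cause (possibly a new one), and associates each cause with a one-hot context vector. All names and parameter values (`ALPHA`, `SIGMA`, `infer_latent_cause`) are hypothetical.

```python
# Minimal sketch of CRP-based latent cause inference (illustrative only).
import numpy as np

ALPHA = 0.5   # CRP concentration: higher -> more new latent causes (assumed)
SIGMA = 0.5   # observation noise scale (assumed)

def infer_latent_cause(obs, cause_means, cause_counts):
    """Return the MAP latent cause index for `obs`.

    An index equal to len(cause_means) means "create a new cause".
    """
    n = sum(cause_counts)
    scores = []
    for mean_k, count_k in zip(cause_means, cause_counts):
        prior = count_k / (n + ALPHA)  # CRP prior for an existing cause
        lik = np.exp(-np.sum((obs - mean_k) ** 2) / (2 * SIGMA ** 2))
        scores.append(prior * lik)
    scores.append(ALPHA / (n + ALPHA))  # CRP prior for a brand-new cause
    return int(np.argmax(scores))

def context_vector(k, max_causes=10):
    """One-hot context vector for latent cause k (one plausible coding)."""
    v = np.zeros(max_causes)
    v[k] = 1.0
    return v

# Usage: two clusters of observations should yield two latent causes.
means, counts = [], []
for obs in [np.array([0.0, 0.0]), np.array([0.1, -0.1]), np.array([5.0, 5.0])]:
    k = infer_latent_cause(obs, means, counts)
    if k == len(means):                      # new latent cause inferred
        means.append(obs.copy())
        counts.append(1)
    else:                                    # update the existing cause
        counts[k] += 1
        means[k] = means[k] + (obs - means[k]) / counts[k]
print(len(means))  # number of inferred latent causes
```

Under this sketch, the two nearby observations share one latent cause while the distant observation triggers a new one, mirroring how LCI segments experience into discrete events.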
