A Computational Model of Auditory Perceptual Learning: Predicting Learning Interference Across Multiple Tasks

David Little (d-little@u.northwestern.edu)
Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208

Bryan Pardo (pardo@northwestern.edu)
Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208

Beverly Wright (b-wright@northwestern.edu)
Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA

Abstract

In this work we build a computational model of several auditory perceptual learning experiments. The modeled experiments show a pattern of learning interference that may help shed light on the structure of both short- and long-term stores of perceptual memory. Our hypothesis is that the observed interference patterns can be explained by the relationship of stimuli across tasks and by how these relationships interact with the limits of human memory. We account for the sharing of information across tasks in our model by using methodology from the machine learning community on transfer learning. When we introduce a set of plausible limits on memory, such a model demonstrates the same pattern of learning interference observed in the human experiments.

Keywords: Perceptual Learning; Perceptual Memory; Consolidation; Acquisition; Learning Interference; Transfer Learning

Introduction

With sufficient practice, human beings are able to enhance the acuity of their sensory systems. This is known in the literature as perceptual learning. Recent work in perceptual learning (e.g., Banai et al., 2009; Yotsumoto et al., 2008) has shown that learning on one task (which we call the target) may be prevented when a second task (which we call the distractor) is practiced either during or shortly after practice of the target: this is called learning interference.
These results suggest distinct properties of short- and long-term stores of perceptual memory, because what interfered with learning during practice was distinct from what interfered after practice (see the Human Data section for more detail).

Our working hypothesis is that the learning interference observed in these experiments is a consequence of how information is shared across tasks and of the limits of human memory. We have built a computational model in an effort to fully specify and test this hypothesis (see the Modeling section for details). An ideal observer would only benefit from sharing information across tasks. However, with the introduction of limited memory, sharing information can also lead to learning interference.

Such sharing of information across tasks is used to accomplish transfer learning in the machine learning community. We call a computational technique intended to accomplish transfer learning computational transfer learning. If a system (living or machine) can be seen to perform better on one task after experience on some prior task, we call this observable transfer learning. Prior computational models of perceptual learning, though they have considered observable transfer learning, have ignored matters of computational transfer learning, either by modeling only a single task (e.g., Jacobs, 2009) or by treating learning across several tasks as a single monolithic learning problem (e.g., Petrov et al., 2005). Because of this, none of these models provides an account of how people appropriately segregate and share information across tasks. There are computational models concerned with human memory that can be understood to have some form of computational transfer learning (e.g., McClelland et al., 1995; Anderson, 2002), but these systems do not provide the detail needed to model the current experiments.
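As a loose caricature of the distinction drawn above (not the model used in this paper), sharing information across tasks can be illustrated by shrinking each task's estimate toward a value pooled over all tasks; the function name `task_estimates` and its `shrinkage` parameter are hypothetical names for this sketch:

```python
def task_estimates(task_data, shrinkage):
    """Toy illustration of sharing information across tasks.

    With shrinkage = 0 the tasks are fully segregated (single-task
    modeling); with shrinkage = 1 they are treated as one monolithic
    problem; values in between share information partially.
    """
    # Per-task sample means.
    means = {task: sum(xs) / len(xs) for task, xs in task_data.items()}
    # Information pooled across all tasks.
    pooled = sum(means.values()) / len(means)
    # Shrink each task-level estimate toward the pooled value.
    return {task: (1 - shrinkage) * m + shrinkage * pooled
            for task, m in means.items()}
```

The two endpoints of `shrinkage` correspond to the two failure modes noted in the text: modeling only a single task, and treating all tasks as one monolithic learning problem.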
In this paper we model one set of learning interference experiments (Wright et al., 2009; Banai et al., 2009) using an ideal observer (Geisler, 2003). We do this by incorporating a method used for computational transfer learning (Roy & Kaelbling, 2007) (see the Method section for details). On top of this ideal observer, we introduce a plausible set of memory limits. This approach has the merit of avoiding conflation between task constraints (which both humans and the ideal observer are subject to) and psychological constraints (which only humans are subject to). We hypothesize memory limits that (a) affect the number of distinct stimuli that can be remembered and (b) introduce a process of consolidation, meaning that over a period of time memories move from a labile, short-term form to a stable, long-term form. We found that when introducing all (and only all) of our limits, our model demonstrated the same pattern of learning interference observed in humans (see the Evaluation section for details).

Human Data

The experiments in Banai et al. (2009) and Wright et al. (2009) suggest two functionally distinct stages of perceptual learning. The first stage occurs during practice of a task; we call this stage acquisition. The second stage occurs after practice is complete and is called consolidation. This is
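The two hypothesized memory limits, a capacity limit on distinct stimuli and a gradual move from labile to stable storage, can be caricatured in a few lines. This is an illustrative sketch only; `MemoryStore`, `capacity`, and `consolidation_rate` are invented names, not part of the model specification:

```python
import random

class MemoryStore:
    """Toy memory with (a) a capacity limit on distinct stimuli and
    (b) gradual consolidation from a labile to a stable store."""

    def __init__(self, capacity, consolidation_rate):
        self.capacity = capacity        # max distinct labile traces
        self.rate = consolidation_rate  # per-step chance of consolidating
        self.labile = []                # short-term, interference-prone
        self.stable = set()             # long-term, interference-resistant

    def encode(self, stimulus):
        # New stimuli enter the labile store; past capacity the oldest
        # trace is displaced, one simple source of learning interference.
        self.labile.append(stimulus)
        if len(self.labile) > self.capacity:
            self.labile.pop(0)

    def tick(self):
        # On each time step, each labile trace may consolidate into the
        # stable store; unconsolidated traces remain labile.
        remaining = []
        for s in self.labile:
            if random.random() < self.rate:
                self.stable.add(s)
            else:
                remaining.append(s)
        self.labile = remaining
```

Under this caricature, a distractor practiced during or shortly after the target can displace labile target traces before they consolidate, mirroring the two stages (acquisition and consolidation) at which interference was observed.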

[1] I. Izquierdo et al., "Separate mechanisms for short- and long-term memory," Behavioural Brain Research, 1999.

[2] W. Geisler, "Ideal Observer Analysis," 2002.

[3] D. M. Roy and L. P. Kaelbling, "Efficient Bayesian Task-Level Transfer Learning," IJCAI, 2007.

[4] J. R. Anderson, "ACT: A simple theory of complex cognition," 1996.

[5] P. Fearnhead, "Particle filters for mixture models with an unknown number of components," Statistics and Computing, 2004.

[6] R. M. Neal, "Markov Chain Sampling Methods for Dirichlet Process Mixture Models," Journal of Computational and Graphical Statistics, 2000.

[7] B. Wright et al., "Learning two things at once: differential constraints on the acquisition and consolidation of perceptual learning," Neuroscience, 2010.

[9] K. Wang et al., "Self-normalization and noise-robustness in early auditory representations," IEEE Transactions on Speech and Audio Processing, 1994.

[11] Y. Sasaki et al., "Different Dynamics of Performance and Brain Activation in the Time Course of Perceptual Learning," Neuron, 2008.

[12] N. Cowan, "What are the differences between long-term, short-term, and working memory?," Progress in Brain Research, 2008.

[13] A. T. Sabin et al., "Perceptual learning: how much daily training is enough?," Experimental Brain Research, 2007.

[14] D. V. Buonomano, "Book Review: How Do We Tell Time?," The Neuroscientist, 2002.

[15] J. L. McClelland et al., "Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory," Psychological Review, 1995.

[16] D. B. Dunson et al., "Bayesian Data Analysis," 2010.

[17] D. Buonomano et al., "Learning and Generalization of Auditory Temporal–Interval Discrimination in Humans," The Journal of Neuroscience, 1997.

[18] D. V. Buonomano, "Decoding Temporal Information: A Model Based on Short-Term Synaptic Plasticity," The Journal of Neuroscience, 2000.

[19] L. Carin et al., "Multi-Task Learning for Classification with Dirichlet Process Priors," Journal of Machine Learning Research, 2007.

[20] J. D. McGaugh, "Memory--a century of consolidation," Science, 2000.

[21] H. Levitt, "Transformed up-down methods in psychoacoustics," The Journal of the Acoustical Society of America, 1971.

[22] B. Dosher et al., "The dynamics of perceptual learning: an incremental reweighting model," Psychological Review, 2005.

[23] R. A. Jacobs, "Adaptive precision pooling of model neuron activities predicts the efficiency of human visual learning," Journal of Vision, 2009.