Multisensory Associative-Pair Learning: Evidence for 'Unitization' as a Specialized Mechanism

Elan Barenholtz (elan.barenholtz@fau.edu)
Department of Psychology, 777 Glades Road, Boca Raton, FL 33433

Meredith Davidson (mdavid14@fau.edu)
Department of Psychology, 777 Glades Road, Boca Raton, FL 33433

David Lewkowicz (lewkowic@fau.edu)
Department of Psychology, 777 Glades Road, Boca Raton, FL 33433

Abstract

Learning about objects typically involves the association of multisensory attributes. Here, we present three experiments supporting the existence of a specialized form of associative learning that depends on 'unitization'. When multisensory pairs (e.g., faces and voices) were likely to belong to a single object, learning was superior to that observed when the pairs were unlikely to belong to the same object. Experiment 1 found that learning of face-voice pairs was superior when the members of each pair were of the same gender rather than of opposite genders. Experiment 2 found a similar result when the paired associates were pictures and vocalizations of the same vs. different species (dogs and birds). In Experiment 3, gender-incongruent video and audio stimuli were dubbed, producing an artificially unitized stimulus that reduced the congruency advantage. Overall, these results suggest that unitizing multisensory attributes into a single object or identity is a specialized form of associative learning.

Introduction

Learning about objects typically involves the detection and association of multisensory attributes. For example, we may be able to identify certain foods based on their visual, gustatory, tactile, and olfactory properties. Likewise, 'knowing' a person typically means being able to associate his or her face with his or her voice. How do we encode the multisensory properties of objects? One possibility is that such "object knowledge" simply consists of a network of associations among each of an object's unisensory properties. According to this view, our knowledge about unitary objects may depend on the same learning mechanisms as other types of object memory, such as associations between different objects or between objects and other properties of the environments in which they appear. A second possibility is that multiple unisensory object properties are all linked via an intermediate 'supramodal' representation of the object (Mesulam, 1998). According to this view, associating intra-object information is a special class of associative learning, involving the creation of a 'unitized' representation (Cohen, Poldrack, & Eichenbaum, 1997; Eichenbaum, 1997; Eichenbaum & Bunsey, 1995). This view is represented in a number of theories of face recognition, which hold that associating the face and voice of an individual depends on integrating distinct informational streams into a single 'Personal Identity Node', or PIN (Bruce & Young, 1986; Burton, Bruce, & Johnston, 1990; Ellis, Jones, & Mosdell, 1997).

Unitizing multisensory properties may make multisensory object knowledge more efficient, since each observed property of an object can be associated with all other, previously observed, properties via a single link, rather than by maintaining associations among many disparate properties (a simple counting illustration appears at the end of this section). An additional potential advantage of a unitized representation, implicit in the PIN model, is that it may help to organize associations that go beyond specific stimulus-stimulus pairings to more abstract properties of an underlying 'object'. For example, if one has encountered a specific auditory utterance of an individual along with his or her face, it would be advantageous to associate a different utterance by the same individual with that face. Presumably, this depends on extracting 'invariant' properties of the underlying voice from the sample. Representing individual face and voice stimuli as properties of the same underlying individual may facilitate this process.

Despite these potential theoretical advantages of unitization, there has been no direct behavioral support for the idea that multisensory unitization is a specialized form of associative learning. In the current study, we compared associative learning of visual/auditory pairs under conditions where the members of each pair were either likely or unlikely to belong to the same object, by virtue of their membership in the same or a different category. Specifically, we compared face/voice learning when the members of each pair were of the same or opposite gender (Experiment 1) or of the same or different species (Experiment 2). We reasoned that since only congruent pairs are consistent with belonging to the same object (for example, our experience is that people with male faces always have male voices), they would be likely to be unitized and therefore learned more readily than incongruent pairs.
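To make the efficiency argument above concrete, consider a rough illustration; the quantities here are hypothetical and are not drawn from the experiments reported in this paper. Suppose an object has n observed unisensory properties. Maintaining direct pairwise associations requires n(n-1)/2 separate links, so 10 properties would require 45 associations, and each newly encountered property must be linked to every one of the n existing properties. By contrast, linking every property to a single unitized object representation requires only n links in total, and a newly encountered property is connected to all previously observed ones by adding just one link.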

References

Allison, T., et al. (1995). Face-sensitive regions in human extrastriate cortex studied by functional MRI. Journal of Neurophysiology.
Belin, P., Zatorre, R. J., et al. (2000). Voice-selective areas in human auditory cortex. Nature.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology.
Burton, A. M., Bruce, V., & Johnston, R. A. (1990). Understanding face recognition with an interactive activation model. British Journal of Psychology.
Cohen, N. J., Poldrack, R. A., & Eichenbaum, H. (1997). Memory for items and memory for relations in the procedural/declarative memory framework. Memory.
Downes, J., et al. (1997). Theories of organic amnesia. Memory.
Eichenbaum, H. (1997). Declarative memory: Insights from cognitive neurobiology. Annual Review of Psychology.
Eichenbaum, H., & Bunsey, M. (1995). On the binding of associations in memory: Clues from studies on the role of the hippocampal region in paired-associate learning.
Ellis, H. D., Jones, D. M., & Mosdell, N. (1997). Intra- and inter-modal repetition priming of familiar faces and voices. British Journal of Psychology.
Gauthier, I., et al. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience.
Gauthier, I., & Tarr, M. J. (1997). Becoming a "Greeble" expert: Exploring mechanisms for face recognition. Vision Research.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience.
Kiebel, S. J., et al. (2008). Simulation of talking faces in the human brain improves auditory speech recognition. Proceedings of the National Academy of Sciences.
Lewkowicz, D. J. (2010). The ontogeny of human multisensory object perception: A constructivist account.
Mesulam, M. (1998). From sensation to cognition. Brain.
Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology.