Segregating two simultaneous sounds in elevation using temporal envelope: Human psychophysics and a physiological model.

The ability to segregate simultaneous sound sources based on their spatial locations is an important aspect of auditory scene analysis. While the role of sound azimuth in segregation is well studied, the contribution of sound elevation remains poorly understood. Although previous studies in humans suggest that elevation cues alone are not sufficient to segregate simultaneous broadband sources, the current study demonstrates that they can suffice. Listeners segregating a temporally modulated noise target from a simultaneous unmodulated noise distracter differing in elevation fall into two statistically distinct groups: one that identifies target direction accurately across a wide range of modulation frequencies (MFs), and one that cannot identify target direction accurately and, on average, reports the direction opposite that of the target at low MFs. A non-spiking model of inferior colliculus neurons that process single-source elevation cues suggests that the performance of both listener groups at the population level can be accounted for by the balance of excitatory and inhibitory inputs in the model. These results establish the potential for broadband elevation cues to contribute to the computations underlying sound source segregation and suggest a candidate mechanism for this contribution.
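The model is only summarized above; the following minimal Python sketch illustrates the general idea of a non-spiking, rate-based inferior colliculus unit in which the balance between an excitatory drive (carrying the modulated target's envelope) and a broadband inhibitory drive (carrying the summed target-plus-distracter energy) determines the output. The function name ic_unit_rate, the envelope construction, and all weight values are illustrative assumptions, not the model or parameters used in the study.

```python
import numpy as np

# Hypothetical sketch (not the paper's actual model): a non-spiking,
# rate-based inferior-colliculus-like unit whose output is the half-wave
# rectified difference between an excitatory drive (target envelope in the
# unit's best band) and a broadband inhibitory drive (summed target +
# distracter envelope). The excitation/inhibition weight ratio is the free
# parameter assumed here to distinguish the two listener groups.

def ic_unit_rate(target_env, mixture_env, w_exc=1.0, w_inh=0.8):
    """Instantaneous output of one model unit.
    target_env  : envelope of the modulated target (excitatory drive)
    mixture_env : broadband envelope of target + distracter (inhibitory drive)
    w_exc, w_inh: excitatory / inhibitory weights (illustrative values)
    """
    drive = w_exc * target_env - w_inh * mixture_env
    return np.maximum(drive, 0.0)  # non-spiking, rectified rate

# Illustrative stimulus: 1-s envelopes at a low modulation frequency.
fs, dur, mf = 1000, 1.0, 4.0                      # Hz, s, Hz (assumed values)
t = np.arange(0.0, dur, 1.0 / fs)
target_env = 1.0 + np.sin(2 * np.pi * mf * t)     # modulated target
distract_env = np.ones_like(t)                    # unmodulated distracter
mixture_env = target_env + distract_env

# Two hypothetical "listener groups" modeled as different E/I balances.
weak_inhibition = ic_unit_rate(target_env, mixture_env, w_exc=1.0, w_inh=0.4)
strong_inhibition = ic_unit_rate(target_env, mixture_env, w_exc=1.0, w_inh=0.9)
print(weak_inhibition.mean(), strong_inhibition.mean())
```

Under these assumed weights, the weakly inhibited unit retains the target's modulation in its rectified output, whereas the strongly inhibited unit is driven toward zero; this loosely parallels, but does not reproduce, the group-level differences described in the abstract.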
