Stream segregation with high spatial acuity.

Spatial hearing is widely regarded as helpful in recognizing a sound amid other competing sounds. It is a matter of debate, however, whether spatial cues contribute to "stream segregation," which refers to the specific task of assigning multiple interleaved sequences of sounds to their respective sources. The present study employed "rhythmic masking release" as a measure of the spatial acuity of stream segregation. Listeners discriminated between rhythms of noise-burst sequences presented from free-field targets in the presence of interleaved maskers that varied in location. For broadband sounds in the horizontal plane, target-masker separations of ≥8° permitted rhythm discrimination with d' ≥ 1; in some cases, such thresholds approached listeners' minimum audible angles. Thresholds were the same for low-frequency sounds but were substantially wider for high-frequency sounds, suggesting that interaural delays provided higher spatial acuity in this task than did interaural level differences. In the vertical midline, performance varied dramatically as a function of noise-burst duration with median thresholds ranging from >30° for 10-ms bursts to 7.1° for 40-ms bursts. A marked dissociation between minimum audible angles and masking release thresholds across the various pass-band and burst-duration conditions suggests that location discrimination and spatial stream segregation are mediated by distinct auditory mechanisms.
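
The abstract expresses rhythm-discrimination performance as a sensitivity index d' and defines the masking-release threshold as the target-masker separation at which d' reaches 1. As a minimal sketch of how such numbers could be derived, the Python snippet below computes d' from hit and false-alarm rates using the standard signal-detection formula d' = z(H) - z(F) and interpolates the separation yielding d' = 1. The example data, the clipping of extreme rates, and the linear interpolation are illustrative assumptions, not the authors' actual analysis procedure.

```python
# Sketch (not the authors' analysis code): d' from a yes/no rhythm-discrimination
# task, and the target-masker separation at which d' first reaches 1.
# The hit/false-alarm rates below are hypothetical.
import numpy as np
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    # Clip rates away from 0 and 1 so the z-transform stays finite.
    h = np.clip(hit_rate, 0.01, 0.99)
    f = np.clip(false_alarm_rate, 0.01, 0.99)
    return norm.ppf(h) - norm.ppf(f)

def threshold_separation(separations_deg, d_primes, criterion=1.0):
    """Linearly interpolate the separation (deg) at which d' reaches the criterion."""
    return float(np.interp(criterion, d_primes, separations_deg))

# Hypothetical performance at several target-masker separations (degrees).
seps = np.array([2, 4, 8, 16, 32])
hits = np.array([0.55, 0.65, 0.80, 0.92, 0.97])
fas  = np.array([0.45, 0.40, 0.30, 0.15, 0.08])

dps = np.array([d_prime(h, f) for h, f in zip(hits, fas)])
print("d' at each separation:", np.round(dps, 2))
print("Threshold (d' = 1):", round(threshold_separation(seps, dps), 1), "deg")
```

With these made-up rates, the interpolated threshold falls near 6 degrees, illustrating how a separation-by-separation d' curve maps onto the single threshold values reported in the abstract.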
