Sounds of the Human Vocal Tract

Previous research suggests that beatboxers only use sounds that exist in the world's languages. This paper provides evidence to the contrary, showing that beatboxers use non-linguistic articulations and airstream mechanisms to produce many sound effects that have not been attested in any language. An analysis of real-time magnetic resonance videos of beatboxing reveals that beatboxers produce non-linguistic articulations such as ingressive retroflex trills and ingressive lateral bilabial trills. In addition, beatboxers can use both lingual egressive and pulmonic ingressive airstreams, neither of which has been reported in any language. The results of this study advance our understanding of the limits of the human vocal tract and address questions about the mental units that encode music and phonological grammar.
