Cooing, Crying, and Babbling: A Link between Music and Prelinguistic Communication

Michael Byrd, Casady Bowman, and Takashi Yamauchi
(mybrd@neo.tamu.edu, casadyb@neo.tamu.edu, takashi-yamauchi@tamu.edu)
Department of Psychology, Mail Stop 4235, Texas A&M University

Keywords: language; music

Introduction

Infants use a variety of vocal sounds, such as cooing, babbling, crying, and laughing, to express their emotions. Infants' prelinguistic vocal communications are highly affective in the sense that they evoke specific emotions (happiness, frustration, anger, hunger, and/or joy) without conveying concrete ideas. In this sense, infants' vocal communication parallels music: music is highly affective, yet conceptually limited (Cross, 2005; Ross, 2009).

The interaction between music and language has attracted much attention recently (Chen-Haffteck, 2011; Cross, 2001; Masataka, 2007). However, despite their similarities, little attention has been paid to the relationship between music and prelinguistic vocalizations (Chen-Haffteck, 2011; Cross, 2001; He, Hotson, & Trainor, 2007; Masataka, 2007). If music and language are closely related, what is the relationship between infants' vocal communications, such as babbling, and music?

In the study described below, we analyze acoustic cues of infants' vocalizations and demonstrate that the emotions conveyed by prelinguistic vocalization can be explained to a large extent by the acoustic cues that differentiate the timbres of musical instruments, suggesting that the same mental processes underlie the perception of musical timbres and the perception of infants' vocalizations. The paper is organized as follows: we first review related work examining the link between prelinguistic vocalization and music.

Infants begin life with the ability to make different sounds: first cooing and crying, then babbling. Next they form one word, then two, followed by full sentences and speech.
In the first ten months, infants progress from simple sounds that are not expressed in the phonetic alphabet to babbling, an important step in learning how to speak (Gros-Louis, West, Goldstein, & King, 2006; Oller, 2000).

Musical instruments and infants' vocalizations both elicit emotional responses while conveying little information about what the sender is trying to express. Music can have a powerful effect on its listeners; most of us know a piece of music that brings back strong emotions. Music can convey at least three universal emotions: happiness, sadness, and fear (Fritz et al., 2009). These emotions are similar to those expressed by infants with their limited sounds (Dessureau, Kurowski, & Thompson, 1998; Zeifman, 2001; Zeskind & Marshall, 1998). Both infants and music convey meaning without the use of words. Infants rely on their voices and non-verbal, non-word sounds to communicate, and it is these sounds that inform the listener of how urgent the situation is and what kind of distress the infant is facing, such as being too cold, hungry, or left alone (Dessureau et al., 1998; Zeifman, 2001; Zeskind & Marshall, 1998).

Across cultures, songs sung while playing with babies are fast, high in pitch, and contain exaggerated rhythmic accents, whereas lullabies are lower, slower, and softer. Infants use cues in both music and language to learn the rules of a culture. Motherese, a form of speech adults use when interacting with infants, often consists of singing to infants in a musical, sing-song voice that mimics babies' cooing by using a higher pitch. An infant's caregiver will use a higher pitch when speaking to an infant, as it helps the infant learn and also draws the infant's attention (Fernald et al., 1989).

In summary, research shows that there is a close link between infants' vocal communication and music.
This link is demonstrated through the babbling and cooing sounds infants use to communicate, and also by mothers' use of motherese, in a sing-song manner, to assist infants' learning of language. Infants are able to use the same cues
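The analysis the paper describes rests on acoustic cues that differentiate musical timbres. One standard such cue is the spectral centroid, the amplitude-weighted mean frequency of a sound, which correlates with perceived "brightness". The following is a minimal pure-Python sketch of that cue, not the authors' actual feature-extraction pipeline (which used the MIRtoolbox); the 437.5 Hz test tone and 8 kHz sampling rate are illustrative assumptions:

```python
import math

def spectral_centroid(signal, sample_rate):
    """Spectral centroid in Hz: the amplitude-weighted mean frequency,
    a common acoustic correlate of timbral 'brightness'."""
    n = len(signal)
    mags, freqs = [], []
    # Naive DFT magnitudes for bins up to the Nyquist frequency.
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
        freqs.append(k * sample_rate / n)
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# A pure tone aligned to a DFT bin: its centroid equals its frequency.
sr, n = 8000, 512
tone = [math.sin(2 * math.pi * 437.5 * t / sr) for t in range(n)]
print(round(spectral_centroid(tone, sr), 1))  # → 437.5
```

Adding energy at higher harmonics raises the centroid, which is the intuition behind using it to separate "bright" timbres (and vocalizations) from "dull" ones.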
References

[1] Chao He et al., "Mismatch Responses to Pitch Changes in Early Infancy", Journal of Cognitive Neuroscience, 2007.
[2] Noam Chomsky et al., "The faculty of language: what is it, who has it, and how did it evolve?", Science, 2002.
[3] W. Strange, "Evolution of language", JAMA, 1984.
[4] I. Peretz et al., "Universal Recognition of Three Basic Emotions in Music", Current Biology, 2009.
[5] Petri Toiviainen et al., "A Matlab Toolbox for Music Information Retrieval", GfKl, 2007.
[6] O. Lartillot et al., "A Matlab Toolbox for Musical Feature Extraction from Audio", 2007.
[7] Petri Toiviainen et al., "MIR in Matlab (II): A Toolbox for Musical Feature Extraction from Audio", ISMIR, 2007.
[8] P. Juslin et al., "Cue Utilization in Communication of Emotion in Music Performance: Relating Performance to Perception Studies of Music Performance", 2022.
[9] N. Masataka, "Music, evolution and language", Developmental Science, 2007.
[10] M. Kenward et al., "It's not what you play, it's how you play it: Timbre affects perception of emotion in music", Quarterly Journal of Experimental Psychology, 2009.
[11] H. Helmholtz et al., "On the Sensations of Tone as a Physiological Basis for the Theory of Music", 2005.
[12] Denise Brandão de Oliveira e Britto et al., "The faculty of language", 2007.
[13] Michael O'Neill et al., "The Use of Mel-frequency Cepstral Coefficients in Musical Instrument Identification", ICMC, 2008.
[14] Nicholas S. Thompson et al., "A reassessment of the role of pitch and duration in adults' responses to infant crying", 1998.
[15] S. McAdams et al., "Perception of timbral analogies", Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, 1992.
[16] A. Fernald et al., "Intonation and communicative intent in mothers' speech to infants: is the melody the message?", Child Development, 1989.
[17] Beth Logan et al., "Adaptive model-based speech enhancement", Speech Communication, 2001.
[18] P. Juslin et al., "Emotional Expression in Music Performance: Between the Performer's Intention and the Listener's Experience", 1996.
[19] R. Plomp et al., "Tonal consonance and critical bandwidth", The Journal of the Acoustical Society of America, 1965.
[20] I. Cross, "Music and meaning, ambiguity and evolution", 2005.
[21] P. S. Zeskind et al., "The Relation between Variations in Pitch and Maternal Perceptions of Infant Crying", 1988.
[22] P. Johnson-Laird et al., "The language of emotions: An analysis of a semantic field", 2013.
[23] S. Koelsch, "Neural substrates of processing syntax and semantics in music", Current Opinion in Neurobiology, 2005.
[24] A. R. Chase et al., "Music discriminations by carp (Cyprinus carpio)", 2001.
[25] D. Oller, "The emergence of the speech capacity", 2000.
[26] I. Cross, "Music, Mind and Evolution", 2001.
[27] Michael Klingbeil et al., "Software for Spectral Analysis, Editing, and Synthesis", ICMC, 2005.
[28] Barry Ross, "Challenges facing theories of music and language co-evolution", 2009.
[29] P. Ekman, "Are there basic emotions?", Psychological Review, 1992.
[30] Andrew P. King et al., "Mothers provide differential feedback to infants' prelinguistic sounds", 2006.