A segmentation algorithm for zebra finch song at the note level

Songbirds have been widely used as a model for studying neuronal circuits that relate to vocal learning and production. An important component of this research relies on quantitative methods for characterizing song acoustics. Song in zebra finches, the most commonly studied songbird species, consists of a sequence of notes, defined as acoustically distinct segments in the song spectrogram. Here, we present an algorithm that exploits the correspondence between note boundaries and rapid changes in overall sound energy to perform an initial automated segmentation of song. The algorithm uses linear fits to short segments of the amplitude envelope to detect sudden changes in song signal amplitude. A variable detection threshold based on average power greatly improves the performance of the algorithm. Automated boundaries and those picked by human observers agree to within 8 ms for >83% of boundaries.
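
The sketch below illustrates the general idea described above, not the authors' exact implementation: short linear fits to the amplitude envelope yield a local slope, and a boundary is declared where the slope is large relative to a threshold scaled by local average power. The function name `segment_notes`, the parameter values, and the use of the Hilbert transform for the envelope and a square-root power scaling for the variable threshold are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import hilbert


def segment_notes(audio, sr, win_ms=2.0, slope_thresh=4.0, power_win_ms=20.0):
    """Hypothetical sketch: detect candidate note boundaries from rapid
    changes in the amplitude envelope.

    Assumed parameters (not from the paper):
      win_ms        -- length of each short segment used for the linear fit
      slope_thresh  -- normalized slope magnitude above which a boundary is declared
      power_win_ms  -- window for the local average power that scales the threshold
    """
    # Amplitude envelope; the Hilbert transform is one common choice (assumption).
    envelope = np.abs(hilbert(audio))

    win = max(int(sr * win_ms / 1000), 2)
    power_win = max(int(sr * power_win_ms / 1000), win)

    boundaries = []
    t = np.arange(win)
    for start in range(0, len(envelope) - win, win):
        seg = envelope[start:start + win]
        # Linear fit to a short piece of the envelope; the slope measures how
        # rapidly the overall sound energy is changing at this point in the song.
        slope, _ = np.polyfit(t, seg, 1)

        # Variable threshold: normalize the slope by the average power in a
        # longer local window so that loud and quiet passages are treated
        # comparably (one plausible reading of "threshold based on average power").
        lo = max(start - power_win // 2, 0)
        local_power = np.mean(envelope[lo:start + power_win // 2] ** 2) + 1e-12

        if abs(slope) / np.sqrt(local_power) > slope_thresh:
            boundaries.append(start / sr)  # candidate boundary time in seconds

    return boundaries
```

In practice, nearby detections would likely be merged and compared against human-labeled boundaries, as in the agreement figures reported above.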