An integrated system for analysis-modification-resynthesis of singing

This work addresses the problem of modifying a recorded sung performance as automatically as possible. Different levels of sound and performance analysis are integrated in a single framework, which makes it possible to characterize the same performance from different points of view (acoustic parameters and musical structures). Based on these analyses, musically relevant transformations and non-trivial audio effects can then be performed, such as expressive variation, pitch correction, and vibrato processing. The system should be seen as an environment for developing and applying complex transformation algorithms to sung performances, without the need to manually analyze, process, and resynthesize the sound. The sound analysis and transformation stages are based on the sinusoidal+residual model of sound. To achieve faster analysis, higher-quality processing, and better automation, this technique has been improved, at least in the analysis stage, by exploiting the quasi-harmonic and monophonic nature of the considered class of signals.
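To make the last point concrete, the following is a minimal sketch, not the system described here, of how a known fundamental frequency can constrain sinusoidal peak picking in a single spectral frame; the function name, parameters, and search width are illustrative assumptions only:

import numpy as np
from scipy.signal import get_window

def harmonic_peaks(frame, sr, f0, n_harmonics=20, search_cents=50):
    """Locate spectral peaks near integer multiples of a known f0.

    Illustrative sketch only: a full sinusoidal+residual analyzer
    would add parabolic peak interpolation, phase tracking across
    frames, and subtraction of the sinusoids to obtain the residual.
    """
    n = len(frame)
    win = get_window("blackmanharris", n)
    spec = np.abs(np.fft.rfft(frame * win))
    freqs = np.fft.rfftfreq(n, 1.0 / sr)
    peaks = []
    for h in range(1, n_harmonics + 1):
        target = h * f0
        if target >= sr / 2:
            break
        # Search only a narrow band around the predicted harmonic,
        # e.g. +/- search_cents around h * f0.
        lo = target * 2.0 ** (-search_cents / 1200.0)
        hi = target * 2.0 ** (search_cents / 1200.0)
        band = np.where((freqs >= lo) & (freqs <= hi))[0]
        if band.size == 0:
            continue
        k = band[np.argmax(spec[band])]
        peaks.append((freqs[k], spec[k]))
    return peaks

Restricting the search to narrow bands around the predicted harmonics, which is only valid for quasi-harmonic, monophonic material such as a solo singing voice, is what allows the analysis to be both faster and more robust than unconstrained peak picking over the whole spectrum.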