In a real-time interactive work for live performer and computer, the inherently human musical expression of the live performer is not easily equalled by algorithmically generated expression in the computer sound. When the computer is expected to display interactivity in the context of improvisation, pre-programmed emulations of expressivity are often no match for the charisma of an experienced improviser. This article proposes to achieve expressivity in computer sound by "stealing" expressivity from the live performer. By capturing, analyzing, and storing expressive characteristics found in the audio signal received from the acoustic instrument, the computer can redeploy those same characteristic expressive sound gestures, either verbatim or with modifications. This can lead to a more balanced sense of interactivity in works for live performer and computer.
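To make the capture-analyze-store-reuse idea concrete, the following is a minimal sketch, not the authors' implementation. It assumes numpy is available, that audio arrives as a mono float array, and that a crude autocorrelation pitch tracker stands in for a production estimator such as YIN. The frame size, hop size, and the Gesture container are illustrative names chosen here, not from the original work.

```python
import numpy as np
from dataclasses import dataclass, field

SR = 44100     # sample rate (Hz); assumed
FRAME = 1024   # analysis frame length; assumed
HOP = 512      # hop between frames; assumed

@dataclass
class Gesture:
    """One stored expressive gesture: per-frame pitch and loudness curves."""
    f0: list = field(default_factory=list)    # fundamental frequency (Hz)
    rms: list = field(default_factory=list)   # loudness envelope

def estimate_f0(frame, sr=SR, fmin=60.0, fmax=1000.0):
    """Crude autocorrelation pitch estimate (stand-in for YIN)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def capture(signal):
    """Analyze the live signal and store its expressive curves."""
    g = Gesture()
    for start in range(0, len(signal) - FRAME, HOP):
        frame = signal[start:start + FRAME]
        g.f0.append(estimate_f0(frame))
        g.rms.append(float(np.sqrt(np.mean(frame ** 2))))
    return g

def replay(gesture, transpose=1.0):
    """Re-synthesize a stored gesture (verbatim or modified) as a sine tone."""
    out, phase = [], 0.0
    for f0, rms in zip(gesture.f0, gesture.rms):
        t = np.arange(HOP) / SR
        out.append(rms * np.sin(2 * np.pi * f0 * transpose * t + phase))
        phase += 2 * np.pi * f0 * transpose * HOP / SR  # keep phase continuous
    return np.concatenate(out)

# Example: capture a vibrato test tone (220 Hz, +/-5 Hz depth at 6 Hz),
# then "steal" its pitch and loudness curves and replay them up a fifth.
t = np.arange(SR) / SR
live = 0.5 * np.sin(2 * np.pi * 220 * t - (5 / 6) * np.cos(2 * np.pi * 6 * t))
stolen = capture(live)
computer_voice = replay(stolen, transpose=1.5)
```

The replay step illustrates the "verbatim or with modifications" point: the stored pitch and loudness curves carry the performer's vibrato and dynamic shape, while a parameter such as the hypothetical transpose factor lets the computer vary the gesture rather than merely echo it.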