Using Music to Interact with a Virtual Character

We present a real-time system that allows musicians to interact with synthetic virtual characters as they perform. Max/MSP is used to parameterize keyboard and vocal input, extracting meaningful features (pitch, amplitude, chord information, and vocal timbre) from the live performance in real time. These features are then mapped to character behaviour so that the musician's performance elicits a response from the virtual character. The system uses the ANIMUS framework to generate believable character expressions. Experimental results are presented for simple characters.
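To make the feature-to-behaviour mapping concrete, the sketch below shows one plausible shape for such a mapping in Python. The actual system extracts features in Max/MSP and drives characters through the ANIMUS framework; the feature set follows the abstract, but all field names, weights, and the arousal/valence behaviour parameters here are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class MusicalFeatures:
    """Features of the kind the abstract lists, extracted per analysis frame."""
    pitch_hz: float      # fundamental frequency of the performed note
    amplitude: float     # normalised loudness in [0, 1]
    chord_is_major: bool # simplified stand-in for chord information
    brightness: float    # vocal timbre descriptor (e.g. spectral centroid), in [0, 1]


@dataclass
class CharacterState:
    """Hypothetical behaviour parameters driving the character's animation."""
    arousal: float  # how energetic the character's movement is, in [0, 1]
    valence: float  # how positive the character's expression is, in [0, 1]


def map_features_to_behaviour(f: MusicalFeatures) -> CharacterState:
    # Louder, brighter input makes the character more animated.
    arousal = min(1.0, 0.6 * f.amplitude + 0.4 * f.brightness)
    # Major harmony nudges the expression positive, minor nudges it negative,
    # scaled by how forcefully the musician is playing.
    shift = (0.4 if f.chord_is_major else -0.4) * f.amplitude
    valence = max(0.0, min(1.0, 0.5 + shift))
    return CharacterState(arousal=arousal, valence=valence)


# Example: a loud, bright major chord yields an energetic, positive response.
frame = MusicalFeatures(pitch_hz=261.6, amplitude=0.9, chord_is_major=True, brightness=0.7)
print(map_features_to_behaviour(frame))
```

In a running system a mapping like this would be evaluated on every analysis frame, with the resulting behaviour parameters smoothed over time before being handed to the animation layer, so the character responds to musical gestures rather than to individual frames.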