Technological Advances for Conducting a Virtual Ensemble

This paper describes recent advances in our Interactive Virtual Ensemble project, which aims to simulate the response of an ensemble of human performers to more-or-less standard conducting gestures. Over the past year we have added several new components to the system, primarily in two areas: tracking/recognition and sound synthesis. The system now uses a wireless MotionStar tracker and a distributed communication model. For tracking, we use a hybrid beat detection and classification system that incorporates neural-net processing for both beat prediction and beat classification. The sound synthesis component uses dynamic control of an analysis-based additive synthesis model.
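To illustrate the kind of additive synthesis model referred to above, the following is a minimal sketch, not the paper's actual implementation: it renders a sum of sinusoidal partials whose amplitudes follow breakpoint envelopes, the sort of per-partial data an analysis stage would supply and that a controller could scale dynamically. The function name, the envelope format, and the example harmonics are all illustrative assumptions.

```python
import numpy as np

def additive_synth(partials, duration, sr=44100):
    """Render a sum of sinusoidal partials with time-varying amplitudes.

    partials: list of (frequency_hz, amplitude_envelope) pairs, where each
    envelope is a sequence of breakpoint amplitudes spread evenly over the
    note's duration. (Illustrative format, not the project's actual one.)
    """
    n = int(duration * sr)
    t = np.arange(n) / sr
    out = np.zeros(n)
    for freq, env in partials:
        # Interpolate the breakpoint envelope to one amplitude per sample,
        # giving each partial an independent time-varying loudness.
        amp = np.interp(np.linspace(0.0, 1.0, n),
                        np.linspace(0.0, 1.0, len(env)), env)
        out += amp * np.sin(2.0 * np.pi * freq * t)
    return out

# Three harmonics of A4, each with a simple attack-decay envelope.
tone = additive_synth([(440.0, [0.0, 1.0, 0.3, 0.0]),
                       (880.0, [0.0, 0.5, 0.1, 0.0]),
                       (1320.0, [0.0, 0.25, 0.05, 0.0])],
                      duration=1.0)
```

Dynamic control in such a scheme amounts to modifying the envelope data (or a global scaling of it) in real time in response to the conductor's gestures, rather than re-synthesizing from fixed parameters.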