A Modular Sound Descriptor Analysis Framework for Relaxed-Real-Time Applications

Audio descriptor analysis, whether performed in real time or in batch, is increasingly important for advanced sound processing, synthesis, and research; many of these applications can adopt a relaxed-real-time approach. Existing systems mostly lack either modularity or flexibility, since designing an efficient, modular descriptor analysis framework for commonly used real-time environments is non-trivial. We first lay out the requirements for such a framework and then describe our modular architecture, which integrates instantaneous descriptors, segmentation modules, temporal modeling, converter modules, and external descriptors or meta-data. We present a proof-of-concept implementation in Max/MSP, together with examples that show how easily new analysis modules can be written.