FTM - Complex Data Structures for Max

This article presents FTM, a shared library and a set of modules extending the Max/MSP environment, and briefly describes additional sets of modules built on FTM. It particularly addresses researchers and musicians familiar with Max or Max-like programming environments such as Pure Data. FTM extends the signal and message data-flow paradigm of Max by permitting the representation and processing of complex data structures such as matrices, sequences, and dictionaries, as well as tuples, MIDI events, and score elements (notes, rests, trills, etc.). The consistent integration of references to complex data structures into the Max/MSP data flow gives the user new possibilities in terms of powerful and efficient data representations and the modularization of applications. FTM is the basis of several sets of Max/MSP modules specialized in score following, sound analysis/re-synthesis, statistical modeling, and data bank access. Designed for particular applications in automatic accompaniment, advanced sound processing, and gestural analysis, these libraries share a common set of basic FTM data structures; they are fully interoperable and integrate smoothly into the modular programming paradigm of the host environment Max/MSP.
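The key idea above, passing references to complex, mutable containers through a data-flow graph instead of copying flat message lists, can be sketched in a few lines. This is a conceptual illustration only, not FTM's actual API (FTM is a C library with Max externals); all class and method names here are hypothetical stand-ins, with `FMat` loosely inspired by FTM's matrix class.

```python
# Conceptual sketch: modules in a data-flow graph exchange a *reference*
# to one shared container object, as FTM-style patches do in Max/MSP.
# Names (FMat, ScaleModule, Sink) are hypothetical, not FTM's real API.

class FMat:
    """A minimal float-matrix container (stand-in for an FTM matrix)."""
    def __init__(self, rows, cols, fill=0.0):
        self.rows, self.cols = rows, cols
        self.data = [[fill] * cols for _ in range(rows)]

    def get(self, i, j):
        return self.data[i][j]

    def set(self, i, j, value):
        self.data[i][j] = value


class ScaleModule:
    """A data-flow node: scales the matrix in place, then forwards the
    *same* reference downstream -- no copy of the data is made."""
    def __init__(self, factor, downstream=None):
        self.factor = factor
        self.downstream = downstream

    def receive(self, mat):
        for i in range(mat.rows):
            for j in range(mat.cols):
                mat.set(i, j, mat.get(i, j) * self.factor)
        if self.downstream is not None:
            self.downstream.receive(mat)


class Sink:
    """Terminal node that records what arrives."""
    def __init__(self):
        self.received = []

    def receive(self, mat):
        self.received.append(mat)


# Two connected modules share one matrix by reference:
m = FMat(2, 2, fill=1.0)
sink = Sink()
graph = ScaleModule(3.0, downstream=sink)
graph.receive(m)

assert sink.received[0] is m   # the same object travelled the graph
assert m.get(0, 0) == 3.0      # it was mutated in place, not copied
```

The design point this mirrors is the one the abstract makes: because only a reference flows between modules, large structures (matrices, sequences, dictionaries) can be built by one module and processed by others efficiently, without serializing them into Max's flat message format.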
