Flow Synthesizer: Universal Audio Synthesizer Control with Normalizing Flows

The ubiquity of sound synthesizers has reshaped modern music production, and some novel music genres are even entirely defined by their use. However, the increasing complexity and number of parameters in modern synthesizers make them extremely hard to master. Hence, developing methods that let users easily create and explore sounds with synthesizers is a crucial need. Recently, we introduced a novel formulation of audio synthesizer control based on learning an organized latent audio space of the synthesizer's capabilities, while constructing an invertible mapping to the space of its parameters. We showed that this formulation simultaneously addresses automatic parameter inference, macro-control learning, and audio-based preset exploration within a single model, and that it can be efficiently implemented by relying on Variational Auto-Encoders (VAE) and Normalizing Flows (NF). In this paper, we extend our results by evaluating our proposal on larger sets of parameters and show its superiority over various baseline models in both parameter inference and audio reconstruction. Furthermore, we introduce disentangling flows, which learn the invertible mapping between two separate latent spaces while steering the organization of some latent dimensions to match target variation factors, by splitting the objective into partial density evaluations. We show that the model disentangles the major factors of audio variation as latent dimensions, which can be directly used as macro-parameters. We also show that our model is able to learn semantic controls of a synthesizer while smoothly mapping to its parameters. Finally, we introduce an open-source implementation of our models inside a real-time Max4Live device that is readily available for evaluating creative applications of our proposal.
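To make the overall formulation concrete, here is a minimal sketch in PyTorch of the two ingredients the abstract names: a VAE encoder that maps audio features to an organized latent audio space, and an invertible flow that maps that latent space to the synthesizer parameter space. The layer sizes, the 16-dimensional latent/parameter space, and the single affine coupling layer are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (assumed dimensions and a single affine coupling layer;
# not the paper's exact architecture).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling: an invertible map between latent and parameter spaces."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t                       # affine transform of the second half
        return torch.cat([x1, y2], dim=-1), log_s.sum(-1)    # output and log|det Jacobian|

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        x2 = (y2 - t) * torch.exp(-log_s)                    # exact inverse of the affine transform
        return torch.cat([y1, x2], dim=-1)

class Encoder(nn.Module):
    """VAE encoder: audio features -> Gaussian posterior over the latent audio space."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

# Hypothetical sizes: 512-bin spectral frame, 16-D latent space matching 16 synth parameters.
enc = Encoder(in_dim=512, z_dim=16)
flow = AffineCoupling(dim=16)

x = torch.randn(8, 512)        # batch of audio features
z, mu, logvar = enc(x)         # organized latent audio space
v, logdet = flow(z)            # parameter inference: latent point -> synth parameters
z_back = flow.inverse(v)       # preset exploration: synth parameters -> latent point
assert torch.allclose(z, z_back, atol=1e-4)
```

Because the flow is invertible, the same trained model runs in both directions: audio to parameters for automatic inference, and parameters (presets) back to the latent space for exploration and macro-control, which is the property the abstract relies on.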
