Grammar Argumented LSTM Neural Networks with Note-Level Encoding for Music Composition

Creating aesthetically pleasing works of art, such as music, has long been a dream of artificial intelligence research. Building on the recent success of long short-term memory (LSTM) networks in sequence learning, we propose a system designed to reflect the thinking pattern of a musician. For data representation, we introduce a note-level encoding method that enables the model to simulate how a human composes and polishes musical phrases. To keep the generated music from violating basic music theory, we introduce a grammar argumented (GA) method that teaches the model elementary composition principles: three rules serve as the argumented grammars, and three corresponding metrics are used to evaluate the machine-generated music. Results show that, compared with the basic LSTM, the grammar argumented model's compositions contain higher proportions of diatonic scale notes, short pitch intervals, and chords.
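
To give a rough sense of what note-level encoding and next-note prediction with an LSTM could look like, the sketch below one-hot encodes MIDI pitch numbers and trains a small Keras LSTM to predict the following note. The pitch vocabulary, window length, and network sizes are illustrative assumptions, not the configuration used in the paper, and the grammar argumented rules and evaluation metrics are not modeled here.

```python
# Minimal sketch of note-level encoding + LSTM next-note prediction.
# Assumptions (not from the paper): 128 MIDI pitches as the vocabulary,
# a fixed 32-note context window, and an arbitrary network size.
import numpy as np
import tensorflow as tf

NUM_PITCHES = 128      # assumed vocabulary: MIDI pitch numbers 0-127
SEQ_LEN = 32           # assumed context window of 32 notes

def one_hot_notes(pitches):
    """Note-level encoding: each note becomes a one-hot vector over pitches."""
    encoded = np.zeros((len(pitches), NUM_PITCHES), dtype=np.float32)
    encoded[np.arange(len(pitches)), pitches] = 1.0
    return encoded

def make_training_pairs(pitches):
    """Slide a window over a melody: SEQ_LEN notes in, the next note out."""
    xs, ys = [], []
    for i in range(len(pitches) - SEQ_LEN):
        xs.append(one_hot_notes(pitches[i:i + SEQ_LEN]))
        ys.append(pitches[i + SEQ_LEN])
    return np.stack(xs), np.array(ys)

# A small LSTM that outputs a distribution over the next note.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(256, input_shape=(SEQ_LEN, NUM_PITCHES)),
    tf.keras.layers.Dense(NUM_PITCHES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy melody: random pitches stand in for real training data.
melody = np.random.randint(48, 84, size=500)
x_train, y_train = make_training_pairs(melody)
model.fit(x_train, y_train, epochs=1, batch_size=64)
```

In such a setup, a grammar-based step could in principle check sampled continuations against simple rules (e.g., preferring diatonic notes or small pitch intervals) before accepting them, but the paper's actual three rules and metrics are not specified in the abstract and are not reproduced here.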
