Reproducing Musicality: Immediate Human-like Musicality Through Machine Learning and Passing the Turing Test

Musicology is a growing focus in computer science. Past research has succeeded in automatically generating music, both through learning-based agents built on neural networks and through model- and rule-based approaches. These methods require a significant amount of information, either as a large dataset for learning or as a comprehensive set of rules grounded in musical concepts. This paper explores a model in which only a minimal amount of musical information is needed to compose music in a desired style. The model draws on two concepts: objectness and evolutionary computation. Objectness, an idea taken directly from image and pattern recognition, is used to extract specific musical objects from single musical inputs; these objects then serve as the foundation for algorithmically producing musical pieces that are similar in style to the original inputs. The pieces are generated by evolutionary algorithms that implement a sequential evolution approach, in which a generated output may or may not yet lie fully within the fitness thresholds of the input pieces. This method eliminates the need for large amounts of pre-provided data as well as the long processing times commonly associated with machine-learned art. This study aims to provide a proof of concept of the described model.
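
As a rough illustration of the kind of pipeline the abstract describes (and not the authors' actual implementation), the Python sketch below treats a short pitch motif as an extracted musical object, scores candidate melodies by the edit distance between their interval profiles and the object's, and evolves a population until a candidate falls within a fitness threshold. The motif, the fitness measure, and all parameter values are assumptions made for this sketch.

```python
import random

# Hypothetical sketch: evolve a melody (a list of MIDI pitches) toward the
# interval profile of a short "musical object" extracted from a single input.
# Representation, fitness, and parameters are assumptions, not the paper's.

def intervals(melody):
    """Semitone steps between consecutive pitches."""
    return [b - a for a, b in zip(melody, melody[1:])]

def levenshtein(a, b):
    """Edit distance between two interval sequences (smaller = more similar)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def fitness(candidate, target_object):
    """Distance of a candidate's interval profile from the extracted object's."""
    return levenshtein(intervals(candidate), intervals(target_object))

def mutate(melody, low=48, high=84):
    """Randomly nudge one note within a plausible pitch range."""
    out = melody[:]
    i = random.randrange(len(out))
    out[i] = max(low, min(high, out[i] + random.choice([-2, -1, 1, 2])))
    return out

def evolve(target_object, length=16, pop_size=30, threshold=2, max_gens=2000):
    """Sequential evolution: iterate until some candidate falls within the
    fitness threshold of the input object, then return it."""
    population = [[random.randint(48, 84) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(max_gens):
        population.sort(key=lambda m: fitness(m, target_object))
        best = population[0]
        if fitness(best, target_object) <= threshold:
            return best                       # within threshold: accept output
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population[0]                      # best effort after max_gens

if __name__ == "__main__":
    # Assume this motif was extracted from a single input piece.
    extracted_object = [60, 62, 64, 67, 65, 64, 62, 60]
    print(evolve(extracted_object))
```

The threshold-based stopping rule here mirrors the sequential-evolution idea of accepting an output once it is close enough to the extracted object, rather than learning a style from a large corpus.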
