Exploring Melodic Motif to Support an Affect-Based Music Compositional Intelligence

Although the design of our constructive adaptive user interface (CAUI) for an affect-based music compositional artificial intelligence has been revised on several fronts since it was introduced, a persistent limitation of our research is the extent to which it covers music theory effectively. This paper reports our initial investigation of the possible significant contribution of melodic motif to creating compositions that are more fluent and cohesive. Starting from an initial collection of 10 melodic motifs drawn from different musical pieces, we produced four heuristic-based renditions of each, yielding a total of 50 melodic motifs. We asked 10 subjects to self-annotate the affective flavor of these motifs. We then represented the motifs as first-order logic predicates and employed inductive logic programming so that the CAUI could learn relations between user affect perceptions and music features. To obtain new compositions, we first used a genetic algorithm, with a fitness function based on the induced relations, for the CAUI to generate chordal-tone variants. We then used probabilistic modifications for the CAUI to alter some of these chordal tones into non-harmonic tones. The CAUI composed 60 new user-specific affect-based musical pieces for each subject. When the subjects' evaluations of the CAUI compositions were compared using paired t-tests, the compositions differed significantly for only one pair of affect types. However, when we compared the subjects' evaluations of the quality of the melodies and of the musical pieces against those obtained when melodic motif variants were not considered, the improvement was significant, with t-values of 5.86 and 6.33, respectively, at a significance level of 0.01.
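The significance comparisons above rest on the standard paired t-test. As a minimal sketch of that computation (the rating vectors below are hypothetical, not the study's data; each pair holds one subject's evaluation under two conditions):

```python
import math

def paired_t(a, b):
    """Paired t-statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject differences a[i] - b[i]."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample variance of the differences (n - 1 in the denominator)
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical 1-5 ratings: with-motif vs. without-motif conditions
with_motif = [5, 4, 5, 3, 4]
without_motif = [3, 3, 4, 2, 3]
t = paired_t(with_motif, without_motif)
```

The resulting t is compared against the critical value of Student's t distribution with n - 1 degrees of freedom at the chosen significance level (0.01 in the paper).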
