Interactive Composing as the Expression of Autonomous Machine Motivations

This paper documents a novel model that supports rewarding musical human-machine interaction based on the idea of mutual influence rather than on explicit, scripted interaction protocols. A biologically inspired computational model is proposed, containing networks for listening, playing, and the unsupervised synthesis of autonomous machine motivations. Motivations are assembled from non-linear relationships that interpret external changes and are implemented in a drive object. A drive holds two competing motivations: (1) integration with a human-suggested context and (2) expression of a native character. A population of musical processing functions is evolved online to provide the musical expertise needed to fulfil the system's implicit goal, i.e., integration or expression. The shifting musical distance between consecutive statements by human and machine is tracked over time to derive a fitness measure for the musical processing functions currently in use, reflecting how well they serve the goals implied by both basic motivations. Experiments show that human and machine can develop interesting interaction modes without any a priori specifications; the system develops a dynamic personality from the non-linear dynamics emerging from the networked architecture.
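
To make the described mechanism concrete, the sketch below gives one possible reading of the drive object and the distance-based fitness measure: a drive that balances the competing integration and expression motivations, and a fitness function that scores a musical processing function by how the musical distance between consecutive human and machine statements changed while it was in use. All names (`Drive`, `fitness_from_distance`, the update rule, and the clipping bounds) are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Illustrative sketch only -- class and function names are assumptions
# made for clarity, not the paper's actual implementation.

class Drive:
    """Holds two competing motivations: integration with the human-suggested
    context versus expression of the system's native character."""

    def __init__(self, integration: float = 0.5, expression: float = 0.5):
        self.integration = integration   # weight of the "follow the human" motivation
        self.expression = expression     # weight of the "assert native character" motivation

    def update(self, distance_delta: float, rate: float = 0.1) -> None:
        """Shift the balance of motivations from the change in musical distance.

        A shrinking distance (negative delta) is read as successful integration;
        a growing distance (positive delta) as room for expression (assumed rule).
        """
        self.integration = _clip(self.integration - rate * distance_delta)
        self.expression = _clip(self.expression + rate * distance_delta)


def fitness_from_distance(prev_distance: float, new_distance: float,
                          drive: Drive) -> float:
    """Score a musical processing function from the evolution of the distance
    between consecutive human and machine statements while it was in use.

    Under the integration motivation a decreasing distance is rewarded;
    under the expression motivation an increasing distance is rewarded.
    """
    delta = new_distance - prev_distance
    return drive.integration * (-delta) + drive.expression * delta


def _clip(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))


if __name__ == "__main__":
    drive = Drive()
    # Example: the machine's last statement moved closer to the human's context.
    score = fitness_from_distance(prev_distance=0.8, new_distance=0.5, drive=drive)
    drive.update(distance_delta=0.5 - 0.8)
    print(f"fitness={score:.2f}, integration={drive.integration:.2f}, "
          f"expression={drive.expression:.2f}")
```

In this reading, the same distance signal serves both motivations with opposite signs, which is one way the competing goals of integration and expression could be traded off without any scripted interaction protocol.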