Pulsed Melodic Processing - The Use of Melodies in Affective Computations for Increased Processing Transparency

Pulsed Melodic Processing (PMP) is a computation protocol that uses musically based pulse sets ("melodies") for processing, capable of representing the arousal and valence of affective states. Affective processing and affective input/output are key tools in artificial intelligence and computing. In designing processing elements (e.g. bits, bytes, floats), engineers have focused primarily on processing efficiency and power, and only afterwards investigated ways of making those elements perceivable by the user/engineer. However, Human-Computer Interaction research, and the increasing pervasiveness of computation in our daily lives, supports a complementary approach in which computational efficiency and power are balanced against understandability to the user/engineer. PMP allows a user to tap into the processing path and hear a sample of what is going on in an affective computation, and it provides a simpler way to interface with affective input/output systems. This requires developing new approaches to processing and to interfacing PMP-based modules. In this chapter we introduce PMP and examine the approach using three examples: a military robot team simulation with an affective subsystem, a text affective-content estimation system, and a stock market tool.
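To make the idea of melodies carrying affective state concrete, the following is a minimal illustrative sketch, assuming (as a simplification) that pulse rate encodes arousal and major-versus-minor pitch content encodes valence; all function and parameter names here are hypothetical, not part of any published PMP implementation.

```python
import random

# Hypothetical sketch: encode an affective state (valence, arousal in [0, 1])
# as a "melody" -- a list of (midi_pitch, onset_time) pulses -- and decode
# an estimate of the state back from such a pulse stream.

MAJOR = [60, 62, 64, 65, 67, 69, 71]   # C major scale (MIDI pitches)
MINOR = [60, 62, 63, 65, 67, 68, 70]   # C natural minor scale

def affect_to_melody(valence, arousal, n_pulses=8, seed=0):
    """Assumed mapping: higher arousal -> faster pulses; low valence -> minor scale."""
    rng = random.Random(seed)
    scale = MAJOR if valence >= 0.5 else MINOR
    ioi = 1.0 - 0.8 * arousal            # inter-onset interval in seconds
    return [(rng.choice(scale), i * ioi) for i in range(n_pulses)]

def melody_to_affect(melody):
    """Decode an estimate of (valence, arousal) from a pulse stream."""
    pitches = [p for p, _ in melody]
    onsets = [t for _, t in melody]
    ioi = (onsets[-1] - onsets[0]) / max(len(onsets) - 1, 1)
    arousal = (1.0 - ioi) / 0.8
    # Pitch classes Eb, Ab, Bb (3, 8, 10) occur only in the minor scale here.
    minor_marks = sum(p % 12 in (3, 8, 10) for p in pitches)
    valence = 0.0 if minor_marks else 1.0
    return valence, arousal
```

A module tapped mid-computation could simply play such a stream aloud: a listener hears high arousal as fast pulses and negative valence as minor-mode pitches, which is the transparency benefit the abstract describes.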
