Affective Audio Synthesis for Sound Experience Enhancement

With the advance of technology, multimedia has become a recurring and prominent component of almost all forms of communication. Although its content spans various categories, two prominent channels are used for conveying information: audio and visual. The former can carry a wide range of content, from low-level characteristics (e.g. the spatial location of a source and the type of sound-producing mechanism) to high-level, contextual information (e.g. emotion). Additionally, recently published results demonstrate the feasibility of automated sound synthesis, e.g. of music and sound events. Based on the above, in this chapter the authors propose integrating emotion recognition from sound with automated synthesis techniques. Such a task will enhance, on the one hand, the process of computer-driven creation of sound content by adding an anthropocentric factor (i.e. emotion) and, on the other, the experience of the multimedia user by offering an extra constituent that intensifies immersion and the overall level of user experience.
