Realtime Dynamic Multimedia Storyline Based on Online Audience Biometric Information

Complete audience immersion remains the ultimate goal of the multimedia industry. Despite significant audiovisual advances that enable increasingly realistic content, adapting to individual audience needs and desires is still an open problem. The proposed project aims to address this issue by enabling real-time dynamic multimedia storylines driven by subconscious emotional audience interaction. Individual emotional state is assessed through direct access to online biometric information. Recent technological breakthroughs have made available minimally invasive biometric hardware devices that no longer interfere with the audience's sense of immersion. Another key module of the project is a dynamic-storyline multimedia content system with emotional metadata, responsible for enabling discrete or continuous storyline route options. The unifying component is the definition of the full-duplex communication protocol. The current stage of research has already produced a spin-off product that provides computer mouse control through electromyography, and experiments conducted on the developed system's architecture have identified key factors in human emotion, enabling semi-automatic emotion assessment.
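As a minimal sketch of the discrete storyline routing the abstract describes, the snippet below (all names hypothetical, not the project's actual implementation) tags each candidate route with emotional metadata in the common valence/arousal model and selects the route whose metadata lies closest to the audience's currently assessed emotional state:

```python
from dataclasses import dataclass


@dataclass
class EmotionalState:
    """A point in the valence/arousal emotion model."""
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  # 0.0 (calm) .. 1.0 (excited)


def select_route(state: EmotionalState,
                 routes: dict) -> str:
    """Pick the storyline route whose emotional metadata is closest
    (Euclidean distance in valence/arousal space) to the audience state."""
    def distance(tag: EmotionalState) -> float:
        return ((state.valence - tag.valence) ** 2 +
                (state.arousal - tag.arousal) ** 2) ** 0.5
    return min(routes, key=lambda name: distance(routes[name]))


# Hypothetical routes with authored emotional metadata:
routes = {
    "calm_resolution": EmotionalState(valence=0.6, arousal=0.2),
    "tense_chase":     EmotionalState(valence=-0.4, arousal=0.9),
}

# An audience assessed as mildly positive and relaxed is routed
# to the calmer branch:
print(select_route(EmotionalState(valence=0.5, arousal=0.3)
                   if False else EmotionalState(0.5, 0.3), routes))
```

In a continuous-route variant, the same distance measure could instead blend parameters (e.g. background-music intensity) rather than choose a single branch.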
