In this work we demonstrate a real-time communication interface that enhances text communication by extracting emotions from typed text as it is entered and displaying appropriate facial expression images on screen in real time. The displayed expressions are rendered as expressive images or sketches of the communicating persons. The interface makes use of a real-time engine, developed for this purpose, that extracts emotions from text. We discuss the extraction engine and its extraction rules, describe the interface, and consider its limitations and future directions. The extracted emotions are mapped onto displayed facial expressions, and the interface can serve as a platform for a number of future CMC experiments. The developed online communication interface brings remotely located collaborating parties together in a shared electronic space for their communication. In its current state, the interface allows a participant to see at a glance all other online participants and all those engaged in conversations. An important aspect of the interface is that, for two users engaged in communication, it automatically extracts emotional states locally from the content of typed sentences and then displays discrete expressions, mapped from the extracted emotions, on the remote screen of the other person. It also analyses the intensity and duration of each emotional state. At the same time, users can also control their displayed expression manually if they wish. The interface further incorporates text-to-speech synthesis, which allows a user to glance at other tasks while listening to the conversation, and a shared whiteboard that allows the users to engage in collaborative work. Finally, users can view their own expression as it is displayed to and viewed by the other user, a feedback feature not possible in face-to-face communication between two people.
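The abstract describes an engine that extracts an emotion and its intensity from a typed sentence and maps the result to a facial expression image. The paper's actual extraction rules are not given here, so the following is only a minimal sketch of one plausible approach: a keyword lexicon with intensifier handling. All names (the lexicon, the intensity scale, the image file naming) are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical keyword lexicon; the paper's real extraction rules are richer.
EMOTION_KEYWORDS = {
    "happy": {"happy", "glad", "great", "wonderful"},
    "sad": {"sad", "unhappy", "sorry", "miserable"},
    "angry": {"angry", "furious", "annoyed"},
    "surprised": {"wow", "surprised", "astonished"},
}

# Assumed intensifiers that raise the intensity of a detected emotion.
INTENSIFIERS = {"very", "really", "extremely", "so"}

def extract_emotion(sentence):
    """Return (emotion, intensity) for a typed sentence,
    or ("neutral", 0) if no emotional keyword is found."""
    words = re.findall(r"[a-z']+", sentence.lower())
    for i, word in enumerate(words):
        for emotion, keywords in EMOTION_KEYWORDS.items():
            if word in keywords:
                # A preceding intensifier bumps intensity from 1 to 2.
                intensity = 2 if i > 0 and words[i - 1] in INTENSIFIERS else 1
                return emotion, intensity
    return "neutral", 0

def expression_image(emotion, intensity):
    """Map an extracted emotion to a hypothetical expression image file name,
    as the interface might do when updating the remote user's screen."""
    if emotion == "neutral":
        return "neutral.png"
    return f"{emotion}_{intensity}.png"
```

For example, `extract_emotion("I am very happy today")` yields `("happy", 2)`, which `expression_image` would map to `"happy_2.png"`. A real engine would also handle negation, duration, and sentence-level context, as the paper's discussion of extraction rules suggests.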