Real-Time Text-to-Emotion Engine for Expressive Internet Communications

In this work we demonstrate a real-time communication interface that enhances text-based communication by extracting emotions from typed text as it is entered and displaying corresponding facial expression images on screen in real time. The displayed expressions are rendered as expressive images or sketches of the communicating persons. The interface is built on a real-time engine that extracts emotions from text; we discuss this engine and its extraction rules, describe the interface, and outline its limitations and future directions. The extracted emotions are mapped onto displayed facial expressions, and the interface can serve as a platform for a range of future computer-mediated communication (CMC) experiments.

The online communication interface brings remotely located collaborating parties together in a shared electronic space. In its current state, a participant can see at a glance all other online participants and all those engaged in conversations. For two users in a conversation, the interface automatically extracts emotional states from the content of typed sentences on the local machine and then displays the corresponding discrete expressions, mapped from the extracted emotions, on the remote screen of the other user. It also estimates the intensity and duration of each emotional state. Users may, if they wish, set their displayed expression manually instead. The interface further provides text-to-speech synthesis, allowing a user to attend to other tasks while listening to the conversation, and a shared whiteboard for collaborative work. Finally, each user can view their own expression as it is displayed to the other user, a feedback feature that has no counterpart in face-to-face communication between two people.
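The extraction rules themselves are not reproduced here, but a minimal sketch of the kind of rule-based engine described above might look as follows. The emotion lexicon, the discrete emotion categories, and the punctuation- and intensifier-based intensity heuristics are illustrative assumptions for this sketch, not the engine's actual rules:

```python
import re
from dataclasses import dataclass

# Illustrative emotion lexicon; the actual extraction rules are
# hand-crafted and more elaborate than this keyword mapping.
LEXICON = {
    "happy": "joy", "glad": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "sorry": "sadness", "miss": "sadness",
    "angry": "anger", "hate": "anger", "furious": "anger",
    "wow": "surprise", "amazing": "surprise",
    "afraid": "fear", "scared": "fear", "worried": "fear",
}

INTENSIFIERS = {"very", "really", "so", "extremely"}

@dataclass
class EmotionState:
    emotion: str       # discrete emotion category
    intensity: float   # 0.0 .. 1.0, drives expression strength/duration

def extract_emotion(sentence: str) -> EmotionState:
    """Map a typed sentence to a discrete emotion plus an intensity estimate."""
    words = re.findall(r"[a-z']+", sentence.lower())
    emotion, intensity = "neutral", 0.0
    for i, word in enumerate(words):
        if word in LEXICON:
            emotion = LEXICON[word]
            intensity = 0.5
            # An intensifier just before the emotion word boosts intensity.
            if i > 0 and words[i - 1] in INTENSIFIERS:
                intensity += 0.3
            break
    # Exclamation marks are treated as a further intensity cue (assumption).
    intensity = min(1.0, intensity + 0.1 * sentence.count("!"))
    return EmotionState(emotion, intensity)

print(extract_emotion("I'm really happy to see you!"))
# -> EmotionState(emotion='joy', intensity=0.9)
```

Running extraction on the local machine, as the interface does, means only the small extracted state (emotion plus intensity) needs to be sent to the remote party rather than a rendered image.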
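The mapping from extracted emotions to the discrete expressions shown on the remote screen, together with the manual-override option, can be sketched in the same spirit. The image filenames and the intensity-to-duration rule below are assumptions for illustration only:

```python
from typing import Optional

# Illustrative mapping from discrete emotions to pre-drawn expression
# images/sketches of the communicating person (filenames are assumed).
EXPRESSION_IMAGES = {
    "neutral":  "faces/alice_neutral.png",
    "joy":      "faces/alice_joy.png",
    "sadness":  "faces/alice_sadness.png",
    "anger":    "faces/alice_anger.png",
    "surprise": "faces/alice_surprise.png",
    "fear":     "faces/alice_fear.png",
}

def select_expression(state: EmotionState,
                      manual_choice: Optional[str] = None) -> tuple[str, float]:
    """Choose the expression image and how long to display it.

    A manual choice by the user overrides the automatically extracted
    emotion, mirroring the manual-control option described above.
    """
    emotion = manual_choice if manual_choice is not None else state.emotion
    # Assumed rule: stronger emotions stay on screen longer (2-8 seconds).
    duration_s = 2.0 + 6.0 * state.intensity
    return EXPRESSION_IMAGES.get(emotion, EXPRESSION_IMAGES["neutral"]), duration_s
```

Because the selected image and display duration are computed from the same state shown to the remote user, the local feedback view mentioned above can simply render the same result on the sender's own screen.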