Recognition and mapping of facial expressions to avatar by embedded photo reflective sensors in head mounted display

We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy immersive Virtual Reality (VR) experiences. A virtual avatar can serve as the user's representative in the virtual environment. However, synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem with wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using photo-reflective (retro-reflective photoelectric) sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the user's facial expression. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time on an existing avatar using regression. Consequently, our system enables the estimation and reconstruction of facial expressions that correspond to the user's emotional changes.
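The recognition step described above (a distance vector from the in-HMD sensors fed to a neural network that outputs one of five expression classes) can be illustrated with a minimal forward-pass sketch. This is not the paper's implementation: the sensor count, hidden-layer width, and weights below are hypothetical placeholders, and a trained network would learn the weights from labeled distance data.

```python
import numpy as np

EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]
N_SENSORS = 16   # hypothetical number of photo-reflective sensors
HIDDEN = 32      # hypothetical hidden-layer width

# Placeholder weights; in practice these would be learned from
# distance measurements recorded for each of the five expressions.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (N_SENSORS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, len(EXPRESSIONS)))
b2 = np.zeros(len(EXPRESSIONS))

def classify(distances):
    """Map a vector of sensor-to-face distances to expression probabilities."""
    h = np.maximum(0.0, distances @ W1 + b1)   # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())          # stable softmax
    return p / p.sum()

# One simulated sensor reading (normalized distances in [0, 1]).
probs = classify(rng.uniform(0.0, 1.0, N_SENSORS))
estimated = EXPRESSIONS[int(np.argmax(probs))]
```

For the avatar-animation side, the same sensor vector could instead be fed to a regression head producing continuous blendshape weights, which is how the abstract's real-time expression reproduction would plug in.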
