Facial expression recognition using ear canal transfer function

In this study, we propose a new input method for mobile and wearable computing that uses facial expressions. Facial muscle movements physically deform the ear canal. Our system exploits this characteristic and estimates facial expressions from the ear canal transfer function (ECTF). The user wears earphones equipped with a microphone that records sound inside the ear canal. The system transmits ultrasonic band-limited swept-sine signals and acquires the ECTF by analyzing the response. An important novelty of our method is that it is easy to incorporate into a product, because many hearables (technically advanced electronic in-ear devices designed for multiple purposes) already include a speaker and a microphone. We investigated the performance of the proposed method on 21 facial expressions with 11 participants. Moreover, we propose a signal correction method that reduces positional errors caused by attaching/detaching the device. The evaluation confirmed an f-score of 40.2% without signal correction and 62.5% with signal correction. We also investigated the practical performance on six facial expressions and confirmed an f-score of 74.4% without signal correction and 90.0% with signal correction. These results indicate that the ECTF can be used to recognize facial expressions with accuracy comparable to that of related work.
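The abstract describes the probing step only at a high level; the sketch below illustrates the general swept-sine approach to transfer-function estimation that it refers to. This is not the authors' implementation: the sample rate, sweep band, and the simulated ear-canal filter are assumptions, and the real system would play the probe through the earphone speaker and record the response with the in-ear microphone.

```python
# Minimal sketch (assumed parameters, not the authors' code): estimating an
# ear canal transfer function (ECTF) from an ultrasonic band-limited swept sine.
# The earphone/microphone path is replaced here by a simulated band-pass filter.
import numpy as np
from scipy.signal import chirp, butter, lfilter

fs = 96_000           # assumed sample rate, high enough for an ultrasonic band
duration = 0.5        # assumed sweep length in seconds
t = np.arange(int(fs * duration)) / fs

# Band-limited swept sine in an assumed ultrasonic band (20-40 kHz).
probe = chirp(t, f0=20_000, f1=40_000, t1=duration, method='logarithmic')

# Stand-in for the speaker -> ear canal -> in-ear microphone path.
b, a = butter(4, [21_000, 39_000], btype='band', fs=fs)
response = lfilter(b, a, probe)

# ECTF estimate: ratio of response spectrum to probe spectrum, evaluated
# only inside the probe band where the division is well conditioned.
X = np.fft.rfft(probe)
Y = np.fft.rfft(response)
freqs = np.fft.rfftfreq(len(probe), 1 / fs)
band = (freqs >= 20_000) & (freqs <= 40_000)
ectf = Y[band] / X[band]

print(f"{band.sum()} in-band frequency bins, mean |ECTF| = {np.abs(ectf).mean():.3f}")
```

In such a setup, the per-frequency magnitudes (and optionally phases) of the ECTF would form the feature vector from which facial expressions are classified; repeated measurements after re-inserting the earphone motivate the positional correction step the abstract mentions.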
