A smartphone-based digital hearing aid to mitigate hearing loss at specific frequencies

Hearing loss is one of the three most common chronic conditions among the elderly. In many cases, an individual's hearing is impaired only at certain (not all) frequencies. Analog hearing aids boost all sound frequencies equally, including frequencies at which the individual's hearing is good, causing discomfort to the user. Digital hearing aids can amplify only the specific frequencies at which a person's hearing is impaired. In this paper, we describe the design, implementation, and evaluation of a smartphone digital hearing aid app. Our digital hearing aid implementation has two parts: speech processing in the frequency domain and sound classification. We use a Weighted Overlap-Add (WOLA) filter bank to decompose microphone input into frequency bands that are then amplified in the frequency domain. Mel-frequency cepstral coefficients (MFCCs) of the input sound are computed and used as features for sound classification with a Gaussian Mixture Model (GMM). Our digital hearing aid app amplifies selected frequency bands and correctly classifies speech in quiet and noisy environments. The results of a small user evaluation of our prototype are also promising.
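
Below is a minimal sketch of the frequency-domain amplification stage described above, written in Python for illustration. It approximates the WOLA filter bank with a windowed STFT/inverse-STFT overlap-add pipeline (scipy.signal); the band edges, gain values, frame sizes, and function names are illustrative assumptions, not the configuration used in the paper.

import numpy as np
from scipy.signal import stft, istft

def amplify_bands(x, fs, band_gains_db, n_fft=256, hop=128):
    # Decompose the signal into overlapping frames in the frequency domain,
    # apply a per-band linear gain, and resynthesize by overlap-add.
    f, _, X = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    gains = np.ones_like(f)
    for f_lo, f_hi, gain_db in band_gains_db:
        mask = (f >= f_lo) & (f < f_hi)
        gains[mask] = 10.0 ** (gain_db / 20.0)   # dB -> linear amplitude gain
    X = X * gains[:, None]                       # boost only the selected bins
    _, y = istft(X, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return y

# Example: compensate a hypothetical 2-6 kHz hearing loss with a +20 dB boost.
fs = 16000
x = np.random.randn(fs)                          # stand-in for one second of microphone input
y = amplify_bands(x, fs, [(2000, 6000, 20.0)])

A companion sketch of the sound-classification stage, assuming MFCC features (librosa) and one GMM per sound class (scikit-learn); the class labels, 13-coefficient MFCCs, and 8-component diagonal-covariance GMMs are likewise illustrative choices rather than the paper's exact setup.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(x, fs, n_mfcc=13):
    # Per-frame MFCC feature vectors, shape (frames, coefficients).
    return librosa.feature.mfcc(y=x, sr=fs, n_mfcc=n_mfcc).T

def train_models(training_clips):
    # training_clips: {label: list of (signal, fs)}; fit one GMM per sound class.
    models = {}
    for label, clips in training_clips.items():
        feats = np.vstack([mfcc_frames(x, fs) for x, fs in clips])
        models[label] = GaussianMixture(n_components=8, covariance_type='diag').fit(feats)
    return models

def classify(x, fs, models):
    # Pick the class whose GMM assigns the highest average log-likelihood to the clip.
    feats = mfcc_frames(x, fs)
    return max(models, key=lambda label: models[label].score(feats))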
