Development of Visualizing Earphone and Hearing Glasses for Human Augmented Cognition

In this paper, we propose a human augmented cognition system realized by a visualizing earphone and hearing glasses. The visualizing earphone, which uses two cameras and a headphone mounted on a pair of glasses, interprets both the wearer's intention and the outward visual surroundings, and translates visual information into an audio signal. The hearing glasses capture sound signals such as human voices, find the direction of the sound sources, and recognize human speech; the recognized audio information is then converted into visual context and shown on a head-mounted display. The two proposed systems include incremental feature extraction, object selection and sound localization based on selective attention, and face, object, and speech recognition algorithms. The experimental results show that the developed systems can expand the limited capacity of human cognition, such as memory, inference, and decision making. A minimal pipeline sketch of the two systems is given below.
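The following Python sketch only illustrates the complementary structure of the two pipelines described above (vision to audio for the visualizing earphone, audio to visual display for the hearing glasses). All class and method names are hypothetical placeholders for the components named in the abstract (selective attention, recognition, localization), not the authors' implementation.

```python
# Hypothetical sketch of the two pipelines; recognition/localization steps are stubs.
from dataclasses import dataclass
from typing import List


@dataclass
class VisualEvent:
    label: str         # recognized face or object
    description: str   # text to be rendered as audio for the wearer


@dataclass
class AudioEvent:
    direction_deg: float  # estimated direction of the sound source
    transcript: str       # recognized speech


class VisualizingEarphone:
    """Camera input -> selective attention -> face/object recognition -> audio output."""

    def process_frame(self, frame) -> List[VisualEvent]:
        events = []
        for region in self.select_salient_regions(frame):   # selective-attention object selection
            label = self.recognize(region)                   # face/object recognition
            events.append(VisualEvent(label=label, description=f"{label} ahead"))
        return events

    def select_salient_regions(self, frame):
        return [frame]        # placeholder: treat the whole frame as one salient region

    def recognize(self, region) -> str:
        return "person"       # placeholder for the recognition module

    def speak(self, events: List[VisualEvent]) -> None:
        for e in events:
            print(f"[earphone TTS] {e.description}")         # stand-in for audio synthesis


class HearingGlasses:
    """Microphone input -> sound localization -> speech recognition -> HMD text overlay."""

    def process_audio(self, samples) -> AudioEvent:
        direction = self.localize(samples)                   # direction-of-arrival estimation
        text = self.recognize_speech(samples)                # speech recognition
        return AudioEvent(direction_deg=direction, transcript=text)

    def localize(self, samples) -> float:
        return 0.0            # placeholder

    def recognize_speech(self, samples) -> str:
        return "hello"        # placeholder

    def display(self, event: AudioEvent) -> None:
        print(f"[HMD] {event.transcript} (from {event.direction_deg:.0f} deg)")


if __name__ == "__main__":
    earphone = VisualizingEarphone()
    earphone.speak(earphone.process_frame(frame="camera_frame"))

    glasses = HearingGlasses()
    glasses.display(glasses.process_audio(samples="mic_samples"))
```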
