Acoustic head orientation estimation applied to powered wheelchair control

In this paper, we propose an acoustic head orientation estimation method that uses a microphone array mounted on a wheelchair, and we apply it to a novel interface for controlling a powered wheelchair. The proposed interface does not require the user to wear a microphone or to utter recognizable voice commands. Because the microphone array is mounted on the wheelchair itself, the system can easily distinguish the user's utterances from other voices without resorting to speaker identification, and it is robust to interference from surrounding noise. Experimental results confirm the feasibility and effectiveness of the proposed method.
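The abstract does not spell out how the array localizes the speaker. One classic building block for this kind of microphone-array direction estimation, sketched here purely as an illustration (not the paper's actual algorithm), is GCC-PHAT: estimate the time difference of arrival (TDOA) between a microphone pair, then convert that delay to an angle under a far-field assumption. All function names, the 0.2 m microphone spacing, and the 16 kHz sampling rate below are illustrative choices, not values from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12          # phase transform: whiten the cross-spectrum
    cc = np.fft.irfft(R, n=n)       # generalized cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_from_tdoa(tdoa, mic_dist, c=343.0):
    """Direction of arrival (degrees) for a 2-mic pair, far-field model."""
    arg = np.clip(c * tdoa / mic_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(arg))

# Synthetic check: a broadband source arriving 3 samples later at mic 2.
fs = 16000
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
delay = 3
m1 = s
m2 = np.concatenate((np.zeros(delay), s[:-delay]))

tdoa = gcc_phat(m2, m1, fs)                 # recovered delay, ~3/16000 s
angle = doa_from_tdoa(tdoa, mic_dist=0.2)   # bearing of the source
```

In practice, a wheelchair-mounted array would use several such pairs (or a subspace method such as MUSIC) and could infer head orientation from how the user's speech radiates toward the different microphones; the two-microphone case above only shows the core delay-to-angle step.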
