Intelligent Assistive Exoskeleton with Vision-Based Interface

This paper presents an intelligent assistive robotic system for people suffering from myopathy. In this context, we are developing a 4-DoF assistive exoskeletal orthosis for the upper limb, with special attention paid to Human-Machine Interaction (HMI). We propose the use of visual sensing as an interface able to convert the user's head gestures and mouth expressions into suitable control commands. Such non-intrusive, camera-based control is particularly well suited to disabled users. Moreover, we propose to make the control command more robust with a visual context analysis component. In this paper, we first describe the problem and the mechanical design of the system. We then describe the two approaches developed for the visual sensing interface: head control and mouth expression control. Finally, we introduce context detection for scene understanding.
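
To illustrate the camera-based interface idea, the sketch below maps the position of a detected face to a discrete motion command, with a smile detector standing in for the mouth-expression channel. This is a minimal sketch assuming an OpenCV Viola-Jones pipeline; the command names, dead-zone threshold, cascade choices, and camera index are illustrative assumptions, not the paper's actual method.

```python
import cv2

# Viola-Jones cascades shipped with OpenCV; stand-ins for the paper's
# actual head-gesture and mouth-expression detectors.
FACE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
SMILE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def head_command(frame, dead_zone=0.15):
    """Map the offset of the detected face from image centre to a command."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return "STOP", False                      # fail safe: halt the orthosis
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    fh, fw = gray.shape
    dx = (x + w / 2) / fw - 0.5                   # horizontal offset in [-0.5, 0.5]
    dy = (y + h / 2) / fh - 0.5                   # vertical offset in [-0.5, 0.5]
    # Mouth expression as a binary channel (hypothetical gripper toggle),
    # searched only in the lower half of the face region.
    roi = gray[y + h // 2 : y + h, x : x + w]
    grip = len(SMILE.detectMultiScale(roi, scaleFactor=1.6, minNeighbors=15)) > 0
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "HOLD", grip                       # head centred: keep posture
    if abs(dx) >= abs(dy):
        return ("RIGHT" if dx > 0 else "LEFT"), grip
    return ("DOWN" if dy > 0 else "UP"), grip

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                     # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        print(head_command(frame))
        if cv2.waitKey(30) & 0xFF == 27:          # Esc quits
            break
    cap.release()
```

In a real system the printed command would feed the orthosis controller, and the dead zone would be tuned per user to filter involuntary head motion.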
