Multimodal Interface for Effective Man–Machine Interaction

Providing a human–human style of interaction for man–machine communication is still a research challenge. It is widely believed that, as computing, communication, and display technologies progress further, existing HCI techniques may become a constraint on the effective use of the available information flow. Multimodal interaction offers the user multiple modes of interfacing with a system beyond traditional keyboard and mouse input. This chapter discusses the effectiveness of multimodal interaction for man–machine communication and examines implementation issues across platforms and media. The convergence of input and output technologies can ease the difficulties humans face when communicating with machines and thereby make fuller use of converged media platforms. The chapter presents the implementation of a multimodal interface system through a case study and also discusses challenging application areas that call for a solution of this kind.
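To make the idea of combining input modes concrete, the following is a minimal sketch of late fusion, in which a spoken command is paired with a temporally nearby gesture (the classic "put that there" pattern). The `InputEvent` type, the `fuse` function, and the one-second pairing window are illustrative assumptions, not part of the case study described in this chapter.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str    # e.g. "speech" or "gesture"
    payload: str     # recognized word or pointed-at target
    timestamp: float # seconds since interaction start

def fuse(events, window=1.0):
    """Late fusion: pair each speech command with the nearest
    gesture event occurring within `window` seconds of it."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in speech:
        nearby = [g for g in gestures
                  if abs(g.timestamp - s.timestamp) <= window]
        if nearby:
            g = min(nearby, key=lambda g: abs(g.timestamp - s.timestamp))
            commands.append((s.payload, g.payload))
    return commands
```

For example, a spoken "move" at 0.2 s and a pointing gesture at "objectA" at 0.4 s fuse into the single command `("move", "objectA")`; a gesture outside the window is ignored. Real systems replace this timestamp heuristic with probabilistic or grammar-based integration, but the late-fusion structure is the same.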
