A multimodal human-machine interface enabling situation-adaptive control inputs for highly automated vehicles

Intelligent vehicles operating at different levels of automation require the driver to conduct the dynamic driving task (DDT) fully or partially, and to serve as the fallback for DDT performance, during a trip. Such vehicles create the need for novel human-machine interfaces (HMIs) designed for high-level vehicle control tasks. Multimodal interfaces (MMIs) offer advantages over unimodal interfaces, such as improved recognition, faster interaction, and situation-adaptability. In this study, we developed and evaluated an MMI system with three input modalities: touchscreen, hand gesture, and haptic, for issuing tactical-level control commands (e.g., lane changing, overtaking, and parking). We conducted experiments in a driving simulator to evaluate the effectiveness of the MMI system. The results show that the multimodal HMI significantly reduced driver workload, improved interaction efficiency, and minimized input errors compared with the unimodal interfaces. Moreover, we identified affinities between input types and modalities: location-based inputs suited the touchscreen interface, whereas time-critical inputs suited the haptic interface. These results demonstrate the functional advantages and effectiveness of the multimodal interface system over its unimodal components for conducting tactical-level driving tasks.
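The abstract describes, but does not specify, how commands from the three modalities converge on a single tactical-command channel. As a loose illustration only, the Python sketch below (all class and method names are hypothetical, not taken from the paper) shows one plausible arbitration pattern: inputs from any modality are normalized into a shared command type, with a short debounce window so the same command issued on two modalities is not executed twice.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Modality(Enum):
    TOUCHSCREEN = auto()
    HAND_GESTURE = auto()
    HAPTIC = auto()


class TacticalCommand(Enum):
    LANE_CHANGE_LEFT = auto()
    LANE_CHANGE_RIGHT = auto()
    OVERTAKE = auto()
    PARK = auto()


@dataclass
class ControlInput:
    """One tactical-level request from any input modality."""
    modality: Modality
    command: TacticalCommand
    timestamp: float  # seconds since trip start


class MultimodalArbiter:
    """Funnels inputs from all modalities into one tactical-command channel."""

    def __init__(self, debounce_s: float = 1.0):
        self._debounce_s = debounce_s
        self._last: Optional[ControlInput] = None

    def submit(self, event: ControlInput) -> Optional[TacticalCommand]:
        # Suppress a duplicate of the same command arriving on another
        # modality within the debounce window (e.g. a gesture echoing
        # a touchscreen tap), so the vehicle acts on it only once.
        if (
            self._last is not None
            and event.command == self._last.command
            and event.timestamp - self._last.timestamp < self._debounce_s
        ):
            return None
        self._last = event
        return event.command


# Example: a haptic nudge and a touchscreen tap both request overtaking.
arbiter = MultimodalArbiter()
print(arbiter.submit(ControlInput(Modality.HAPTIC, TacticalCommand.OVERTAKE, 10.0)))
print(arbiter.submit(ControlInput(Modality.TOUCHSCREEN, TacticalCommand.OVERTAKE, 10.4)))
```

In a deployed system, the debounce window and any conflict-resolution rules between modalities would presumably be governed by the kind of situation-adaptive policy the study evaluates, rather than fixed constants as shown here.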
