A comparison of three interaction modalities in the car: gestures, voice and touch

This paper compares an emergent interaction modality for the In-Vehicle Infotainment System (IVIS), i.e., gesturing on the steering wheel, with two modalities that are more common in modern cars: touch on the central dashboard and speech. We conducted a between-subjects experiment with 20 participants per modality to assess interaction performance with the IVIS and the impact on driving performance. Moreover, we compared the three modalities in terms of usability, subjective workload, and emotional response. The results showed no statistically significant differences between the three interaction modalities across the various indicators of driving-task performance, while significant differences were found in measures of IVIS interaction performance: users performed fewer interactions to complete the secondary tasks with the speech modality, while, on average, a lower task completion time was recorded with the touch modality. The three interfaces were comparable in terms of perceived usability, mental workload, and emotional response.
