Effects of Urgency and Cognitive Load on Interaction in Highly Automated Vehicles

In highly automated vehicles, passengers can engage in non-driving-related activities. In addition, technical advances enable novel interaction possibilities such as voice, gesture, gaze, touch, or multimodal interaction to refer to both in-vehicle and outside objects (e.g., a thermostat or a restaurant). Such interaction can be characterized by its urgency (e.g., due to late detection of an object) and the passenger's cognitive load (e.g., from watching a movie or working). We therefore implemented a Virtual Reality simulation and conducted a within-subjects study with N=11 participants to evaluate the effects of urgency and cognitive load on modality usage in automated vehicles. We found that while all modalities were usable, participants relied most on touch, followed by gaze, especially for referencing external objects. This work helps to further the understanding of multimodal interaction and the requirements it poses for natural interaction in (automated) vehicles.
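
To illustrate the kind of data such a within-subjects study yields, the following is a minimal, hypothetical Python sketch (not taken from the paper; names such as ReferencingEvent, Condition, and modality_usage are assumptions) of how referencing events could be tallied per urgency and cognitive-load condition to obtain modality usage counts. It only illustrates the dependent measure, not the VR simulation itself.

    # Hypothetical sketch (not from the paper): tallying which input modality
    # participants use per urgency x cognitive-load condition.
    from collections import Counter
    from dataclasses import dataclass
    from enum import Enum
    from itertools import product


    class Modality(Enum):
        TOUCH = "touch"
        GAZE = "gaze"
        GESTURE = "gesture"
        VOICE = "voice"


    @dataclass(frozen=True)
    class Condition:
        urgency: str          # assumed levels, e.g., "low" or "high"
        cognitive_load: str   # assumed levels, e.g., "none" or "movie"


    @dataclass
    class ReferencingEvent:
        participant: int
        condition: Condition
        modality: Modality
        target: str           # e.g., "thermostat" (in-vehicle) or "restaurant" (outside)


    def modality_usage(events: list[ReferencingEvent]) -> dict[Condition, Counter]:
        """Count modality choices per experimental condition."""
        usage: dict[Condition, Counter] = {}
        for event in events:
            usage.setdefault(event.condition, Counter())[event.modality] += 1
        return usage


    if __name__ == "__main__":
        conditions = [Condition(u, c) for u, c in product(["low", "high"], ["none", "movie"])]
        demo = [
            ReferencingEvent(1, conditions[0], Modality.TOUCH, "thermostat"),
            ReferencingEvent(1, conditions[0], Modality.GAZE, "restaurant"),
            ReferencingEvent(1, conditions[3], Modality.TOUCH, "restaurant"),
        ]
        for condition, counts in modality_usage(demo).items():
            print(condition, dict(counts))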
