AutoExp: A multidisciplinary, multi-sensor framework to evaluate human activities in self-driving cars

The adoption of self-driving cars (SDCs) is likely to transform our daily lives, even if full autonomy takes longer to achieve than initially predicted. The first such vehicles are already operating in a few cities worldwide as part of experimental robot-taxi services. However, most existing studies focus on the navigation capabilities of these vehicles; methods, datasets, and studies to assess the in-cabin human component of their adoption under real-world conditions are still lacking. This paper proposes an experimental framework to study the activities of occupants of self-driving cars, particularly non-driving-related activities, using a multidisciplinary approach that combines computer vision with human and social sciences. The framework comprises an experimentation scenario and a data acquisition module. Our goals are, first, to capture data about vehicle usage under conditions as close as possible to real-world use, and second, to build a dataset of in-cabin human activities to foster the development and evaluation of computer vision algorithms. The acquisition module records multiple views of the front seats of the vehicle (using Intel RGB-D and GoPro cameras), complemented by survey data on the internal states and attitudes of participants towards this type of vehicle before, during, and after the experiment. We evaluated the proposed framework in a real-world experiment with 30 participants (one hour each) studying the acceptance of SAE Level 4 SDCs.
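To make the acquisition side concrete, the sketch below shows how synchronized color and depth frames could be captured from one Intel RealSense RGB-D camera using the official pyrealsense2 SDK. This is a minimal illustration only: the stream resolutions, frame rate, file layout, and the record_cabin_view helper are assumptions for the example, not the paper's actual recording pipeline.

```python
# Minimal sketch of an in-cabin RGB-D capture loop, assuming an Intel
# RealSense camera and the pyrealsense2 SDK. Resolutions, frame rate,
# and output file layout are illustrative assumptions, not the paper's setup.
import pyrealsense2 as rs
import numpy as np
import cv2

def record_cabin_view(out_prefix: str, n_frames: int = 300) -> None:
    """Record aligned color and depth frames from one RealSense camera."""
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    align = rs.align(rs.stream.color)  # project depth onto the color view

    pipeline.start(config)
    try:
        for i in range(n_frames):
            frames = align.process(pipeline.wait_for_frames())
            depth = frames.get_depth_frame()
            color = frames.get_color_frame()
            if not depth or not color:
                continue  # skip incomplete frame pairs
            # Save color as PNG and raw 16-bit depth for offline processing.
            cv2.imwrite(f"{out_prefix}_color_{i:05d}.png",
                        np.asanyarray(color.get_data()))
            np.save(f"{out_prefix}_depth_{i:05d}.npy",
                    np.asanyarray(depth.get_data()))
    finally:
        pipeline.stop()

if __name__ == "__main__":
    record_cabin_view("front_seat")
```

In a multi-view setup like the one described, one such loop would run per camera; aligning depth to the color stream keeps the two modalities pixel-registered, which simplifies later annotation of in-cabin activities.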
