A Multimodal, Full-Surround Vehicular Testbed for Naturalistic Studies and Benchmarking: Design, Calibration and Deployment

Recent progress in autonomous and semi-autonomous driving has been made possible in part by an assortment of sensors that provide the intelligent agent with an enhanced perception of its surroundings. It has been clear for quite some time now that for intelligent vehicles to function effectively in all situations and conditions, a fusion of different sensor technologies is essential. Consequently, the availability of synchronized multi-sensory data streams is necessary to promote the development of fusion-based algorithms for low-, mid- and high-level semantic tasks. In this paper, we provide a comprehensive description of LISA-A: our heavily sensorized, full-surround testbed capable of providing high-quality data from a suite of synchronized and calibrated sensors, including cameras, LIDARs, radars, and an IMU/GPS unit. The vehicle has recorded over 100 hours of real-world data under a diverse set of weather, traffic and daylight conditions. All captured data are accurately calibrated, synchronized using timestamps, and stored on high-performance servers mounted inside the vehicle itself. Details of the testbed instrumentation, sensor layout, sensor outputs, calibration and synchronization are also presented.
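The abstract states that the sensor streams are aligned via timestamps but does not specify the mechanism. As a rough illustration only, the sketch below shows one common way such alignment can be done: matching each frame of a reference stream (e.g., a camera) to the nearest-in-time frame of every other stream, within a tolerance. All names here (SensorFrame, align_streams, the 50 ms tolerance) are hypothetical and not taken from the paper.

    # Minimal sketch of timestamp-based multi-sensor alignment (assumed
    # approach, not the paper's actual pipeline). Frames in each stream are
    # assumed to be sorted by timestamp from a common synchronized clock.
    from bisect import bisect_left
    from dataclasses import dataclass
    from typing import Dict, List, Optional

    @dataclass
    class SensorFrame:
        timestamp: float  # seconds on the shared clock
        payload: object   # e.g., an image, LIDAR sweep, or radar return

    def nearest_frame(frames: List[SensorFrame], t: float,
                      tol: float) -> Optional[SensorFrame]:
        """Return the frame closest in time to t, if within tol seconds."""
        times = [f.timestamp for f in frames]
        i = bisect_left(times, t)
        candidates = [frames[j] for j in (i - 1, i) if 0 <= j < len(frames)]
        best = min(candidates, key=lambda f: abs(f.timestamp - t), default=None)
        if best is not None and abs(best.timestamp - t) <= tol:
            return best
        return None

    def align_streams(reference: List[SensorFrame],
                      others: Dict[str, List[SensorFrame]],
                      tol: float = 0.05) -> List[Dict[str, SensorFrame]]:
        """For each reference frame, gather the nearest frame from every
        other stream; drop groups where any stream has no match."""
        aligned = []
        for ref in reference:
            group = {"reference": ref}
            for name, frames in others.items():
                match = nearest_frame(frames, ref.timestamp, tol)
                if match is None:
                    break  # this reference frame has an incomplete group
                group[name] = match
            else:
                aligned.append(group)
        return aligned

In practice the tolerance would be chosen per sensor pair from the slowest stream's frame period, and hardware- or protocol-level clock synchronization (e.g., IEEE 1588 PTP) would keep the shared clock consistent; the sketch assumes timestamps are already on one clock.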
