A Fusion Approach for Pre-Crash Scenarios based on Lidar and Camera Sensors

The use and development of new Advanced Driver Assistance Systems (ADAS) based on the fusion of multiple sensors, aiming to achieve higher robustness, is becoming more common. In pre-crash scenarios, information about an oncoming vehicle should be extracted as close as possible to the moment of impact t0, where issues such as disturbances in the sensor signal and partial occlusion of the vehicle play an important role. The presented work introduces a novel fusion approach between a Lidar and a camera to extract the distance and the approach angle of an oncoming vehicle. It detects the bullet vehicle's license plate with a deep learning neural network and fuses the Lidar information to estimate the vehicle's distance and approach angle. The proposed architecture was evaluated on the KITTI benchmark dataset and on a pre-crash scenario dataset collected in the indoor hall facilities of the CARISSMA Research Center. On the benchmark dataset, the algorithm reached an accuracy of 0.16 m ± 0.26 m for the distance and 5.48° ± 4.99° for the angle estimation. On the pre-crash dataset, the experimental results yielded an accuracy of 0.022 m ± 0.022 m for the distance and 2.82° ± 1.98° for the angle. The system can extract parameters up to 0.4 m before a collision, copes with blurriness and partial occlusion, and is easily reproducible with other sensor setups.
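
As a rough illustration of the fusion step described above, the minimal sketch below (an assumption for illustration, not the authors' published code) projects Lidar points into the camera image with a given calibration, keeps the points whose projection falls inside a license-plate bounding box produced by an external detector, and derives a distance and an approach angle from them. The function name, the median-range distance estimate, and the angle convention (orientation of the plate in the camera x-z plane) are illustrative choices.

```python
# Hypothetical sketch of Lidar-camera fusion on a detected license plate.
# Assumptions: a 2D plate bounding box from an external detector, a known
# Lidar-to-camera extrinsic transform, and the camera intrinsic matrix.
import numpy as np

def fuse_plate_and_lidar(points_lidar, T_cam_lidar, K, bbox):
    """Estimate distance and approach angle of the oncoming vehicle.

    points_lidar : (N, 3) Lidar points in the Lidar frame.
    T_cam_lidar  : (4, 4) extrinsic transform from Lidar to camera frame.
    K            : (3, 3) camera intrinsic matrix.
    bbox         : (u_min, v_min, u_max, v_max) plate box in pixels.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera and project them to pixels.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Select the points whose projection falls inside the plate bounding box.
    u_min, v_min, u_max, v_max = bbox
    mask = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
            (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    plate_pts = pts_cam[mask]
    if plate_pts.shape[0] < 3:
        return None  # not enough Lidar returns on the plate

    # Distance: median Euclidean range of the plate points.
    distance = float(np.median(np.linalg.norm(plate_pts, axis=1)))

    # Approach angle: fit a line to the plate points in the ground plane
    # (camera x-z) and measure its slope relative to the image plane.
    x, z = plate_pts[:, 0], plate_pts[:, 2]
    slope = np.polyfit(x, z, 1)[0]          # dz/dx of the plate surface
    angle_deg = float(np.degrees(np.arctan(slope)))

    return distance, angle_deg
```

In practice, the bounding box would come from the deep learning plate detector mentioned above and the transform from an extrinsic Lidar-camera calibration; the sketch only shows how the projected points inside the box can be turned into the two pre-crash parameters.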
