Radar Artifact Labeling Framework (RALF): Method for Plausible Radar Detections in Datasets

Research on localization and perception for Autonomous Driving mainly focuses on camera and LiDAR datasets and rarely on radar data, as manually labeling sparse radar point clouds is challenging. For automated dataset generation, we propose the cross-sensor Radar Artifact Labeling Framework (RALF). Automatically generated labels for automotive radar data help to mitigate radar shortcomings such as artifacts for artificial intelligence applications. RALF provides plausibility labels for raw radar detections, distinguishing between artifacts and targets. The optical evaluation backbone consists of a generalized monocular depth estimation of surround-view camera images combined with LiDAR scans. Modern automotive sensor sets of cameras and LiDAR allow calibrating image-based relative depth information in overlapping sensing areas. K-Nearest Neighbors matching relates the resulting optical perception point cloud to the raw radar detections. In parallel, a temporal tracking stage evaluates the transient behavior of the radar detections. Based on the distance between matches and respecting both sensor and model uncertainties, we assign a plausibility rating to every radar detection. We validate the results by evaluating error metrics on a semi-manually labeled ground-truth dataset of $3.28\cdot10^6$ points. Beyond generating plausible radar detections, the framework enables further labeled low-level radar datasets for perception and Autonomous Driving learning tasks.
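To make the cross-sensor matching step concrete, the following is a minimal sketch of the idea described above: a KD-tree nearest-neighbor query relates radar detections to an optical (camera depth plus LiDAR) reference point cloud, and the match distance is mapped to a plausibility score. The function names, the Gaussian score model, and the `sigma` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def plausibility_scores(radar_pts, optical_pts, k=1, sigma=0.5):
    """Rate each radar detection by its distance to the optical point cloud.

    radar_pts   : (N, 3) raw radar detections in the common vehicle frame
    optical_pts : (M, 3) fused camera-depth / LiDAR reference points
    k           : number of nearest neighbors to average over
    sigma       : distance scale (m) bundling sensor and model uncertainty
    """
    tree = cKDTree(optical_pts)
    dists, _ = tree.query(radar_pts, k=k)       # (N,) for k=1, else (N, k)
    mean_dist = np.atleast_2d(dists.T).mean(axis=0)
    # Map distance to a [0, 1] plausibility: close matches suggest a real
    # target, isolated detections suggest an artifact (multipath, clutter).
    return np.exp(-0.5 * (mean_dist / sigma) ** 2)

# Usage: detections above a chosen threshold are kept as plausible targets.
radar = np.random.rand(100, 3) * 20.0
optical = np.random.rand(5000, 3) * 20.0
is_target = plausibility_scores(radar, optical) > 0.5
```

In the framework itself this optical rating is only one branch; the temporal tracking evaluation would contribute a second score before a detection is finally labeled as target or artifact.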
