RADIATE: A Radar Dataset for Automotive Perception

Datasets for autonomous cars are essential for the development and benchmarking of perception systems. However, most existing datasets are captured with camera and LiDAR sensors in good weather conditions. In this paper, we present the RAdar Dataset In Adverse weaThEr (RADIATE), which aims to facilitate research on object detection, tracking and scene understanding using radar sensing for safe autonomous driving. RADIATE includes 3 hours of annotated radar images with more than 200K labelled road actors in total, on average about 4.6 instances per radar image. It covers 8 categories of actors in a variety of weather conditions (e.g., sun, night, rain, fog and snow) and driving scenarios (e.g., parked, urban, motorway and suburban), representing different levels of challenge. To the best of our knowledge, this is the first public radar dataset to provide high-resolution radar images collected on public roads with a large number of labelled road actors. The data collected in adverse weather, e.g., fog and snowfall, is unique. Baseline results for radar-based object detection and recognition show that radar data is promising for automotive applications in bad weather, where vision and LiDAR can fail. RADIATE also includes stereo images, 32-channel LiDAR and GPS data, supporting other applications such as sensor fusion, localisation and mapping. The dataset can be accessed at this http URL.
