DLRAD - A first look at the new vision and mapping benchmark dataset

DLRAD, a new vision and mapping benchmark dataset for autonomous driving, is now available for the development and validation of intelligent driving algorithms. Stationary, mobile, and airborne sensors simultaneously monitored the environment around a reference vehicle driving on urban, suburban, and rural roads in and around the city of Braunschweig, Germany. Airborne images were acquired with the DLR 4k sensor system mounted on a helicopter. The DLR reference car FASCarE is equipped with state-of-the-art automotive sensor technology: front/rear radar, ultrasound and laser sensors, optical mono and stereo cameras, and GNSS/IMU. In addition, stationary terrestrial sensors such as induction loops, optical mono and stereo cameras, radars, and laser scanners monitor defined sections of the route from the ground; these sensors are installed on gantries at major intersections and on pylons. The benchmark route, with a total length of 156 km, is divided into an urban road scenario (34 km), a rural road scenario (50 km), an industrial area scenario (26 km), and a motorway scenario (46 km). Throughout the campaign, the helicopter carrying the 4k sensor system follows the reference car, keeping it in the central nadir view at all times. Two 20 MPix full-frame nadir-looking cameras with focal lengths of 50 mm and 25 mm cover the area around the reference car in a staggered fashion, providing GSDs of 7 cm and 14 cm respectively depending on the distance from the car. With the 50 mm focal length, an area of 320 m x 240 m is covered at a flight height of 500 m above ground. At frame rates of around 1 Hz, it is possible to create a 3D reference map and a database with the positions of all moving and non-moving objects around the reference car, including pedestrians, cyclists, and all kinds of vehicles. This database is augmented with the data from the stationary sensors to provide a more detailed view at defined sections.
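The quoted airborne figures (7 cm GSD and a 320 m x 240 m footprint at 500 m flight height with the 50 mm lens) follow from standard photogrammetric scaling, GSD = pixel pitch x altitude / focal length. The sketch below checks this numerically; the pixel pitch (~6.4 µm) and sensor dimensions (~32 mm x 24 mm) are plausible assumptions for a 20 MPix full-frame camera, not the actual specification of the DLR 4k cameras.

```python
# Back-of-envelope photogrammetry check of the figures quoted for the
# DLR 4k nadir cameras. Pixel pitch and sensor size are assumed values
# for a generic ~20 MPix full-frame sensor.

def ground_sample_distance(pixel_pitch_m, altitude_m, focal_length_m):
    """Ground footprint of a single pixel: pitch scaled by altitude/focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

def ground_footprint(sensor_w_m, sensor_h_m, altitude_m, focal_length_m):
    """Ground area (width, height in metres) covered by one frame."""
    scale = altitude_m / focal_length_m
    return sensor_w_m * scale, sensor_h_m * scale

altitude = 500.0                      # m above ground (from the abstract)
pixel_pitch = 6.4e-6                  # m, assumed ~6.4 um pitch
sensor_w, sensor_h = 0.032, 0.024     # m, assumed ~32 mm x 24 mm active area

for f in (0.050, 0.025):              # the 50 mm and 25 mm lenses
    gsd = ground_sample_distance(pixel_pitch, altitude, f)
    w, h = ground_footprint(sensor_w, sensor_h, altitude, f)
    print(f"f = {f * 1000:.0f} mm: GSD ~ {gsd * 100:.1f} cm, "
          f"footprint ~ {w:.0f} m x {h:.0f} m")
```

Under these assumptions the 50 mm lens yields a GSD of about 6.4 cm and a 320 m x 240 m footprint, and the 25 mm lens doubles both, consistent with the 7 cm / 14 cm values stated above.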
A further crucial step in the construction of the DLRAD benchmark dataset is the annotation of all objects in the reference dataset. The reference vehicle FASCarE is a Volkswagen e-Golf. It is equipped with the following range detectors: four ultrasound sensors each at the front and rear for close-range detection (< 5 m), three front-facing and one rear-facing IBEO laser scanner with a range of 200 m, two SMS radars each at the front and rear, and one front-facing Bosch radar. In addition, an optical Bosch camera is installed behind the windscreen, and a stereo camera system is mounted on the car roof for 3D reconstruction and object detection. The DLRAD benchmark dataset enables a wide variety of validation tasks and opens up a broad field of possibilities for the development, training, and validation of machine learning algorithms in the context of autonomous driving. In this paper, we present details of the sensor configurations and of the acquisition campaign, which took place from 18 to 20 July 2017 in Braunschweig, Germany. We also show a first analysis of the data, including its completeness and geometric quality.
