Complex urban dataset with multi-level sensors from highly diverse urban environments

The high diversity of urban environments, at both the inter-city and intra-city levels, poses challenges for robotics research. These challenges include discrepancies in urban features between cities and the degradation of sensor measurements within a city. With this diversity in mind, this paper provides Light Detection and Ranging (LiDAR) and image data acquired in complex urban environments. In contrast to existing datasets, the presented dataset encapsulates diverse complex urban features and addresses the major issues of complex urban areas, such as unreliable and sporadic Global Positioning System (GPS) data, multi-lane roads, complex building structures, and an abundance of highly dynamic objects. This paper provides two types of LiDAR sensor data (2D and 3D) as well as navigation sensor data at both commercial-level and high-level accuracy. These two tiers of sensor data are provided so that algorithms can also be fully validated against consumer-grade sensors. A forward-facing stereo camera captured visual images of the environment, and the vehicle position estimated through simultaneous localization and mapping (SLAM) is offered as a baseline. This paper also presents 3D map data generated by the SLAM algorithm in the LASer (LAS) format for a wide array of research purposes, and a file player and a data viewer have been made available via the GitHub webpage to allow researchers to conveniently use the data in a Robot Operating System (ROS) environment. The provided file player is capable of sequentially publishing large quantities of data, similar to the rosbag player. The dataset in its entirety can be found at http://irap.kaist.ac.kr/dataset.
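Since the 3D map data is distributed in the LAS format, a minimal sketch of reading a LAS public header with only the Python standard library is shown below. The field offsets follow the fixed portion of the ASPRS LAS 1.2 public header block; the exact LAS version used by the dataset is an assumption here, and in practice a dedicated library such as laspy would normally be used instead.

```python
import struct

def read_las_header(buf: bytes) -> dict:
    """Parse the fixed portion of a LAS 1.2 public header block.

    Offsets follow the ASPRS LAS 1.2 specification; whether the
    dataset's map files use this exact version is an assumption.
    """
    if buf[:4] != b"LASF":                       # file signature, bytes 0-3
        raise ValueError("not a LAS file")
    major, minor = struct.unpack_from("<BB", buf, 24)        # version, bytes 24-25
    point_data_offset, = struct.unpack_from("<I", buf, 96)   # offset to point data
    point_format, record_len = struct.unpack_from("<BH", buf, 104)
    num_points, = struct.unpack_from("<I", buf, 107)         # number of point records
    scale = struct.unpack_from("<3d", buf, 131)              # x/y/z scale factors
    offset = struct.unpack_from("<3d", buf, 155)             # x/y/z offsets
    return {
        "version": (major, minor),
        "point_data_offset": point_data_offset,
        "point_format": point_format,
        "record_length": record_len,
        "num_points": num_points,
        "scale": scale,
        "offset": offset,
    }
```

Each stored point coordinate is an integer that must be multiplied by the scale factor and shifted by the offset to recover metric coordinates, which is why both fields are extracted here.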
