The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth

In this paper we present a large dataset collected with a variety of mobile mapping sensors using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices: a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted, survey-grade LiDAR scanner to capture a detailed, millimeter-accurate 3D map of the test location (containing ~290 million points). Using this map we inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device at each LiDAR scan, to enable better evaluation of LiDAR and vision localization, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset, and we believe it will enable the systematic evaluation that many similar datasets have lacked. The dataset combines built environments, open spaces and vegetated areas so as to test localization and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition. The dataset is available at: this http URL
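Generating ground truth by registering each LiDAR scan against a survey-grade prior map rests, at its core, on a least-squares rigid alignment between corresponding points. The sketch below illustrates only that closed-form step (the Kabsch/SVD solution); it is a minimal, hypothetical example, not the authors' pipeline, which would also involve correspondence search and iteration as in full ICP. The function name `kabsch_align` and the toy data are assumptions for illustration.

```python
import numpy as np

def kabsch_align(scan_pts, map_pts):
    """Least-squares rigid transform (R, t) mapping scan_pts onto map_pts.

    Both arrays are (N, 3) with row-wise correspondences. This is the
    closed-form alignment step at the heart of ICP-style registration;
    a full pipeline would also establish correspondences and iterate.
    """
    cp = scan_pts.mean(axis=0)
    cq = map_pts.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (scan_pts - cp).T @ (map_pts - cq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation about z and a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
scan = np.random.default_rng(0).standard_normal((50, 3))
gt_map = scan @ R_true.T + t_true  # apply q_i = R p_i + t
R_est, t_est = kabsch_align(scan, gt_map)
```

Given exact correspondences, the recovered `R_est` and `t_est` match the true transform to numerical precision; in practice, registration accuracy is limited by scan noise and correspondence quality, which is why the survey-grade prior map matters.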