Are we ready for beyond-application high-volume data? The Reeds robot perception benchmark dataset

This paper presents a dataset, called Reeds, for research on robot perception algorithms. The dataset aims to provide demanding benchmark opportunities for algorithms, rather than an environment for testing application-specific solutions. A boat was selected as the logging platform in order to provide highly dynamic kinematics. The sensor package includes six high-performance vision sensors, two long-range lidars, radar, as well as GNSS and an IMU. The spatiotemporal resolution of the sensors was maximized in order to provide large variation and flexibility in the data, offering evaluation at a large number of resolution presets derived from the resolutions found in other datasets. Reeds also provides the means for a fair and reproducible comparison of algorithms by running all evaluations on a common server backend. As the dataset contains massive-scale data, this evaluation principle also avoids moving data unnecessarily. It was also found that naive evaluation, where each algorithm is run sequentially, was not practical, as the fetch-and-decode step for each frame would not scale. Instead, each frame is decoded only once and then fed to all algorithms in parallel, including GPU-based algorithms.
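The decode-once, fan-out evaluation principle described above can be illustrated with a minimal sketch. This is not the Reeds server implementation; all function and parameter names below are illustrative assumptions. The idea shown is that each encoded frame is fetched and decoded a single time and the resulting frame is then handed to every registered algorithm in parallel.

```python
# Minimal sketch (assumed, not the Reeds backend) of decode-once,
# parallel fan-out evaluation: one decode per frame, all algorithms
# consume the same decoded frame concurrently.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, Iterable, List

import numpy as np

# An "algorithm" here is any callable that maps a decoded frame to a result dict.
Algorithm = Callable[[np.ndarray], dict]


def evaluate_all(encoded_frames: Iterable[bytes],
                 decode: Callable[[bytes], np.ndarray],
                 algorithms: Dict[str, Algorithm]) -> Dict[str, List[dict]]:
    """Decode each frame once and run every algorithm on it in parallel."""
    results: Dict[str, List[dict]] = {name: [] for name in algorithms}
    with ThreadPoolExecutor(max_workers=max(1, len(algorithms))) as pool:
        for encoded in encoded_frames:
            frame = decode(encoded)  # single fetch-and-decode per frame
            futures = {name: pool.submit(algo, frame)
                       for name, algo in algorithms.items()}
            for name, future in futures.items():
                results[name].append(future.result())
    return results


if __name__ == "__main__":
    # Toy usage: two dummy "algorithms" sharing the same decoded frames.
    dummy_frames = [np.random.randint(0, 255, (4, 4), dtype=np.uint8).tobytes()
                    for _ in range(3)]
    decode = lambda raw: np.frombuffer(raw, dtype=np.uint8).reshape(4, 4)
    algos = {"mean_intensity": lambda f: {"mean": float(f.mean())},
             "max_intensity": lambda f: {"max": int(f.max())}}
    print(evaluate_all(dummy_frames, decode, algos))
```

In practice the parallel workers would be GPU-backed evaluation processes rather than threads, but the scaling argument is the same: the per-frame fetch-and-decode cost is paid once instead of once per algorithm.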
