An automated pick-and-place benchmarking system in robotics

The reproducible and objective assessment of a robot's manipulation ability is a crucial topic in robotics, and both the definition of testing protocols and the automation of such test procedures are challenging. Most evaluation protocols in benchmarks and competitions involve interactions with a human referee that are not fully standardized, and they are further influenced by partially subjective criteria that vary across research areas. A promising and more objective solution is to automate benchmarking protocols for a range of reproducible manipulation tasks. We present an approach that is reduced to the essential elements, in which most of the process and evaluation is automated. The proposed system has low cost and setup effort and allows researchers to compare their robots without placing them side by side. To allow a wide range of robots to be benchmarked, all parts will be released as open source. In our system, an object is positioned in front of the benchmarked robot by a mobile minirobot that localizes itself on a table according to each task's predefined positions. By automating most of the process, we ensure that the tested object locations as well as the timings are reproducible and well documented. This approach fosters the objective comparability of robot performances.
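The automated protocol described above can be sketched as a simple trial loop: the minirobot delivers the object to each predefined pose, the benchmarked robot attempts the pick, and the system logs timing and outcome. The following is a minimal illustrative sketch, not the authors' implementation; the pose list, the `place_object` and `attempt_pick` callbacks, and all names are hypothetical placeholders for the minirobot driver and the benchmarked robot's controller.

```python
import time
from dataclasses import dataclass, field


@dataclass
class TrialResult:
    """Outcome of one pick attempt at a given table pose."""
    pose: tuple
    success: bool
    duration_s: float


@dataclass
class BenchmarkLog:
    """Collects per-trial results so runs are documented and comparable."""
    results: list = field(default_factory=list)

    def record(self, pose, success, duration_s):
        self.results.append(TrialResult(pose, success, duration_s))

    def summary(self):
        n = len(self.results)
        succ = sum(r.success for r in self.results)
        return {
            "trials": n,
            "successes": succ,
            "success_rate": succ / n if n else 0.0,
        }


# Hypothetical predefined task poses (x, y, yaw) on the table,
# in metres and radians; a real protocol would fix these per task.
TASK_POSES = [(0.10, 0.20, 0.0), (0.25, 0.20, 1.57), (0.40, 0.35, 3.14)]


def run_benchmark(place_object, attempt_pick, log):
    """Run one benchmark pass.

    place_object(pose) -- drives the minirobot to deliver the object.
    attempt_pick()     -- triggers the benchmarked robot; returns True
                          on a successful pick.
    """
    for pose in TASK_POSES:
        place_object(pose)                 # minirobot positions the object
        start = time.monotonic()
        success = attempt_pick()           # benchmarked robot acts
        log.record(pose, success, time.monotonic() - start)
    return log.summary()
```

Because the pose list and timing are fixed in software rather than managed by a referee, two labs running the same script obtain directly comparable logs.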
