Robots learn to handle objects: new developments and opportunities

Zusammenfassung: For the future deployment of robots, industry experts call for robots that are safe and give the user clear feedback on what the robot is doing and why. One of the most important capabilities for reaching this goal is reliable perception of the environment, which equips the robot with a better understanding of the world. In this article we give an overview of current developments. With steadily increasing computing power and new imaging sensors, such as stereo or depth cameras, the possibilities for recognising and understanding objects and their surroundings keep improving. Large numbers of objects can be learned from image databases and later recognised again. Furthermore, models can also be learned from the 3D CAD data of objects, so that classes of objects can be recognised without having to model individual objects beforehand. In addition, typical furnishings such as tables, assembly stations, chairs and cabinets can be learned from examples. This makes it possible to give robots a first understanding of their environment, opening up new applications for industrial robots as well as service robots in industry, in the service sector, and for future robots at home.

Abstract: Experts predict that future robot applications will require safe and predictable operation: robots will need to be able to explain what they are doing in order to be trusted. To reach this goal, they will need to perceive their environment and its objects so as to better understand the world and the tasks they have to perform. This article gives an overview of recent advances, focusing on options to model, detect, classify, track, grasp and manipulate objects. With the advent of colour and depth (RGB-D) cameras and of deep learning methods, robot vision has advanced considerably over the last years. It is possible to model and recognise objects, though proof in industrial settings is still outstanding. Given an initial detection of larger structures such as tables, chairs or assembly stations, relations between objects and their surroundings can be obtained, leading to a first interpretation of the scene. We highlight present developments and point out future directions towards service and industrial robotics applications.
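The detection of larger supporting structures mentioned above typically starts by finding dominant planes (table tops, floors, shelf boards) in a depth image or point cloud. As a minimal, hedged sketch of this idea, the following self-contained RANSAC plane fit over an N×3 NumPy point cloud illustrates the principle; the function name `ransac_plane` and all parameters are illustrative assumptions, not an API from the article or from a specific library (production systems would typically use an optimised implementation such as the one in the Point Cloud Library).

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a dominant plane to an Nx3 point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0,
    where `normal` is a unit vector.
    """
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        # Hypothesise a plane from three randomly sampled points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample, skip
        n = n / norm
        d = -n.dot(sample[0])
        # Score the hypothesis by counting points close to the plane.
        dist = np.abs(points @ n + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers
```

Once such a plane is found, the points above it can be clustered into object candidates, which is one common route from raw depth data to the object-and-setting relations discussed in the article.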
