Lighting- and Occlusion-Robust View-Based Teaching/Playback for Model-Free Robot Programming

In this paper, we investigate a model-free method for robot programming referred to as view-based teaching/playback, which uses neural networks to map factor scores of input images onto robot motions. The method achieves greater robustness than conventional teaching/playback to changes in task conditions, such as the initial pose of the manipulated object. We devised an online algorithm that adaptively switches between the range and grayscale images used in view-based teaching/playback. When applied to pushing tasks with an industrial manipulator, view-based teaching/playback with the proposed algorithm succeeded even under changing lighting conditions. We also devised an algorithm that copes with occlusions by using subimages, and it likewise worked successfully in experiments. A minimal illustrative sketch of the general image-to-motion mapping is given after the abstract.
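The sketch below illustrates the general idea of a view-based teaching/playback pipeline, not the authors' implementation: images logged during teaching are compressed into factor scores (here via scikit-learn's PCA, in the spirit of principal component analysis), and a small feed-forward network maps those scores to the motion command issued at each playback step. The array shapes, network size, placeholder data, and the `playback_step` helper are all illustrative assumptions.

```python
"""Hedged sketch of view-based teaching/playback: PCA factor scores of camera
images mapped to robot motion commands by a small neural network."""
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# --- Teaching phase (placeholder data) --------------------------------------
# Suppose N grayscale frames of size H x W were logged together with the robot
# motion (e.g. a Cartesian displacement command) applied at each step.
N, H, W = 200, 64, 64
images = rng.random((N, H, W))       # stand-in for recorded camera frames
motions = rng.random((N, 3)) - 0.5   # stand-in for logged (dx, dy, dtheta)

# Compress each flattened image into a few factor scores with PCA.
pca = PCA(n_components=8)
scores = pca.fit_transform(images.reshape(N, -1))

# Train a small feed-forward network mapping factor scores to motion commands.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(scores, motions)

# --- Playback phase ----------------------------------------------------------
def playback_step(image: np.ndarray) -> np.ndarray:
    """Project a new camera frame onto the PCA subspace and query the network
    for the motion command to send to the manipulator."""
    score = pca.transform(image.reshape(1, -1))
    return net.predict(score)[0]

# Example playback query on a new (here random) frame.
print(playback_step(rng.random((H, W))))
```

In the same spirit, switching between range and grayscale inputs or restricting attention to subimages would only change how `images` is constructed before the PCA step; the score-to-motion mapping itself is unchanged.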
