Robot Skill Learning based on Interacting with RGB-D Image

The human ability to acquire skills from teachers and from experience is a key inspiration for intelligent-robot research. This paper presents a robot skill acquisition method based on learning from demonstration (LfD). We propose image-interaction demonstration (I2D), which exploits the ability of a depth camera to capture three-dimensional (3D) information. The operator demonstrates a task by interacting with objects in the RGB-D image and selecting actions from a predefined action set, and the robot executes the resulting demonstration actions. A skill learning model then derives a policy from these demonstrations. The model consists of an object-list network and a policy learning network, which learn complementary information from the demonstration data. During testing, the object-list network outputs n objects whose states form part of the input to the policy network, which predicts an action and a target object to control the robot's manipulation. Experiments on a UR5 robot show that our LfD model efficiently acquires the Block Stacking task.
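The two-network structure described above can be sketched as follows. This is a hypothetical illustration only: the network architectures, layer sizes, object-state dimensions, and action-set size are all assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch of the two-network skill model: an object-list
# network maps an RGB-D frame to n object states, and a policy network
# maps those states to an action and a target object. All dimensions
# and layers below are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

N_OBJECTS = 4    # number of objects reported by the object-list network (assumed)
STATE_DIM = 7    # per-object state, e.g. 3D position + orientation (assumed)
N_ACTIONS = 3    # size of the discrete action set (assumed)
FEAT_DIM = 64

class ObjectListNet(nn.Module):
    """Encodes an RGB-D frame into a list of n object states."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),  # 4 channels: RGB + depth
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, N_OBJECTS * STATE_DIM),
        )

    def forward(self, rgbd):
        return self.encoder(rgbd).view(-1, N_OBJECTS, STATE_DIM)

class PolicyNet(nn.Module):
    """Predicts a discrete action and a target object from object states."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(N_OBJECTS * STATE_DIM, FEAT_DIM), nn.ReLU())
        self.action_head = nn.Linear(FEAT_DIM, N_ACTIONS)
        self.target_head = nn.Linear(FEAT_DIM, N_OBJECTS)

    def forward(self, obj_states):
        h = self.trunk(obj_states.flatten(1))
        return self.action_head(h), self.target_head(h)

# Forward pass on a dummy RGB-D frame (batch of 1, 4 x 64 x 64).
obj_net, policy = ObjectListNet(), PolicyNet()
states = obj_net(torch.randn(1, 4, 64, 64))
action_logits, target_logits = policy(states)
```

At test time, the argmax over `action_logits` and `target_logits` would select the manipulation command sent to the robot; the specific object-state encoding is an assumption here.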