Towards Reinforcement based Learning of an Assembly Process for Human Robot Collaboration

Abstract In an era of transformation in manufacturing, from mass production to mass customization, advances in human-robot interaction in industry have taken many forms. However, reducing the amount of programming required from an expert by using natural modes of communication remains an open problem. We propose an approach based on Interactive Reinforcement Learning that learns a complete collaborative assembly process. The learning proceeds in two steps. The first step consists of modelling the simple tasks that compose the assembly process, using a task-based formalism. The robotic system then uses these modelled simple tasks and proposes to the user a set of possible tasks at each step of the assembly process. The user selects an option and thereby teaches the system which task to perform when. To reduce the number of actions proposed, the system considers additional information such as user and robot capabilities and object affordances. The set of action proposals is further reduced by organizing the proposed actions into a goal-based hierarchy and by including action prerequisites. The learning framework is shown to learn a complex human-robot collaborative assembly process and to be intuitive for the user. It also allows different users to teach different assembly processes to the robot, and demonstrates the ability to deal with novel situations during the assembly process.
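As a rough illustration of the proposal-and-selection loop summarized above, the sketch below filters modelled tasks by prerequisites, agent capabilities, and object affordances, then records the user's choices as the learned ordering. It is a minimal sketch only; all names (Task, feasible_tasks, teach_assembly, ask_user) are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): an interactive
# teaching loop in which only feasible modelled tasks are proposed and the
# user's selections are recorded as the learned assembly ordering.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    required_capability: str               # e.g. "grasp-small" (hypothetical)
    required_affordance: str                # e.g. "insertable" (hypothetical)
    prerequisites: set = field(default_factory=set)   # task names done first

@dataclass
class AssemblyState:
    completed: set = field(default_factory=set)

def feasible_tasks(tasks, state, capabilities, affordances):
    """Filter modelled tasks by prerequisites, capabilities and affordances."""
    return [
        t for t in tasks
        if t.prerequisites <= state.completed          # prerequisites satisfied
        and t.required_capability in capabilities      # user or robot can do it
        and t.required_affordance in affordances       # objects afford the action
    ]

def teach_assembly(tasks, capabilities, affordances, ask_user):
    """Interactively learn a task ordering: propose, let the user pick, record."""
    state, policy = AssemblyState(), []
    while len(state.completed) < len(tasks):
        options = feasible_tasks(tasks, state, capabilities, affordances)
        if not options:
            break                                      # blocked or novel situation
        chosen = ask_user(options)                     # user selects one proposal
        policy.append(chosen.name)
        state.completed.add(chosen.name)
    return policy
```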