Simultaneous End-User Programming of Goals and Actions for Robotic Shelf Organization

Arranging items on shelves in stores or warehouses is a tedious, repetitive task that robots could feasibly perform. The diversity of products available in stores and the distinct setups and preferences of each store make pre-programming a robot for this task extremely challenging. Instead, our work argues for enabling end-users to customize the robot to their specific objects and setup at deployment time by programming it themselves. To that end, this paper contributes (i) a task representation for shelf arrangements based on a large dataset of grocery store shelf images, (ii) a method for inferring goal configurations from user inputs, including demonstrations and direct parameter specifications, and (iii) a system implementation of the proposed approach that allows simultaneous learning of task goals and actions. We evaluate our goal inference approach with ten teaching strategies that combine alternative user inputs in different ways, both on the large dataset of grocery configurations and with real human teachers through an online user study (N = 32). We evaluate our full system, implemented on a Fetch mobile manipulator, on eight benchmark tasks that demonstrate end-to-end programming and execution of shelf arrangement tasks.