A Learning-from-Observation Framework: One-Shot Robot Teaching for Grasp-Manipulation-Release Household Operations

A household robot is expected to perform a variety of manipulation operations with an understanding of the purpose of each task. To this end, robotic applications should provide an on-site teaching framework usable by non-experts. Here, we propose a Learning-from-Observation (LfO) framework for the grasp-manipulation-release class of household operations (GMR-operations). The framework maps a human demonstration to predefined task models through one-shot teaching. Each task model contains both high-level knowledge regarding the geometric constraints of the task and low-level knowledge related to human postures. The key goal of this study is to design a task model that 1) covers a wide range of GMR-operations and 2) encodes the human postures needed to achieve them. We verify the applicability of our framework by testing the proposed LfO system on a real robot. In addition, we quantify the coverage of the task model by analyzing online videos of household operations. Within the context of one-shot robot teaching, the contribution of this study is a framework that covers a variety of GMR-operations while mimicking human postures during operation.
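
To make the two-layer structure of a task model concrete, the sketch below shows one way such a model might be represented in code. This is a minimal illustration under our own assumptions, not the paper's implementation; the names TaskModel, GeometricConstraint, and PostureKeyframe, and all of their fields, are hypothetical.

# Hypothetical sketch of a GMR-operation task model. It pairs high-level
# knowledge (the geometric constraints a task must satisfy) with low-level
# knowledge (human postures observed during the demonstration).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GeometricConstraint:
    """High-level knowledge: what the task must achieve geometrically."""
    constraint_type: str      # e.g., "prismatic", "revolute", "free"
    axis: List[float]         # direction of the allowed motion, if any
    displacement: float = 0.0 # demonstrated travel along the axis (m)

@dataclass
class PostureKeyframe:
    """Low-level knowledge: the human posture at a task boundary."""
    joint_angles: Dict[str, float]  # joint name -> angle (rad) from the demo
    timestamp: float                # seconds from the start of the demo

@dataclass
class TaskModel:
    """One manipulation task extracted from a one-shot demonstration."""
    name: str                       # e.g., "open-drawer"
    grasp_type: str                 # e.g., a grasp-taxonomy label
    constraints: List[GeometricConstraint] = field(default_factory=list)
    postures: List[PostureKeyframe] = field(default_factory=list)

# A demonstration would then be segmented into a grasp, one or more
# manipulation task models, and a release, each filled from observation.
drawer = TaskModel(
    name="open-drawer",
    grasp_type="power-grip",
    constraints=[GeometricConstraint("prismatic", [1.0, 0.0, 0.0], 0.25)],
)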
