Fast Prediction of a Worker’s Reaching Motion Without a Skeleton Model (F-PREMO)

This paper proposes a fast and highly accurate method for predicting the reaching motions performed by a worker in cooperative work with a robot, called fast prediction of reaching motion (F-PREMO). Cooperative work between humans and robots has been permitted since a 2011 revision of the ISO standard, under the condition that a risk assessment is performed. To increase production efficiency, it is essential to predict the worker's movements and control the robot based on the prediction. This paper focuses on the movement of a worker's hand toward a target, called the reaching motion, which is relevant to many types of assembly tasks. Most existing methods require attaching markers to the workers or installing three-dimensional (3D) sensors in front of the workers, since they rely on real-time estimation of skeleton models of the workers. These requirements make it difficult to introduce prediction methods at production sites. To solve this problem, we propose a prediction method for the reaching motion that requires neither the attachment of markers nor restrictions on the placement of the 3D sensor. The proposed method first computes a feature vector of the reaching motion and then performs the prediction using a random forest with the computed feature vector as input. Experimental results show that the proposed method can predict the reaching motion from the initial 50% of the worker's movement with more than 80% accuracy in less than 5.2 ms.
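As a rough illustration of the prediction pipeline described above (a feature vector extracted from the observed portion of the reaching motion, fed to a random-forest classifier), the following is a minimal sketch using scikit-learn. The feature definition, synthetic trajectories, and target labels are assumptions for illustration only, not the authors' F-PREMO implementation.

```python
# Minimal sketch of the feature-vector + random-forest prediction step described
# in the abstract. The feature definition and synthetic data are illustrative
# assumptions, not the authors' actual F-PREMO implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_reaching_features(partial_traj: np.ndarray) -> np.ndarray:
    """Hypothetical feature vector for a partial hand trajectory of shape (T, 3):
    start point, latest observed point, and mean velocity."""
    start, current = partial_traj[0], partial_traj[-1]
    mean_vel = np.diff(partial_traj, axis=0).mean(axis=0)
    return np.concatenate([start, current, mean_vel])

rng = np.random.default_rng(0)
# Hypothetical reaching goals (e.g., part bins on a workbench), in meters.
targets = np.array([[0.4, 0.0, 0.1], [0.4, 0.3, 0.1], [0.4, -0.3, 0.1]])

def synth_trajectory(goal: np.ndarray, steps: int = 40) -> np.ndarray:
    """Noisy straight-line reach from the origin toward a goal (synthetic stand-in)."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return t * goal + 0.01 * rng.standard_normal((steps, 3))

# Build a training set: features from the first 50% of each synthetic reach,
# labeled by the index of the goal being reached for.
X, y = [], []
for label, goal in enumerate(targets):
    for _ in range(50):
        traj = synth_trajectory(goal)
        X.append(extract_reaching_features(traj[: len(traj) // 2]))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.stack(X), np.array(y))

# Predict the goal of a new reach from its initial 50% of motion.
query = extract_reaching_features(synth_trajectory(targets[1])[:20])
print("predicted target index:", clf.predict(query.reshape(1, -1))[0])
```

In this sketch the classifier sees only the first half of each trajectory during both training and prediction, mirroring the paper's claim of predicting the reaching motion from the initial 50% of the worker's movement.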
