Advances in machine vision for flexible feeding of assembly parts

Abstract Human-robot collaboration can be used to share workload and form semi-automated production systems. Assembly operations are recognized as having high potential for productivity gains by combining the best skills of humans and robots. Components and parts to be assembled need to be structured and presented to the robot in a known location and orientation. The process of presenting parts to the robot for assembly tasks is referred to as parts feeding. The feeding system needs to adapt to changes in part design, shape, location, and orientation to provide flexibility in production. Traditional automation methods for parts feeding rely on part-specific mechanical devices, e.g. vibratory bowl feeders, which are inflexible towards part variations. This hinders realizing the full flexibility potential of human-robot collaboration in assembly. Recent years have seen advances in machine vision that hold promise for feeding applications. This paper explores developments in machine vision for flexible feeding systems in human-robot assembly cells. A specification model is presented for developing a vision-guided flexible feeding system. Various vision-based feeding techniques are discussed and validated through an industrial case study. The results help compare the efficiency of each feeding technique for industrial application.
