Self-adaptive Cobots in Cyber-Physical Production Systems

Full automation in certain industries, such as the automotive industry, has proven to be disadvantageous. Robots are highly capable at tasks that are repetitive and demand precision. However, a hybrid solution that combines the adaptability and resourcefulness of humans with the precision and efficiency of machines, cooperating in the same task, is the next step for automation. Manipulators, however, lack self-adaptability and true collaborative behaviour. By integrating vision systems, manipulators can perceive their environment and understand complex interactions. In this paper, a vision-based collaborative proof-of-concept framework is proposed using the Kinect v2, a UR5 robotic manipulator and MATLAB. This framework implements three behavioural modes: (1) a Self-Adaptive mode for obstacle detection and avoidance, (2) a Collaborative mode for physical human-robot interaction and (3) a standby Safe mode. These modes are activated through gestures, using the body-tracking and gesture-recognition algorithms of the Kinect v2. Additionally, to allow self-recognition of the robot, Region Growing segmentation is combined with the UR5's Forward Kinematics for precise, near real-time segmentation. Furthermore, self-adaptive reactive behaviour is implemented through an artificial repulsive action applied to the manipulator's end-effector. Reaction times were tested for all three modes: the Collaborative and Safe modes took up to 5 seconds to accomplish the movement, while the Self-Adaptive mode could take up to 10 seconds between reactions.
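The artificial repulsive action mentioned above can be illustrated with a classic potential-field formulation (Khatib-style), in which obstacle points inside an influence radius push the end-effector away with a force that grows as distance shrinks. The sketch below is only an assumed instance of this general technique in Python; the function name, gains and thresholds are illustrative and not taken from the paper, whose actual implementation is in MATLAB.

```python
import numpy as np

def repulsive_velocity(ee_pos, obstacle_pts, d0=0.3, gain=0.05, v_max=0.25):
    """Potential-field repulsion for an end-effector (illustrative sketch).

    ee_pos       : (3,) Cartesian position of the end-effector [m]
    obstacle_pts : (N, 3) obstacle points, e.g. from a depth camera
    d0           : influence radius beyond which obstacles are ignored [m]
    gain, v_max  : assumed tuning constants, not the paper's values
    """
    v = np.zeros(3)
    for p in np.atleast_2d(obstacle_pts):
        diff = ee_pos - p                 # vector pointing away from obstacle
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:
            # Repulsion magnitude grows as the obstacle nears, vanishes at d0
            v += gain * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    n = np.linalg.norm(v)
    if n > v_max:
        v *= v_max / n                    # clamp to a safe Cartesian speed
    return v
```

In a reactive loop, this velocity would be sent to the manipulator each cycle, so the end-effector retreats while an obstacle is inside the influence radius and resumes its task once the repulsion vanishes.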