Multi-Sensor Perception Strategy to Enhance Autonomy of Robotic Operation for Uncertain Peg-in-Hole Task

The peg-in-hole task with uncertain object features is a typical case of robotic operation in real-world unstructured environments. Under the visual occlusion and real-time constraints typical of such tasks, it is nontrivial to perceive the object and make operational decisions autonomously. In this paper, a Bayesian-network-based strategy is presented to seamlessly combine multiple heterogeneous sensing modalities, as humans do. In the proposed strategy, an interactive exploration method, implemented with hybrid Monte Carlo sampling and particle filtering, is designed to identify initial estimates of the object features, and a memory adjustment method and an inertial thinking method are introduced to correct the target position and the shape features of the object, respectively. Based on Dempster–Shafer evidence theory (D-S theory), a fusion decision strategy is designed using probabilistic models of forces and positions; it guides the robot motion after each update of the estimated object features and enables the robot to judge whether the desired operation target has been achieved or the feature estimates need to be updated. Meanwhile, a compliance (pliability) model is introduced into the repeated exploration, planning, and execution steps to reduce the interaction forces and the number of explorations. The effectiveness of the strategy is validated in simulations and on a physical robot task.
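
To make the two core ingredients of the strategy concrete, the sketch below illustrates (a) a particle-filter measurement update for a hole-position estimate and (b) Dempster's rule of combination fusing force-based and position-based evidence about peg alignment. This is a minimal illustrative sketch, not the authors' implementation: all function names, the noise level, and the {aligned, misaligned} frame of discernment are assumptions made for the example.

```python
# Minimal sketch (assumed names and parameters, not the paper's code):
# particle-filter update of the hole-position estimate, then D-S fusion of
# force- and position-based evidence to judge whether the peg is aligned.
import numpy as np

def particle_filter_update(particles, weights, measurement, noise_std=0.002):
    """One measurement update: reweight particles (hypothesised hole x-y
    positions, metres) by a Gaussian likelihood of the observed contact
    position, then resample systematically to avoid degeneracy."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / noise_std ** 2) + 1e-300
    weights /= weights.sum()
    # Systematic resampling
    positions = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

def dempster_combine(m1, m2, frame=("aligned", "misaligned")):
    """Dempster's rule for two mass functions over the same frame of
    discernment; keys are frozensets of hypotheses, 'Theta' is the full
    frame (total ignorance)."""
    theta = frozenset(frame)
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = (a if a != "Theta" else theta) & (b if b != "Theta" else theta)
            if not inter:
                conflict += ma * mb          # conflicting mass, renormalised away
            else:
                key = "Theta" if inter == theta else inter
                combined[key] = combined.get(key, 0.0) + ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Example fusion of two (hypothetical) evidence sources about alignment.
m_force = {frozenset({"aligned"}): 0.6, frozenset({"misaligned"}): 0.1, "Theta": 0.3}
m_pos   = {frozenset({"aligned"}): 0.5, frozenset({"misaligned"}): 0.2, "Theta": 0.3}
print(dempster_combine(m_force, m_pos))
```

In this sketch the fused belief in "aligned" exceeds either individual source, which is the behaviour the fusion decision step relies on: concordant force and position evidence reinforces the decision to proceed, while conflict pushes mass back toward updating the feature estimates.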
