Glovebox Handling of High-Consequence Materials with Super Baxter and Gesture-Based Programming – 18598

The handling of high-consequence materials is a difficult task requiring safe object manipulation while avoiding the risk of contamination or spillage. Specific operations, including opening containers, segregating waste, and repackaging, must be executed safely inside contained spaces such as gloveboxes. However, workers’ ability and dexterity to manipulate objects through thick protective gloves are compromised in several ways. The fixed position of the glovebox’s arm ports restricts the movement of the workers’ arms and makes it hard to lift heavy objects, and the operational workspace inside the glovebox is limited by the reach of the gloves.

Worker safety is the paramount concern in glovebox operations and the very reason gloveboxes exist. Sharp edges and tools increase the risk of glove punctures, which may expose the operator to chemicals and radiation and risks contaminating the area outside the glovebox. Operators are also subject to ergonomic stressors from prolonged, repetitive operations. To achieve a high degree of human safety, robotic solutions for handling high-consequence materials inside the glovebox are desirable, as they remove operators from the hazards listed above.

However, robots generally lack the adaptability needed for most high-consequence material handling tasks, which are often unstructured or highly variable. Likewise, most human operators, while highly skilled in the tasks at hand (task expertise), lack the robot programming skills needed to adapt the robot’s motions to those variations (programming expertise). One way to bridge this divide is to make robot programming more intuitive to human operators. Since humans naturally teach one another through demonstration and learning, robotic “programming by demonstration” paradigms have begun to appear to reduce the burden of robot reprogramming.
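In its simplest form, programming by demonstration reduces to a record-and-replay loop: demonstrated poses are logged while the operator guides the arm, then streamed back to the controller. The sketch below is a minimal, hypothetical illustration of that idea; the class, method names, and joint-pose format are illustrative, not an actual robot API.

```python
from dataclasses import dataclass, field


@dataclass
class DemonstrationRecorder:
    """Minimal record-and-replay primitive for programming by
    demonstration (illustrative sketch, not a real robot interface)."""
    waypoints: list = field(default_factory=list)

    def record(self, joint_pose):
        # Log one demonstrated joint-space pose (e.g. sampled at 10 Hz
        # while the operator physically guides the arm).
        self.waypoints.append(tuple(joint_pose))

    def replay(self, send_command):
        # Stream the demonstrated poses back to a robot controller.
        # `send_command` stands in for the robot's motion interface.
        for pose in self.waypoints:
            send_command(pose)


# Usage: record a two-waypoint demonstration, then replay it into a
# list that stands in for the robot controller.
rec = DemonstrationRecorder()
rec.record([0.0, 0.5])
rec.record([0.2, 0.4])
executed = []
rec.replay(executed.append)
```

Real GbP systems go well beyond verbatim replay, inferring task intent and adapting to variation, but the record/replay skeleton is the common starting point.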
Gesture-Based Programming (GbP) pursues this goal by enabling a robot to observe the normal actions and affordances of a human performing a task and to learn to map them onto the robot’s own skills. This approach combines the strengths of both parties: the human’s adaptive decision making and the robot’s resilience to high-consequence materials. While robotic solutions do protect operators, their autonomy introduces other potential risks. Artificial intelligence is not perfect; it is a black box that produces results whose full extent may not be known precisely. In self-driving cars, a hypothetical AI problem is the “green man detector”: since a self-driving car has presumably never seen a green man, how can we know it will not veer toward him the first time it sees one? Likewise, even though GbP provides an open method for safely executing a task by replicating a human, incorrect task inferences can lead to collisions or unintended operations. Thus, a hardware safety measure is proposed to provide reliable operation inside the glovebox: truly safe operation is achieved by mechanically limiting the operational range of each arm link. The operational range is computed and validated through exhaustive offline simulations, which are made practical by affordable computing power.

WM2018 Conference, March 18–22, 2018, Phoenix, Arizona, USA

Super Baxter is one such robot that incorporates these capabilities. It is a human-like, bimanual robot being developed at the Collaborative Robotics Lab (CRL) in conjunction with the Dept. of Energy/Environmental Management, Rethink Robotics, Barrett Technology, and the NSF Center for Robots and Sensors for the Human Well-Being (RoSe-HUB). Super Baxter is envisioned to represent the next generation of collaborative robots that will be intrinsically human-safe as well as exceptionally human-intuitive.
In summary, Super Baxter will enable safe glovebox operations through low-shot Gesture-Based Programming combined with hardware safety measures. Two aspects of safety are discussed in the paper: first, the safety of the operator, achieved by allowing them to program Super Baxter remotely through Gesture-Based Programming; and second, the safety of the glovebox, achieved by incorporating hardware safety measures using joint limits. Further, Super Baxter’s Gesture-Based Programming capability is demonstrated on typical object-manipulation tasks in glovebox operations.
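The offline validation of mechanical joint limits can be sketched in miniature. The example below uses a hypothetical two-link planar arm with assumed link lengths, hard-stop ranges, and glovebox dimensions (none of these numbers come from Super Baxter); it exhaustively sweeps the limited joint ranges on a grid and checks that every reachable end-effector position stays inside the glovebox envelope.

```python
import itertools
import math

# Hypothetical two-link planar arm (all numbers assumed for illustration).
L1, L2 = 0.30, 0.25          # link lengths in metres
J1_RANGE = (-0.6, 0.6)       # shoulder hard-stop range, radians
J2_RANGE = (-1.2, 1.2)       # elbow hard-stop range, radians

# Hypothetical glovebox interior, treated as an axis-aligned 2-D box.
X_MIN, X_MAX = 0.0, 0.6
Y_MIN, Y_MAX = -0.45, 0.45


def fk(q1, q2):
    """Forward kinematics: end-effector position of the planar arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y


def inside_glovebox(x, y):
    return X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX


def validate_limits(steps=200):
    """Sweep the mechanically limited joint ranges on a dense grid and
    return every configuration whose end-effector leaves the box."""
    q1s = [J1_RANGE[0] + i * (J1_RANGE[1] - J1_RANGE[0]) / steps
           for i in range(steps + 1)]
    q2s = [J2_RANGE[0] + i * (J2_RANGE[1] - J2_RANGE[0]) / steps
           for i in range(steps + 1)]
    return [(q1, q2) for q1, q2 in itertools.product(q1s, q2s)
            if not inside_glovebox(*fk(q1, q2))]
```

With these assumed numbers the sweep finds no violating configurations; widening the joint ranges or shrinking the box would surface violations, which is precisely what the exhaustive offline simulation is meant to catch before hard stops are fixed in hardware.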
