In this paper, we describe our ongoing work to develop cooperative control of NASA's R5 Valkyrie humanoid robot for performing dexterous manipulation tasks inside gloveboxes, which are common in many nuclear facilities. These tasks can be physically demanding and pose a risk to an operator working in situ. For example, if a glove is ruptured, the operator could be exposed to radioactive material. In many cases, the operator also has low visibility and cannot reach the entire task space, requiring the use of additional tools located inside the glovebox. Such tasks include cleaning particulate from inside the glovebox by sweeping or vacuuming, separating a specific amount of a compound to be weighed on a scale, or grasping and manipulating objects inside the glovebox. There is potential to move the operator to a nearby, safe location and instead place a humanoid robot in the potentially hazardous environment. However, teleoperating a humanoid robot to perform dexterous tasks at a level comparable to a human hand remains a difficult problem.

Previous work on controlling humanoid robots often involves one or more operators using a standard 2D display with a mouse and keyboard as controllers. Successful interfaces use sensor fusion to provide information to the operator for increased situation awareness, but these designs have limitations. Gaining proper situation awareness by visualizing 3D information on a 2D screen takes time and increases the cognitive load on the user. If the operator can instead visualize and control the robot directly in three dimensions, this can increase situation and task awareness, reduce task time, reduce the chance of mistakes, and increase the likelihood of overall task success.

We propose a two-part system that combines an HTC Vive virtual reality headset with either the Vive handheld controllers or the Manus VR wearable gloves as the primary control. The operator wears the headset in a remote location and can visualize a reconstruction of the glovebox, created live from the robot's sensor scans and from sensors located inside the glovebox, providing a perspective traditionally unavailable to operators. By using the controllers or gloves to control the humanoid robot's hands directly, the operator can plan actions in the virtual reconstruction. When the operator is satisfied with the plan, the actions are sent to the real robot. To test this system, we have created a mockup of a glovebox that is accessible to Valkyrie, as well as several tasks that are a subsample of those that might be required when working in a real glovebox.

BACKGROUND

Gloveboxes are often used to handle nuclear material or perform experiments in a nuclear environment. These gloveboxes are enclosed structures designed to safely house radioactive material while providing safe access to professionals via ports on the side with built-in gloves. The tasks performed inside these gloveboxes vary widely, ranging from measuring compounds to using electrical equipment, and often involve fine manipulation of objects. In addition, gloveboxes require significant maintenance, with tasks such as cleaning and removing excess refuse. There are many safety features built into their design and protocols for use, but accidents can still occur [1].
When an accident occurs, the operators in the immediate vicinity are at the greatest risk, so there is a desire to perform necessary experiments or maintenance tasks with the operators in a remote location. One solution is to deploy a robotic agent to operate the glovebox, with the supervisors and operators nearby but removed from the hazard. If the robot can perform the necessary tasks safely and reliably, this increases the safety of the operators in the event that something does go wrong.

A typical glovebox is a rectangular compartment with two glove ports positioned so that a standing person can easily put both arms inside; in reality, however, gloveboxes vary widely. They range in size from small workspaces to as large as a room. While two glove ports at a fixed height, approximately shoulder-width apart, is a common layout, some gloveboxes have only a single porthole, and others have many portholes at different heights and positions. When our team toured the Savannah River Site (SRS) in South Carolina, our hosts detailed the wide range of tasks and environments where a glovebox-capable robot could be useful, ranging from measuring compounds and using electrical equipment to cleaning and maintenance tasks. With these tasks in mind, a robotic solution needs to handle the wide range of situations that could arise.

Figure 1. Example glovebox setups in use (left from [2], right from [3]).

To meet these task requirements, the robot needs manipulators capable of grasping and using the tools common in a constrained glovebox environment. The robot also needs to be able to position itself, and possibly move between different glove portholes, to perform tasks as required. One robotic platform that can easily change its position is a humanoid robot. To test this case, the team is using the R5 Valkyrie created by NASA [4]. The R5 Valkyrie stands 6 feet (1.83 meters) tall, with two 7 degree-of-freedom (DOF) arms and 6-DOF hands. Her hands are shaped much like a person's, with three fingers and an opposable thumb, so she can grasp and operate tools similar to those a human uses, and work in similar environments with minimal redesign.

INTRODUCTION

While significant research has been conducted with robots in domains such as telepresence, homecare, and warehouse delivery systems, controlling humanoid robots is by comparison far less explored. The largest exploration of humanoid robots was conducted during the DARPA Robotics Challenge (DRC), where teams competed to perform tasks such as opening a door, turning a valve, and walking up stairs [5]. One lesson established by this research is that full autonomy can be very time consuming to implement and to adapt to new situations [6]. By using a shared control strategy, where some components are handled autonomously and some by the human operator, the benefits of each can be maximized while reducing development time [7]. Automated perception, for example, is very difficult in changing environments, yet tends to be trivial and quick for a human operator given the right information. Even if the final goal is an autonomous solution, it can be desirable to start with a skilled operator first.
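To make this division of labor concrete, the minimal sketch below splits a single pick action between the two parties: the operator supplies the perception-level decision (a grasp pose confirmed in the virtual scene), while the system autonomously turns that decision into a motion. All names here (GraspTarget, plan_arm_trajectory) are hypothetical illustrations, not the team's actual software.

"""Illustrative sketch of a shared-control split (hypothetical names).

The human contributes what people are good at (perceiving the scene and
choosing the target); the robot contributes what machines are good at
(generating and executing a precise trajectory).
"""
from dataclasses import dataclass


@dataclass
class GraspTarget:
    """A grasp pose the operator confirmed in the VR scene (robot frame)."""
    x: float       # meters
    y: float
    z: float
    roll: float    # radians
    pitch: float
    yaw: float


def plan_arm_trajectory(target: GraspTarget, steps: int = 10) -> list:
    """Stand-in for an autonomous motion planner.

    A real system would invoke an arm or whole-body planner; this stub
    simply interpolates from the origin so the example runs as-is.
    """
    return [
        GraspTarget(target.x * i / steps, target.y * i / steps,
                    target.z * i / steps, target.roll, target.pitch, target.yaw)
        for i in range(1, steps + 1)
    ]


def shared_control_pick(operator_choice: GraspTarget) -> None:
    # Human: perception and decision. Robot: planning and execution.
    for wp in plan_arm_trajectory(operator_choice):
        print(f"move hand to ({wp.x:.2f}, {wp.y:.2f}, {wp.z:.2f})")


if __name__ == "__main__":
    # The operator selects a flask inside the virtual glovebox; the robot
    # plans and executes the reach autonomously.
    shared_control_pick(GraspTarget(0.45, -0.10, 0.95, 0.0, 1.57, 0.0))

The important property is the boundary between the two roles: nothing on the robot side needs to recognize objects, and nothing on the human side needs to compute joint motions.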
With this in mind, we are first pursuing a shared control solution, where most of the decision making is performed by a skilled, knowledgeable operator. The interface therefore needs to present the information and controls that allow the operator to perform their duties at a level comparable to being physically present.

INTERFACE DESIGN

Our proposed solution is to allow a skilled operator in a remote location to control the robot through a virtual reality (VR) headset. Using a commercially available headset, the HTC Vive, optionally combined with the Manus VR gloves, the operator can visualize and control Valkyrie. The HTC Vive has built-in head tracking for both position and orientation [8], which allows the operator to navigate around a virtual reconstruction of the world by physically moving around their remote open space. Doing so allows a quick and accurate mental reconstruction of the remote world where the robot is located. The HTC Vive comes with two controllers, one for each hand, each with a trackpad, several buttons, and the same built-in tracking as the headset. As an alternative to the handheld controllers, the operator can wear the Manus VR gloves, which allow the system to accurately track the operator's fingers. By combining the gloves with tracking sensors attached to the wrist, the team can track the position of the operator's hand and fingers. With this setup, the operator can visualize and interact with a virtual reconstruction of what the robot sees.

Figure 2. Using the Vive controllers while viewing a 3D model of the robot.

Egocentric and Exocentric Design

VR enables a variety of ways of interacting with a robot, some of which simply recreate concepts from traditional interfaces and some of which are possible only in a system like VR. To help categorize the different types of controls and visualizations, we break them down into either egocentric or exocentric. Egocentric interfaces, where one sees the world from the position of the robot, tend to be better for navigation-type tasks. Exocentric interfaces, where one sees the world from an external point of view, tend to be better for understanding the environment's structure [7]. Many interfaces combine elements of both, or allow the operator to switch between them, depending on which works better for the task. The team has incorporated this design by allowing the operator to switch between an egocentric and an exocentric viewpoint. In a typical session, the operator starts as a disembodied avatar, able to navigate around the virtual world at will. They can see the robot's position, as well as sensor information such as the point cloud generated by Valkyrie's lidar. Using this information, the operator can build an accurate mental model of the area and plan out their tasks. The operator can then switch to an egocentric view, seeing the world directly from the perspective of the robot.
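As a sketch of how such view switching might be wired up, the fragment below parents the VR camera either to the robot's head frame (egocentric) or to a free-flying avatar frame the operator moves at will (exocentric). The Pose type, frame names, and class structure are assumptions made for illustration; the actual interface is built on the Vive's own tracking APIs.

"""Sketch of egocentric/exocentric view switching (illustrative only)."""
from dataclasses import dataclass
from enum import Enum


@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0  # orientation reduced to one axis for brevity


class ViewMode(Enum):
    EGOCENTRIC = "robot_head"   # see through Valkyrie's eyes
    EXOCENTRIC = "free_avatar"  # disembodied view of the whole scene


class VrViewpoint:
    def __init__(self) -> None:
        self.mode = ViewMode.EXOCENTRIC
        self.avatar_pose = Pose(z=1.7)  # operator starts standing nearby

    def camera_frame(self, robot_head_pose: Pose) -> Pose:
        """Return the frame the headset's tracked pose is composed with."""
        if self.mode is ViewMode.EGOCENTRIC:
            return robot_head_pose  # headset motion offsets the robot head
        return self.avatar_pose     # headset motion offsets the free avatar

    def toggle(self) -> None:
        if self.mode is ViewMode.EGOCENTRIC:
            self.mode = ViewMode.EXOCENTRIC
        else:
            self.mode = ViewMode.EGOCENTRIC


if __name__ == "__main__":
    view = VrViewpoint()
    head = Pose(x=0.3, z=1.83, yaw=0.1)  # Valkyrie's head height
    print(view.camera_frame(head))       # exocentric: avatar frame
    view.toggle()
    print(view.camera_frame(head))       # egocentric: robot head frame

Because only the parent frame changes, the operator's physical head motion behaves identically in both modes, which keeps the switch comfortable and predictable.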
REFERENCES

[1] Twan Koolen, et al., "Team IHMC's Lessons Learned from the DARPA Robotics Challenge Trials," J. Field Robotics, 2015.
[2] Jean Scholtz, et al., "Analysis of human–robot interaction at the DARPA Robotics Challenge Finals," Int. J. Robotics Res., 2017.
[3] François Michaud, et al., "Egocentric and exocentric teleoperation interface using real-time, 3D video projection," 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2009.
[4] Maurice Fallon, "Perception and estimation challenges for humanoid robotics: DARPA Robotics Challenge and NASA Valkyrie," Security + Defence, 2016.