Human-Machine Collaborative Systems: Intelligent Virtual Fixtures and Space Applications

Human-Machine Collaborative Systems (HMCSs) sense human operator intent and provide context-appropriate assistance to improve performance in applications ranging from space exploration to minimally invasive surgery. The underlying technology in our HMCSs is the virtual fixture. Virtual fixtures are software-generated force and position signals applied to human operators in order to improve the safety, accuracy, and speed of robot-assisted manipulation tasks. They are effective and intuitive because they capitalize on both the accuracy of robotic systems and the intelligence of human operators. In this position paper, we describe our HMCS technology and its potential for application in operational tasks in space.

I. HUMAN-MACHINE COLLABORATIVE SYSTEMS

The goal of the Human-Machine Collaborative Systems (HMCS) project is to investigate human-machine cooperative execution of tool-based manipulation activities. The motivation for collaborative systems is based on evidence suggesting that humans operating in collaboration with robotic mechanisms can take advantage of robotic speed and precision, while avoiding the difficulties of full autonomy by retaining the human “in the loop” for essential decision making and/or physical guidance [14]. Our previous work on HMCS has addressed microsurgery [3], minimally invasive surgery [1], cell manipulation [6], and several fine-scale manufacturing tasks [7], but the basic principles apply broadly in many domains. In this paper, we explore possible applications in space.

Our approach to HMCS focuses on three inter-related problems: (1) Synthesis: developing the systems tools necessary for describing and implementing HMCSs; (2) Modeling: given sensor traces of a human performing a task, segmenting those traces into logical task components and/or measuring the compatibility of a given HMCS structure with that sequence of components; and (3) Validation: measuring and evaluating HMCS performance.
Figure 1 depicts the high-level structure of our HMCS framework, which includes three main components: the human, the augmentation system, and an observer. We assume that a user primarily manipulates the environment using the augmentation system, although unaided manipulation may take place in some settings (dashed line). The user is able to visually observe the tool and surrounding environment, and directs an augmentation device using force and position commands. The system may also have access to endpoint force data, targeted visual data, and other application-dependent sensors, e.g., intra-operative imaging. The role of the observer is to assess available sensor data (including haptic feedback from the user) and initiate, modify, or terminate various forms of assistance. Optional direct interaction between the observer and the user may also be used to convey information or otherwise synchronize their interaction.

The basic notion of HMCS is clearly related to traditional teleoperation, although the goal in HMCS is not to “remotize” the operator [5] but rather to provide appropriate levels of operator assistance depending on context. At one extreme, shared control [4] can be viewed as an HMCS for manipulation tasks in which some degrees of freedom are controlled by the machine and others by the human. At the other extreme, supervisory control [13] gives a more discrete, high-level notion of human-machine interaction. Our notion of HMCS essentially incorporates both views, combining them with broader questions of modeling manipulation activities consisting of multiple steps and varying levels of assistance, and validating those models against human performance data.

Fig. 1. Structure of a Human-Machine Collaborative System.

This work is partially supported by National Science Foundation Grants #EEC-9731478 and #ITR-0205318.

II. VIRTUAL FIXTURES

An important component of our HMCS framework is the virtual fixture.
Virtual fixtures are software-generated force and position signals applied to human operators via robotic devices. Virtual fixtures help humans perform robot-assisted manipulation tasks by limiting movement into restricted regions and/or influencing movement along desired paths. By capitalizing on the accuracy of robotic systems, while maintaining a degree of operator control, human-machine systems with virtual fixtures can achieve safer and faster operation.

To visualize the benefits of virtual fixtures, consider a common physical fixture: a ruler. A straight line drawn by a human with the help of a ruler is drawn faster and straighter than a line drawn freehand. Similarly, a robot can apply forces or positions to a human operator to help him or her draw a straight line. However, a robot (or haptic device) has the additional flexibility to provide assistance of varying type, level, and geometry.

Virtual fixtures show great promise for tasks that require better-than-human levels of accuracy and precision, but also require the intelligence provided by a human directly in the control loop. Traditional cooperative manipulation or telemanipulation systems make up for many of the limitations of autonomous robots (e.g., limitations in artificial intelligence, sensor-data interpretation, and environment modeling), but the performance of such systems is still fundamentally constrained by human capabilities. Virtual fixtures, on the other hand, provide an excellent balance between autonomy and direct human control. Virtual fixtures can act as safety constraints by keeping the manipulator from entering potentially dangerous regions of the workspace, or as macros that assist a human user in carrying out a structured task. Applications for virtual fixtures include robot-assisted surgery, difficult assembly tasks, and inspection and manipulation tasks in dangerous environments.
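The two fixture roles above — guiding along a desired path (the ruler) and keeping the tool out of a restricted region — can each be sketched as a simple spring law. The following is a minimal illustrative sketch, not the paper's implementation; the gains and the spherical forbidden-region geometry are assumptions:

```python
import numpy as np

def guidance_force(p, a, d, k=50.0):
    """Guidance fixture: a virtual 'ruler'. Pulls the tool tip `p`
    toward the line through point `a` with direction `d` using a
    simple spring of stiffness `k` (illustrative values)."""
    d = d / np.linalg.norm(d)
    e = (p - a) - np.dot(p - a, d) * d   # deviation perpendicular to the line
    return -k * e                        # restoring force toward the path

def forbidden_region_force(p, c, r, k=200.0):
    """Forbidden-region fixture: pushes the tool tip `p` out of a
    spherical keep-out region of radius `r` centered at `c`.
    No force is applied while the tool stays outside the sphere."""
    v = p - c
    dist = np.linalg.norm(v)
    if dist >= r:
        return np.zeros(3)               # safe: no assistance force
    if dist < 1e-12:
        return np.array([k * r, 0.0, 0.0])  # degenerate: exactly at center
    return k * (r - dist) * v / dist     # push outward, growing with depth

# Tool tip 2 mm off a line along x: the fixture pushes it back.
f_guide = guidance_force(np.array([0.1, 0.002, 0.0]),
                         np.zeros(3), np.array([1.0, 0.0, 0.0]))

# Tool tip 5 mm inside a 10 mm keep-out sphere: pushed outward.
f_avoid = forbidden_region_force(np.array([0.005, 0.0, 0.0]),
                                 np.zeros(3), 0.010)
```

Note that the guidance spring only penalizes the component of the error perpendicular to the path, so the operator remains free to move along it — the software analogue of sliding a pen along a ruler.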
Virtual fixtures can be applied to two types of robotic manipulation systems: cooperative manipulators and telemanipulators. In cooperative manipulation, the human uses a robotic device to directly manipulate an environment. In telemanipulation, a human operator manipulates a master robotic device, and a slave robot manipulates an environment while following the commands of the master.

In general, the robots used in these systems can be of the impedance or the admittance type [2]. Robots of the impedance type, such as typical haptic devices, are backdrivable with low friction and inertia, and have force-source actuators. Robots of the admittance type, such as typical industrial robots, are non-backdrivable and have velocity-source actuators. The velocity is controlled with a high-bandwidth low-level controller, and is assumed to be independent of applied external forces. Figure 2(a) shows the Johns Hopkins University Steady-Hand Robot [15], an admittance-type cooperative manipulator designed for microsurgical procedures. Figure 2(b) shows the da Vinci® Surgical System (Intuitive Surgical, Inc.), an impedance-type telemanipulator.
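On an admittance-type robot, a guidance fixture is naturally expressed as a force-to-velocity map that attenuates the user's off-path force component. The sketch below follows the general style of published admittance-control virtual fixture laws rather than any specific controller from this paper; the gain values and names are illustrative:

```python
import numpy as np

def admittance_vf_velocity(f, d, c=0.005, c_tau=0.1):
    """Admittance-type guidance fixture: map the user's applied
    force `f` to a commanded tool velocity.  Force along the
    preferred direction `d` passes through with admittance `c`;
    the perpendicular component is scaled by `c_tau` in [0, 1].
    c_tau = 1 gives unassisted isotropic motion; c_tau = 0 gives
    a hard fixture confined to the line.  Gains are illustrative."""
    d = d / np.linalg.norm(d)
    f_par = np.dot(f, d) * d        # force along the preferred direction
    f_perp = f - f_par              # off-path component
    return c * (f_par + c_tau * f_perp)

# A 1 N push at 45 degrees to the preferred x-axis: the on-axis
# motion is preserved while the off-axis motion is scaled down tenfold.
v = admittance_vf_velocity(np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0),
                           np.array([1.0, 0.0, 0.0]))
```

Because an admittance robot is non-backdrivable and velocity-controlled, the fixture here shapes the commanded velocity directly; on an impedance-type device the equivalent effect would instead be produced by rendering guidance forces back to the operator, as in the spring laws above.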

REFERENCES

[1] B. Hannaford et al., “Stable haptic interaction with virtual environments,” IEEE Trans. Robotics and Automation, 1999.
[2] S. Payandeh et al., “On application of virtual fixtures as an aid for telemanipulation and training,” Proc. 10th Symp. on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS), 2002.
[3] J. E. Colgate et al., “Cobot implementation of virtual paths and 3D virtual surfaces,” IEEE Trans. Robotics and Automation, 2003.
[4] G. Hirzinger et al., “A sensor-based telerobotic system for the space robot experiment ROTEX,” Int. Symp. on Experimental Robotics (ISER), 1991.
[5] A. M. Okamura et al., “Virtual fixtures for bilateral telemanipulation,” 2006.
[6] O. Khatib et al., “Haptically augmented teleoperation,” Int. Symp. on Experimental Robotics (ISER), 2000.
[7] R. H. Taylor et al., “Simple biomanipulation tasks with ‘Steady Hand’ cooperative manipulator,” MICCAI, 2003.
[8] R. H. Taylor et al., “A steady-hand robotic system for microsurgical augmentation,” Int. J. Robotics Research, 1999.
[9] G. D. Hager et al., “Vision-assisted control for manipulation using virtual fixtures,” IEEE Trans. Robotics, 2001.
[10] J. E. Colgate et al., “Enhanced teleoperation for D&D,” Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), 2004.
[11] T. B. Sheridan, “Telerobotics,” Automatica, 1989.
[12] R. H. Taylor et al., “A steady-hand robotic system for microsurgical augmentation,” Int. J. Robotics Research, 1999.
[13] G. D. Hager et al., “Automatic detection and segmentation of robot-assisted surgical motions,” MICCAI, 2005.
[14] T. B. Sheridan, Telerobotics, Automation, and Human Supervisory Control, 2003.
[15] È. Coste-Manière et al., “Haptically augmented teleoperation,” Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), 2001.
[16] R. D. Howe et al., “Virtual fixtures for robotic cardiac surgery,” MICCAI, 2001.
[17] T. J. Tarn et al., “Fusion of human and machine intelligence for telerobotic systems,” Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), 1995.
[18] L. B. Rosenberg, “Virtual fixtures: Perceptual tools for telerobotic manipulation,” Proc. IEEE Virtual Reality Annual Int. Symp., 1993.