We have taken up the challenge of integrating telemanipulation technology and autonomous system technology. We are seeking methods for integration at a fundamental rather than an ad-hoc level. We believe that success in this effort can open up new space and defense applications now beyond the reach of either technology alone. In our presentation at this symposium, we introduce a series of concepts for "tele-autonomous" systems. The concepts involve new system architectures and associated new system interface controls, including "time clutches", "position clutches", and "time brakes". Taken together, the concepts enable effective, efficient intermingling of real-time cognition and manipulation tasks performed by either humans or machines. The concepts also yield simple mechanisms and protocols for handoffs of such tasks between multiple agents. In this presentation we focus primarily on the tutorial introduction of the basic tele-automation concepts. We then briefly describe our environment for exploring this new technology and the results of our initial experiments. Further details concerning tele-autonomous system architecture and our initial experimental results can be found in an attached reference [CON87].

This presentation is based on recent work described in a paper [CON87] to be published in the Proceedings of the IEEE International Conference on Robotics and Automation, March 30, 1987. A preprint of that paper is included with this AIAA/NASA/USAF Symposium preprint. Our presentation is further supplemented with a University of Michigan Robotics Research Laboratory video report [CON87a]. Copyright © 1987 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

INTRODUCTION

We are seeking simple, generic methods for intermingling and integrating telemanipulation and autonomous systems technology. Now, you might ask, why would we want to do that?
First, we wish to provide more effective systems for autonomous environmental manipulation. Consider an AI cognition system embedded within an overall perception-cognition-action system. Many tasks of interest will involve perception-cognition-action computing delays on the order of fractions of a second or seconds. How can we deal with such delays, when the basic behavioral acts to be done to complete a manipulation task themselves require only on the order of fractions of a second or seconds? Presently we require substantial environmental knowledge and then piece together preprogrammed forms of interactions to cope with such delays. When that isn't possible, we fall back on a rather halting, stumbling form of perceive-think-act cycling, where the perception-to-action delays are contained in each basic behavioral act. Could we get around this somehow? After all, animals often perform manipulations with the aid of visualizations running out just in front of their real-time actions. Could we mechanize something like that?

The second challenge is to provide protocols for the interaction between multiple autonomous manipulation agents. Consider an ALV driving down a remote road. It suddenly encounters uncertain footing, and doesn't have sufficient exploratory behaviors and learning capabilities to get itself out of trouble. We know that AI will not soon be able to handle all the cognitive tasks, and especially not all the manipulation tasks, needed to get an ALV out of this kind of trouble. But how can we enable a human to easily "slip into the cockpit" and take over in mid-maneuver?
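The "time clutch" idea developed in [CON87] can be caricatured as decoupling the operator's command time from the manipulator's execution time: with the clutch disengaged, commands are buffered ahead of real time, and the manipulator drains the buffer at its own pace. The sketch below is a minimal illustration under that reading; the class, method names, and string poses are illustrative assumptions, not the authors' implementation.

```python
import collections

class TimeClutch:
    """Illustrative sketch of a time clutch: when disengaged, operator
    commands run ahead of real time into a buffer; the manipulator
    consumes the buffer one control cycle at a time."""

    def __init__(self):
        self.engaged = True               # engaged: command time == execution time
        self.buffer = collections.deque() # pending commands when disengaged

    def operator_command(self, pose):
        if self.engaged:
            return self.execute(pose)     # synchronous telemanipulation
        self.buffer.append(pose)          # disengaged: work ahead of real time
        return None

    def step_manipulator(self):
        """Called once per real-time control cycle to catch up."""
        if self.buffer:
            return self.execute(self.buffer.popleft())
        return None

    def execute(self, pose):
        # stand-in for commanding the real (or simulated) manipulator
        return f"moving to {pose}"

clutch = TimeClutch()
clutch.engaged = False                    # disengage the time clutch
for p in ["A", "B", "C"]:
    clutch.operator_command(p)            # buffered, not yet executed
print(clutch.step_manipulator())          # manipulator catches up: moving to A
```

A "time brake" in this picture would simply halt consumption of the buffer, and re-engaging the clutch would first require the buffer to drain so the two time frames coincide again.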
[1] Lynn Conway, et al., "The MPC Adventures: Experiences with the Generation of VLSI Design and Implementation Methodologies," 1982.
[2] Richard A. Volz, et al., "Tele-autonomous systems: Methods and architectures for intermingling autonomous and telerobotic technology," Proceedings, 1987 IEEE International Conference on Robotics and Automation, 1987.
[3] Allen Newell, et al., "The psychology of human-computer interaction," 1983.
[4] Thomas B. Sheridan, et al., "Human supervisory control of robot systems," Proceedings, 1986 IEEE International Conference on Robotics and Automation, 1986.