A Hybrid Model for Situation Monitoring and Conflict Prediction in Human Supervised "Autonomous" Systems

The paper focuses on a key issue for human-supervised “autonomous” systems, namely situation monitoring. The operator’s involvement within the human-robot team is first described as the way they close the control and decision loops. Then a framework based on particle filtering and Petri nets is presented for hybrid numerical-symbolic situation monitoring and for predicting inconsistencies and conflicts within the team.

Human-robot team and the operator’s involvement

Autonomy is not an end in itself in robotic systems. Autonomy is needed because we want robots to be able to cope with mission hazards when communications with the human operator are impossible (due to communication gaps, discretion requirements, or the operator’s workload). Therefore adjustable autonomy must be considered to enable the robots to compensate for the operator’s neglect (Goodrich et al. 2001). The control of shared autonomy may be human-initiated, a priori scripted, or robot-initiated (Brookshire, Singh, & Simmons 2004). Whatever the case, implementing adjustable autonomy requires situation awareness (Endsley 2000) – including predicting what is likely to happen next – both from the operator’s and the robot’s points of view (Drury, Scholtz, & Yanco 2003).

A functional architecture that is worth considering when dealing with autonomy and the operator’s roles within a human-robot team is the double loop (Barrouil 1993), which puts the symbolic decision loop (situation monitoring and replanning) in parallel with the classical numerical loop (estimation and control), see Fig. 1. Many papers have suggested autonomy levels for robots (Huang et al. 2004), human-agent teamwork (Bradshaw et al. 2003) and UAVs1 (Clough 2002), while others have focused on the operator’s roles (Yanco & Drury 2002; Scholtz 2003). What we are suggesting here is that the operator’s involvement can be regarded as the way they “close” the loops (Fong, Thorpe, & Baur 2003).
Let us distinguish three main autonomy levels for a single robot or UAV agent:

1. No autonomy: the operator closes the numerical loop via direct perception from sensors and direct action on effectors (teleoperation).

2. Operating autonomy:
   (a) the operator closes the numerical loop via control laws (e.g. heading, slope, speed, altitude);
   (b) waypoints are defined; the operator monitors the execution (checks whether the waypoints are correctly reached) and deals with failures, i.e. embodies the whole decision loop.

3. Decisional autonomy:
   (a) waypoints are recalculated autonomously if forbidden areas appear in the course of the mission; the operator is highly involved within the decision loop, i.e. inputs area features, checks whether the recalculated waypoints are acceptable, and deals with other failures;
   (b) autonomous situation assessment and replanning are performed; the operator may close the decision loop when needed and when possible (i.e. when communications are available).

Remark 1: The operator closes the numerical or decision loop provided the required communications with the robot are available.

Remark 2: In case of emergency, a manual handover may be possible (provided the required communications are available).

Figure 1: Functional architecture of autonomous systems, the double loop (decisional autonomy (b)).

1 Uninhabited Aerial Vehicles.

Copyright © 2006, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Situation monitoring is a key issue in human-robot teams: indeed situation monitoring has to maintain both the operator’s situation awareness and situation assessment for the robot (Drury, Scholtz, & Yanco 2003), so that the task context is not lost when the robot cedes control to the human or conversely (Brookshire, Singh, & Simmons 2004). Moreover, the global human-robot team must remain predictable whatever may occur, e.g. failures, or the operator’s omissions or wrong moves.
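The loop-closure view of the autonomy levels above can be captured in a small data structure. The encoding below is purely illustrative: the field names, loop labels, and the helper `loops_closed_by_operator` are assumptions of this sketch, not definitions from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    """Who closes each loop of the double-loop architecture (illustrative encoding)."""
    name: str
    numerical_loop: str   # "operator" or "robot"
    decision_loop: str    # "operator", "shared" or "robot"

# The three main levels, with decisional autonomy split into its (a)/(b) variants.
LEVELS = [
    AutonomyLevel("teleoperation", "operator", "operator"),       # level 1
    AutonomyLevel("operating autonomy", "robot", "operator"),     # level 2
    AutonomyLevel("decisional autonomy (a)", "robot", "shared"),  # level 3a
    AutonomyLevel("decisional autonomy (b)", "robot", "robot"),   # level 3b
]

def loops_closed_by_operator(level):
    """Loops the operator closes at a given level (assuming communications hold)."""
    return [loop for loop, who in
            [("numerical", level.numerical_loop), ("decision", level.decision_loop)]
            if who in ("operator", "shared")]
```

Such a table makes the remark about communications explicit: any loop listed by `loops_closed_by_operator` is unavailable whenever the operator-robot link is down.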
Therefore what is needed is a global situation-monitoring function to track and predict the behaviour of the human-robot team, according to the human’s involvement in the control loops, and possibly to detect inconsistencies within the team. As is already clear from the double-loop representation (see Fig. 1), human-robot teams are hybrid systems, in so far as both numerical (continuous) and symbolic (discrete) parts are involved. It must be noticed that the discrete part is not a mere abstraction of the continuous part, as is the case in most of the hybrid-system literature: indeed the discrete part mostly corresponds to how the operator interacts with the robot and sets configurations or modes. What is presented in the paper is a way to estimate and predict the states of such hybrid systems through a unified model, and a way to predict conflicts (possibly leading to dangerous situations) within the team.

After an overview of hybrid-system estimation methods, the particle Petri net, a joint model for situation monitoring in hybrid systems, will be presented. Afterwards the estimation principles will be described and illustrated on the thermostat example. Finally the application of particle Petri net-based estimation to situation monitoring in human-supervised “autonomous” systems will be dealt with.

Hybrid State Estimation

Estimating the state of a hybrid system is widely studied in the literature and involves a large range of techniques, from numerical filters to network models. In (Veeraraghavan & Papanikolopoulos 2004) the estimation rests on a set of Kalman filters, each one tracking a linear mode of the system. The most probable states determine the most probable filter, and thereby the most probable mode of the system (i.e. the most probable behavior of a car, such as turning, accelerating, etc.).
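The multiple-model scheme just described (a bank of Kalman filters, one per linear mode, with the best-matching filter designating the current mode) can be sketched as follows. The scalar dynamics, noise levels, and the two vehicle modes are illustrative assumptions of this sketch, not values from the cited work.

```python
import math

class KalmanFilter1D:
    """Scalar Kalman filter for one linear mode: x' = a*x + w, z = x + v."""
    def __init__(self, a, q, r, x0=0.0, p0=1.0):
        self.a, self.q, self.r = a, q, r   # dynamics, process and measurement noise
        self.x, self.p = x0, p0            # state estimate and its variance

    def step(self, z):
        # Predict.
        x_pred = self.a * self.x
        p_pred = self.a * self.p * self.a + self.q
        # Update, returning the measurement likelihood used for mode ranking.
        s = p_pred + self.r                # innovation variance
        k = p_pred / s                     # Kalman gain
        innov = z - x_pred
        self.x = x_pred + k * innov
        self.p = (1.0 - k) * p_pred
        return math.exp(-0.5 * innov**2 / s) / math.sqrt(2.0 * math.pi * s)

def most_probable_mode(filters, z):
    """Feed measurement z to every mode filter; the best-fitting one wins."""
    likelihoods = {mode: f.step(z) for mode, f in filters.items()}
    return max(likelihoods, key=likelihoods.get)

# Two hypothetical behaviors of a vehicle: constant speed vs. acceleration.
filters = {"cruise": KalmanFilter1D(a=1.0, q=0.01, r=0.1, x0=10.0),
           "accelerate": KalmanFilter1D(a=1.05, q=0.01, r=0.1, x0=10.0)}
for z in [10.5, 11.1, 11.6, 12.2]:         # speed measurements drifting upward
    mode = most_probable_mode(filters, z)
```

Here each filter maintains its own estimate under its own dynamics; the mode whose predicted measurement best explains the observation is reported, which is the essence of the bank-of-filters approach.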
In the same way, (Koutsoukos, Kurien, & Zhao 2003) propose an estimator based both on hybrid automata to represent the mode evolution and on a particle filter to estimate the continuous state of the system. The estimated mode is then the most probable mode of the system with respect to the estimated continuous states. A similar principle is applied in (Hofbaur & Williams 2002). A drawback of these approaches is that an analysis of the consistency of the discrete and continuous states is difficult to perform, as the estimates of the discrete and continuous states are aggregated within the probability distribution.

In the same way, the analysis of conflicts, or conversely of consistency, is mainly based on the study of continuous variables. In (Benazera & Travé-Massuyès 2003) the hybrid system must satisfy constraints that are checked on the continuous estimated states of the system. (Del Vecchio & Murray 2004) use lattices to identify the discrete mode of a hybrid system when only continuous variables are observed. In (Tomlin et al. 2003), a reachability analysis of the continuous states, based on hybrid automata, identifies safe and dangerous behaviors of the system and is applied to an aircraft collision problem. Nielsen and Jensen (Nielsen & Jensen 2005) define a conflict measure on the estimated state of a Bayesian network; nevertheless this method still suffers from the need to define a measure on totally uncertain states, and from the fact that the conflict measure is continuous, which leads to a threshold effect. In (Lesire & Tessier 2005) an aircraft procedure and the pilot’s actions are jointly modeled using a particle Petri net, which allows the procedure to be simulated with a Monte-Carlo method and the results to be analysed using the Petri net properties. However only the nominal procedure is modeled, and the analysis is based on qualitative properties and does not involve any continuous measure.
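A minimal bootstrap particle filter in the spirit of the approaches above, where each particle carries both a discrete mode and a continuous state, can be sketched on a toy thermostat. The modes, switching thresholds, and noise levels below are assumptions of this sketch, not the estimators of the cited papers.

```python
import math
import random

# Hypothetical two-mode thermostat: "heat" raises the temperature, "off" lets it cool.
MODES = {"heat": +0.5, "off": -0.3}   # assumed temperature drift per time step

def step_particle(mode, temp):
    """Propagate one (mode, temperature) particle: switch mode, then drift + noise."""
    if mode == "heat" and temp > 21.0:
        mode = "off"
    elif mode == "off" and temp < 19.0:
        mode = "heat"
    return mode, temp + MODES[mode] + random.gauss(0.0, 0.05)

def pf_step(particles, z, sigma=0.5):
    """One bootstrap step: propagate, weight by measurement likelihood, resample."""
    moved = [step_particle(m, t) for m, t in particles]
    weights = [math.exp(-0.5 * ((t - z) / sigma) ** 2) for _, t in moved]
    return random.choices(moved, weights=weights, k=len(moved))

def estimate(particles):
    """Estimated mode = majority mode; continuous estimate = mean temperature."""
    modes = [m for m, _ in particles]
    mode = max(set(modes), key=modes.count)
    return mode, sum(t for _, t in particles) / len(particles)

random.seed(0)
particles = [("heat", 19.0 + random.random()) for _ in range(200)]
for z in [19.6, 20.1, 20.7, 21.2]:    # warming temperature measurements
    particles = pf_step(particles, z)
mode, temp = estimate(particles)
```

As noted above, this aggregation is precisely the weakness discussed in the text: the discrete and continuous estimates are fused into one particle cloud, so checking their mutual consistency after the fact is difficult.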
The monitoring system presented in this paper is based on the latter work: it allows both the estimation to be computed and the consistency of the estimated states to be analysed without defining a priori measures on unknown states. Indeed it is the structure of the Petri net-based model itself that allows the consistency to be checked. The next section recalls the main definitions of particle Petri nets.

References

[1] R. David and H. Alla. Discrete, Continuous, and Hybrid Petri Nets. Springer, 2004.
[2] T. D. Nielsen and F. V. Jensen. Alert systems for production plants: a methodology based on conflict analysis. ECSQARU, 2005.
[3] D. Schreckenghost et al. Augmenting automated control software to interact with multiple humans. 2004.
[4] J. L. Drury, J. Scholtz, and H. A. Yanco. Awareness in human-robot interactions. IEEE International Conference on Systems, Man and Cybernetics, 2003.
[5] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 2002.
[6] J. Scholtz. Theory and evaluation of human robot interactions. 36th Hawaii International Conference on System Sciences, 2003.
[7] M. A. Goodrich et al. Experiments in adjustable autonomy. IEEE International Conference on Systems, Man and Cybernetics, 2001.
[8] M. R. Endsley. Theoretical underpinnings of situation awareness: a critical review. 2000.
[9] H. A. Yanco and J. L. Drury. A taxonomy for human-robot interaction. 2002.
[10] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. UAI, 2000.
[11] C. J. Tomlin, I. Mitchell, A. M. Bayen, and M. Oishi. Computational techniques for the verification of hybrid systems. Proceedings of the IEEE, 2003.
[12] J. M. Bradshaw, M. Sierhuis, et al. Adjustable autonomy and human-agent teamwork in practice: an interim report on space applications. 2003.
[13] U. Lerner and R. Parr. Inference in hybrid networks: theoretical limits and practical algorithms. UAI, 2001.
[14] H. Veeraraghavan and N. Papanikolopoulos. Combining multiple tracking modalities for vehicle tracking at traffic intersections. ICRA, 2004.
[15] J. Brookshire, S. Singh, and R. Simmons. Preliminary results in sliding autonomy for coordinated teams. 2004.
[16] H.-M. Huang, J. S. Albus, et al. Autonomy measures for robots. 2004.
[17] X. Koutsoukos, J. Kurien, and F. Zhao. Estimation of distributed hybrid systems using particle filtering methods. HSCC, 2003.
[18] B. T. Clough. Metrics, schmetrics! How the heck do you determine a UAV's autonomy anyway? 2002.
[19] D. Pacholczyk. A symbolic approach to uncertainty management. Applied Intelligence, 2000.
[20] D. Del Vecchio and R. M. Murray. Discrete state estimators for a class of hybrid systems on a lattice. HSCC, 2004.
[21] T. Fong, C. Thorpe, and C. Baur. Robot, asker of questions. Robotics and Autonomous Systems, 2003.
[22] E. Benazera and L. Travé-Massuyès. The consistency approach to the on-line prediction of hybrid system configurations. ADHS, 2003.
[23] H. Hexmoor, C. Castelfranchi, and R. Falcone (eds.). Agent Autonomy. 2003.
[24] M. W. Hofbaur and B. C. Williams. Mode estimation of probabilistic hybrid systems. HSCC, 2002.
[25] S. Benferhat, S. Lagrue, and O. Papini. Revision of partially ordered information: axiomatization, semantics and iteration. IJCAI, 2005.
[26] C. Lesire and C. Tessier. Particle Petri nets for aircraft procedure monitoring under uncertainty. ICATPN, 2005.