A synergy of agent components: social comparison for failure detection

1 Overview

Recently, encouraging progress has been made in integrating independent components into complete agents for real-world environments. While such systems demonstrate component integration, they often do not explicitly exploit synergistic interactions, in which each component functions beyond its original capabilities because of the presence of the others. This abstract presents an implemented illustration of such explicit component synergy and its usefulness in dynamic multi-agent environments. In such environments, agents typically require three key capabilities: (a) collaborating with other agents (teamwork), (b) monitoring their own progress (execution monitoring), and (c) modeling other agents' beliefs and goals (agent modeling). These capabilities are generally developed independently and integrated into a single system so that each component operates in isolation from the others; for example, monitoring techniques do not take into account the modeled plans of other agents. In contrast, we highlight a synergy between these three agent components that significantly improves the capabilities of each: (a) the collaboration component constrains the search space for the agent-modeling component via the maintenance of mutual beliefs, enabling better modeling; (b) the modeling and collaboration components enable SOCFAD (Social Comparison for Failure Detection), a novel execution-monitoring technique that uses other agents to detect and diagnose failures (the focus of this abstract); and (c) the monitoring component, using SOCFAD, detects failures in individual performance that affect coordination, and allows the collaboration component to replan.

SOCFAD addresses the well-known problem of agent execution monitoring in complex dynamic environments, e.g., [4]. This problem is exacerbated in multi-agent environments by the added requirements of coordination. The complexity and unpredictability of these environments cause an explosion in the size of the state space, making it infeasible for any designer to enumerate the correct response to each possible state in advance. For instance, it is generally difficult to predict when communication messages will be lost, when sensors will return unreliable readings, etc. Agents are therefore presented with countless opportunities for failure, and must detect and recover from them autonomously. To detect failures, an agent must have information about the ideal behavior expected of it; this ideal is compared against the agent's actual behavior to detect discrepancies indicating possible failure. Previous approaches to this problem (e.g., [4]) have focused on the designer or planner supplying the agent with redundant information, either in the form of explicitly specified execution-monitoring conditions or a model of the agent itself that can be used for comparison. …
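To make the comparison step at the heart of SOCFAD concrete, the following Python sketch shows one way an agent might compare the team operator it believes is in force against the operators it attributes to its teammates via agent modeling, flagging any mismatch as a possible failure. This is a minimal illustration under our own assumptions, not the authors' implementation; all class, agent, and operator names (Agent, TeammateModel, "fly-in-formation", "engage-enemy") are hypothetical.

```python
# Hypothetical sketch of social comparison for failure detection.
# The agent compares its own view of the joint activity against the
# views it attributes to teammates; a disagreement suggests a failure
# (e.g., a lost message or an unreliable sensor reading).
from dataclasses import dataclass, field


@dataclass
class TeammateModel:
    name: str
    inferred_team_operator: str  # team operator the teammate appears to be executing


@dataclass
class Agent:
    name: str
    own_team_operator: str  # operator this agent believes the team is executing
    teammate_models: list = field(default_factory=list)

    def detect_failures(self):
        """Return the teammate models that disagree with this agent's own
        view of the team operator; an empty list means no discrepancy."""
        return [
            m for m in self.teammate_models
            if m.inferred_team_operator != self.own_team_operator
        ]


# Usage (hypothetical domain): the agent believes the team is still flying
# in formation, but its models of two teammates suggest they have switched
# to engaging the enemy, so a possible failure is reported.
me = Agent(
    name="pilot-1",
    own_team_operator="fly-in-formation",
    teammate_models=[
        TeammateModel("pilot-2", "engage-enemy"),
        TeammateModel("pilot-3", "engage-enemy"),
    ],
)
for m in me.detect_failures():
    print(f"possible failure: {m.name} appears to execute {m.inferred_team_operator!r}")
```

The point of the sketch is that the "ideal" behavior used for comparison is not supplied in advance by the designer; it is reconstructed at run time from models of other agents, which is what distinguishes this style of monitoring from the explicitly specified monitoring conditions of prior approaches such as [4].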

[1] Milind Tambe. Agent Architectures for Flexible, Practical Teamwork. 1997.

[2] Allen Newell. Unified Theories of Cognition. Harvard University Press, 1990.

[3] Milind Tambe. Agent Architectures for Flexible, Practical Teamwork. AAAI/IAAI, 1997.

[4] Brian C. Williams and P. Pandurang Nayak. A Model-Based Approach to Reactive Self-Configuring Systems. AAAI/IAAI, Vol. 2, 1996.

[5] Milind Tambe. Tracking Dynamic Team Activity. AAAI/IAAI, Vol. 1, 1996.