Introduction to the special issue on agent autonomy in groups

This special issue of Connection Science features ten papers on agent autonomy. This introduction describes the motivation for the special issue and briefly overviews the contributions. The papers in this volume are revised and extended versions of selected papers from a workshop held in July 2002 in Edmonton, Canada, in conjunction with the AAAI 2002 conference. That workshop followed an IJCAI 2001 workshop with the same title.

Autonomy is a characterizing notion of agents, but no one-size-fits-all definition exists. The desire to build agents that exhibit a satisfactory quality of autonomy has encompassed agents that have a long life, are highly independent, can harmonize their goals and actions with humans and other agents, and are generally socially adept. In 2002, the focus of our workshop was not only to continue exploring the salient social notions in agent interaction that involve autonomy, but also to examine how social networks scale to inter-group interactions. We explored theories that synthesize inter-agent interaction into unified models, as well as derived and implied attitudes that go beyond immediate, direct inter-agent attitudes and that play a large role in the balance of attitudes among agents in a group. As in 2001, participants included researchers in multiagent systems as well as in human-agent and human-robot interaction. In agent-agent interaction, agents are designed to change their interaction in order to optimize local qualities, such as cost, or system qualities, such as coherence. In organized groups, agents are designed to model the organizational structure, and the concerns are mostly with deontic concepts. It is clearly important for agents to understand human preferences and guidance in complex systems. This involves many issues, from the ability to comprehend delegation expressed in natural language to the understanding of human emotions. Presentations covered several space systems and a large naval application.
Discussions of applied research motivated the need for agents to explicitly reason about autonomy and delegation. Although there is a need for increased autonomy on the part of agents, there are times when autonomy is harmful. For instance, when agents are fully autonomous and their actions are uninterruptible, humans in the loop may come to harm because they cannot interrupt the agents' actions. Also, if agents have a negative influence on one another, their independence may detract from harmonious interaction. When tasks are coupled among agents, cooperating agents need to take one another's actions into account rather than act purely out of self-interest. This workshop has contributed to the understanding of social agent interactions.

Jean-Claude Martin (2002, this issue) introduces TYCOON, a framework for the analysis of human verbal and non-verbal behaviour. The system is used to make sense of multimodal human-computer interaction and human-human communication. This work is similar to digital ethnography, which allows the study of intensely interactive collaboration at the level of both language and physical coordination in time and space (Hutchins, 1995). The project is ongoing, and its agents could be endowed with more autonomy by spontaneously volunteering information to participants.

Jean-Michel Hoc (2002, this issue) presents issues of dynamic allocation of activities in cooperation between human operators and autonomous machines in complex environments. He suggests that the decomposition of the overall task into subtasks needs to be considered differently for humans and automated agents. He illustrates his argument with examples from a series of studies on human-machine cooperation in air traffic control. Among his main points are that (a) tasks should be defined with intentions, (b) mutual monitoring between humans and agents is needed, and (c) knowledge and plans should be shared among humans and agents.
McCauley and Franklin (2002, this issue) present a massive real-world MAS, comprising approximately 350,000 individual agents, that addresses a US Navy problem: assigning jobs to sailors at the end of each sailor's tour of duty. They have developed a cognitive agent capable of reasoning about the autonomy of sailors and Navy human detailers. They go on to discuss major issues regarding the design, interaction, and autonomy of the various agents involved.

Castelfranchi and Falcone (2002, this issue) suggest relationships between trust and control in agents. They claim that the basic form of dyadic trust between two agents is the opposite of the notion of dyadic control between those agents. However, the more general notion of trust relies on control between agents. Several other nuances of the interaction between control and trust are explained.

Barber and MacMahon (2002, this issue) consider group formation that specifies and optimizes the allocation of decision-making and action-execution responsibilities for a set of goals among agents within a MAS. They present an analysis of the space of decision-making and of adaptations in decision-making. This work helps us reason about improvements in organizational capabilities for decision-making and suggests changes in the structure of the organization.

Schillo (2002, this issue) explores the relationship between self-organization of multiagent systems and adjustable autonomy in intelligent agents. His analysis pivots on the notion of delegation in order to define organizational relationships, and he distinguishes task delegation from social delegation. He describes several organizational models in which autonomy diminishes as structure increases.

Schreckenghost, Martin, Bonasso, Kortenkamp, Milam, and Thronesbery (2002, this issue) present another real-world MAS, from a NASA mission.
This system supports collaboration among heterogeneous agents while they operate remotely and communicate asynchronously with multiple humans in the control loop. The authors identify research issues in groups whose members have non-overlapping roles and are guided by high-level plans. A notable observation is that in space-based operations dynamic reconfiguration of teams is not common.

This author (Hexmoor, 2002, this issue) presents a model that relates autonomy and power, and goes on to discuss group effects that amplify the individual notions of power and autonomy. A task-allocation algorithm is discussed that shows one use of grouping agents into power groups. This work is a step toward definitions of autonomy and power that apply to a group, i.e., collective autonomy and collective power.

Kar, Dutta, and Sen (2002, this issue) illustrate the effectiveness of probabilistic reciprocity for promoting cooperation among agents. This extends their earlier work on reciprocity between individual agents. In the extension, group members offer opinions about the balance of their past interactions with individuals in other groups, and the group must decide based on the collective opinions of its members. Group interactions diminish when agents lie about their balances; however, under certain patterns of group selection, lying is ineffective.

O'Hare (2002, this issue) explores virtual agent communities and identifies issues that underpin social cohesion. Among other points, he highlights the importance of awareness and presence in collaboration. Several domains are examined, spanning robotics, mobile/wearable platforms, and tour guide avatars.