A large-scale team of intelligent members following derived heuristics often performs suboptimally, and providing more accurate heuristics may be infeasible or even impossible. We propose an approach that incorporates a human advisor interacting with a multiagent team, in which the advice the human gives becomes a component of the heuristic each agent already uses. This approach gives us a clear way to adjust the team's level of autonomy, addresses the question of which team members are affected by the advice, and factors advice in immediately, while the team is still performing. We first examined data from previous experiments with an advisor (a fire chief) in a disaster-rescue simulation, and then applied our approach to a domain in which robots maintain a room full of sensors, studying different advisors and varying how much the team members listened to them. Our initial results show that having a multiagent team interpret human advice is a much harder problem than we first thought. Our hypothesis is that human advice helps when the human can provide strategic advice previously unknown to the agents; yet that strategic advice must still be given in a form the agents can understand through their limited methods of communication.
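One natural reading of "advice as a component of the heuristic each agent already uses" is a weighted blend between an agent's own scoring function and an advisor-supplied bias. The sketch below illustrates that idea in the sensor-maintenance setting; all names (`base_heuristic`, `advisor_bias`, `advice_weight`) and the linear-blend form are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: folding human advice into an agent's existing heuristic.
# The linear blend and all parameter names are assumptions for exposition.

def base_heuristic(agent_pos, sensor_pos, urgency):
    """The agent's own derived heuristic: prefer urgent, nearby sensors."""
    distance = abs(agent_pos[0] - sensor_pos[0]) + abs(agent_pos[1] - sensor_pos[1])
    return urgency / (1 + distance)

def advised_score(agent_pos, sensor_pos, urgency, advisor_bias, advice_weight):
    """Advice enters as one weighted component of the heuristic the agent
    already uses. advice_weight in [0, 1] controls how much the agent
    'listens', giving a direct knob on the team's level of autonomy."""
    own = base_heuristic(agent_pos, sensor_pos, urgency)
    return (1 - advice_weight) * own + advice_weight * advisor_bias
```

With `advice_weight = 0` the agent is fully autonomous; with `advice_weight = 1` it follows the advisor's ranking outright, so varying this weight corresponds to varying how much the team members listened.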