An experts approach to strategy selection in multiagent meeting scheduling

In the multiagent meeting scheduling problem, agents negotiate with each other on behalf of their users to schedule meetings. While a number of negotiation approaches have been proposed for scheduling meetings, it is not well understood how agents can negotiate strategically in order to maximize their users' utility. To negotiate strategically, agents need to learn to select good strategies for negotiating with other agents. In this paper, we show how agents can learn online to negotiate strategically in order to better satisfy their users' preferences. We outline the applicability of experts algorithms to the problem of learning to select negotiation strategies, and in particular we show how two different experts approaches, plays [14] and Exploration-Exploitation Experts (EEE) [10], can be adapted to the task. We demonstrate experimentally the effectiveness of our approach for learning to negotiate strategically.
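
To make the strategy-selection loop concrete, the sketch below shows a minimal experts-style selector in Python. It is an illustration only, not the paper's implementation: the strategy names are hypothetical, and the phase-based averaging is a simplified stand-in for the EEE approach of [10], which balances exploring candidate strategies against exploiting the one with the best observed average utility.

```python
import random

class StrategySelector:
    """Simplified experts-style selector, loosely in the spirit of EEE [10]:
    each negotiation strategy is treated as an expert, followed for a phase
    of meetings, and credited with the average utility observed while it
    was in control. This is a sketch, not the algorithm from the paper."""

    def __init__(self, strategies, explore_prob=0.1, phase_length=5):
        self.strategies = strategies        # candidate negotiation strategies
        self.explore_prob = explore_prob    # chance of trying a non-greedy strategy
        self.phase_length = phase_length    # meetings negotiated per phase
        self.avg_reward = {s: 0.0 for s in strategies}
        self.num_phases = {s: 0 for s in strategies}

    def pick_strategy(self):
        """Choose the strategy to follow for the next phase."""
        if random.random() < self.explore_prob:
            return random.choice(self.strategies)  # explore
        # Exploit: follow the strategy with the best empirical average so far.
        return max(self.strategies, key=lambda s: self.avg_reward[s])

    def update(self, strategy, phase_utilities):
        """Fold the utilities earned during the phase into the running
        average for the strategy that was followed."""
        phase_avg = sum(phase_utilities) / len(phase_utilities)
        n = self.num_phases[strategy] + 1
        self.avg_reward[strategy] += (phase_avg - self.avg_reward[strategy]) / n
        self.num_phases[strategy] = n
```

A hypothetical usage, with made-up strategy names and utilities: the agent picks a strategy, negotiates a phase of meetings with it, and reports the user's utility for each scheduled meeting back to the selector.

```python
selector = StrategySelector(["offer-earliest", "offer-preferred", "compromise"])
strategy = selector.pick_strategy()
# ... negotiate selector.phase_length meetings using `strategy`,
#     recording the user's utility for each outcome ...
selector.update(strategy, [0.8, 0.6, 0.9, 0.7, 0.8])
```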

[1] Jeffrey S. Rosenschein et al., "A Non-manipulable Meeting Scheduling System," 1994.

[2] Nimrod Megiddo et al., "How to Combine Expert (and Novice) Advice when Actions Impact the Environment?," NIPS, 2003.

[3] Manfred K. Warmuth et al., "The Weighted Majority Algorithm," Information and Computation, 1994.

[4] Manuela Veloso et al., "Opportunities for Learning in Multi-Agent Meeting Scheduling," AAAI Technical Report, 2004.

[5] Toramatsu Shintani et al., "Multiple negotiations among agents for a distributed meeting scheduler," Proceedings of the Fourth International Conference on MultiAgent Systems, 2000.

[6] Edmund H. Durfee et al., "A Formal Study of Distributed Meeting Scheduling," 1998.

[7] Nimrod Megiddo et al., "Combining expert advice in reactive environments," Journal of the ACM, 2006.

[8] Katia Sycara et al., "Multi-Agent Meeting Scheduling: Preliminary Experimental Results," 1996.

[9] Nicholas R. Jennings et al., "Agent-based meeting scheduling: a design and implementation," 1995.

[10] Nimrod Megiddo et al., "Exploration-Exploitation Tradeoffs for Experts Algorithms in Reactive Environments," NIPS, 2004.

[11] Yoram Singer et al., "Using and combining predictors that specialize," STOC, 1997.

[12] Manuela M. Veloso et al., "Bumping Strategies for the Private Incremental Multiagent Agreement Problem," AAAI Spring Symposium: Persistent Assistants: Living and Working with AI, 2005.

[13] Michael H. Bowling et al., "Convergence and No-Regret in Multiagent Learning," NIPS, 2004.

[14] Brett Browning et al., "Plays as Effective Multiagent Plans Enabling Opponent-Adaptive Play Selection," ICAPS, 2004.

[16] Manuela M. Veloso et al., "Learning to Select Negotiation Strategies in Multi-agent Meeting Scheduling," EPIA, 2005.

[17] Manuela M. Veloso et al., "Learning dynamic preferences in multi-agent meeting scheduling," IEEE/WIC/ACM International Conference on Intelligent Agent Technology, 2005.

[18] Nicolò Cesa-Bianchi et al., "Gambling in a rigged casino: The adversarial multi-armed bandit problem," Proceedings of the 36th Annual IEEE Symposium on Foundations of Computer Science, 1995.