Learning Decision Trees for Action Selection in Soccer Agents

In highly dynamic domains such as robotic soccer, agents must act rapidly, often within a fraction of a second. Notwithstanding a possible longer-term planning component, this requires some form of reactive action selection mechanism. In this paper we report on results employing decision-tree learning to provide a ball-possessing soccer agent in the Simulation League with such a mechanism. The approach has paid off in at least two ways. For one, the resulting decision tree applies to a much larger set of game situations than those previously reported and performs well in practice. For another, the learning method yielded a set of qualitative features for classifying game situations, which are useful beyond reactive decision making.
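To illustrate the general technique, the sketch below induces a small decision tree for action selection with a minimal ID3-style, information-gain learner in the spirit of Quinlan's C4.5. The feature names (`goal_dist`, `opp_close`, `mate_free`), their qualitative values, the action set, and the toy training data are all hypothetical stand-ins, not the paper's actual features or implementation.

```python
import math
from collections import Counter

# Toy training data over hypothetical qualitative features (the paper's
# actual feature set and actions are not reproduced here).
EXAMPLES = [
    ({"goal_dist": "near", "opp_close": "no",  "mate_free": "no"},  "shoot"),
    ({"goal_dist": "near", "opp_close": "no",  "mate_free": "yes"}, "shoot"),
    ({"goal_dist": "near", "opp_close": "yes", "mate_free": "yes"}, "pass"),
    ({"goal_dist": "far",  "opp_close": "yes", "mate_free": "yes"}, "pass"),
    ({"goal_dist": "far",  "opp_close": "no",  "mate_free": "no"},  "dribble"),
    ({"goal_dist": "far",  "opp_close": "yes", "mate_free": "no"},  "dribble"),
]

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, attributes):
    """Minimal ID3: greedily split on the attribute with the highest information gain."""
    if len(set(labels)) == 1:          # pure node: emit the action as a leaf
        return labels[0]
    if not attributes:                 # no attributes left: majority action
        return Counter(labels).most_common(1)[0][0]

    def gain(attr):
        remainder = sum(
            len(sub) / len(labels) * entropy(sub)
            for value in {r[attr] for r in rows}
            for sub in [[l for r, l in zip(rows, labels) if r[attr] == value]]
        )
        return entropy(labels) - remainder

    attr = max(attributes, key=gain)
    rest = [a for a in attributes if a != attr]
    branches = {}
    for value in {r[attr] for r in rows}:
        pairs = [(r, l) for r, l in zip(rows, labels) if r[attr] == value]
        srows, slabels = zip(*pairs)
        branches[value] = build_tree(list(srows), list(slabels), rest)
    return {"attr": attr, "branches": branches}

def classify(tree, situation):
    """Reactive action selection: walk the tree until an action leaf is reached."""
    while isinstance(tree, dict):
        tree = tree["branches"][situation[tree["attr"]]]
    return tree

rows, labels = zip(*EXAMPLES)
tree = build_tree(list(rows), list(labels), ["goal_dist", "opp_close", "mate_free"])

# Selecting an action for a new game situation:
print(classify(tree, {"goal_dist": "near", "opp_close": "no", "mate_free": "yes"}))  # shoot
```

Because the learned tree reduces to a cascade of tests on qualitative features, classification is a handful of dictionary lookups, which is what makes this kind of selector fast enough for sub-second reactive decisions.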
