Distributed play-based approaches have been proposed as an effective means of switching strategies during the course of timed, zero-sum games, such as robot soccer. However, play-based approaches have not yet been rigorously evaluated in a full robot soccer scenario. In this paper, we perform an extensive empirical analysis of play effectiveness with teams of robots. We show that the choice of play has a significant effect on performance against opponents in real robot soccer games. Our analysis further identifies the problem of distributed play recognition: classifying the strategy being played by the opponent team. Play recognition in real robot soccer is particularly challenging because our observations are only “incidental”; that is, the primary task of our team is to play soccer, not to explicitly observe members of the other team. Despite these challenges, we achieve high classification accuracy in the robot soccer domain. To do so, our team maintains a history of joint observations, including team positions, opponent positions, and ball positions, and uses hidden Markov models to recognize opponent plays.
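To make the recognition step concrete, below is a minimal sketch of this kind of HMM-based play recognizer: each candidate opponent play is modeled by its own hidden Markov model, and the observed history is classified by picking the play whose model assigns it the highest likelihood. This is an illustration under simplifying assumptions (discretized observation symbols rather than raw positions; the function names, play names, and all numbers are hypothetical), not the paper's actual models.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the standard scaled forward algorithm.
    pi: (N,) initial state distribution; A: (N, N) transitions;
    B: (N, M) emission probabilities; obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    log_like = 0.0
    for t in obs[1:]:
        scale = alpha.sum()          # rescale to avoid underflow
        log_like += np.log(scale)
        alpha = (alpha / scale) @ A * B[:, t]
    return log_like + np.log(alpha.sum())

def classify_play(obs, models):
    """Maximum-likelihood classification: return the play whose HMM
    best explains the observed history."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Toy example (hypothetical numbers): two plays over 2 hidden states and
# 3 observation symbols, e.g. ball in the left / middle / right third.
pi = np.array([0.6, 0.4])
models = {
    "attack": (pi,
               np.array([[0.9, 0.1], [0.2, 0.8]]),               # transitions
               np.array([[0.1, 0.2, 0.7], [0.3, 0.4, 0.3]])),    # emissions
    "defend": (pi,
               np.array([[0.7, 0.3], [0.3, 0.7]]),
               np.array([[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]])),
}
obs = np.array([2, 2, 1, 2])   # ball mostly in the attacking third
print(classify_play(obs, models))  # -> "attack"
```

In practice the observation symbols would be derived from the joint history the team maintains (team, opponent, and ball positions), and the per-play models would be trained from logged games rather than written by hand.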