The scoring procedure of a research competition influences the usefulness of its results in real-world applications

In the 2006 RoboCup Virtual Rescue competition, teams from different research labs developed methods for controlling teams of mobile robots in a simulated urban search and rescue scenario. This paper reviews the strategies and scores of the top six competitors. The scoring procedure used in this inaugural competition rewards participants for the number of victims found, the amount of area explored, and the quality of the maps created by the robot teams, and penalizes them for colliding with victims or relying on human operators. Analysis of the strategies and scores suggests that the scoring procedure may lead teams to adopt strategies that are inconsistent with the needs of a real search and rescue scenario. Individual robot contributions to the system were reviewed to account for the costs of adding each robot to the environment, indicating that value added per robot is an important but overlooked measure. Analysis of the impact of human operator penalties on scoring revealed an overemphasis on fully autonomous robotic systems. The analysis also revealed substantial performance variation depending on which behavior was rewarded, which may indicate a lack of focus in the performance measures used to evaluate robotic urban search and rescue systems. The competition has the potential to yield influential research in this area if a scoring procedure that reflects actual research needs is implemented. To ensure that research gains made through the competition process are useful to the application community, the rules must be tuned to the application's needs. As competitions and games become a growing part of the research community, this sensitivity will likely need to be managed alongside the other political, social, and interactive demands involved in setting rules for research competitions.
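
To make the incentive structure under discussion concrete, the sketch below shows one possible composite scoring function of the general form described above: rewards for victims found, area explored, and map quality, and penalties for victim collisions and reliance on human operators. The function name competition_score, the numerical weights, and the operator-count divisor are illustrative assumptions for this sketch, not the official 2006 competition rules.

# Illustrative sketch of a composite rescue-competition scoring function.
# The weights and the operator penalty term are hypothetical assumptions,
# NOT the official 2006 RoboCup Virtual Rescue scoring rules.

def competition_score(victims_found: int,
                      area_explored_m2: float,
                      map_quality: float,       # judged quality on a 0.0-1.0 scale (assumed)
                      victim_collisions: int,
                      human_operators: int) -> float:
    """Combine rewards and penalties into a single score (hypothetical weights)."""
    reward = (50.0 * victims_found        # reward per victim found (assumed weight)
              + 0.1 * area_explored_m2    # reward per square meter explored (assumed weight)
              + 30.0 * map_quality)       # reward for map quality (assumed weight)
    penalty = 10.0 * victim_collisions    # penalty per collision with a victim (assumed weight)
    # Dividing by the number of operators models the penalty for relying on human
    # operators and the resulting bias toward full autonomy noted in the paper.
    return (reward - penalty) / max(1, human_operators)


if __name__ == "__main__":
    # A teleoperated team that finds more victims can still score lower than a
    # fully autonomous team, illustrating the incentive structure under analysis.
    print(competition_score(4, 800.0, 0.7, 1, 2))  # teleoperated team, two operators
    print(competition_score(2, 500.0, 0.5, 0, 0))  # fully autonomous team

Under these assumed weights the autonomous team outscores the teleoperated one despite finding fewer victims, which is the kind of strategy distortion the analysis attributes to the operator penalty.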