Current-generation online games typically incorporate a “computer” opponent to train new players before they compete against humans. The quality of this training depends largely on how closely the computer’s play resembles that of an experienced human player. For instance, inhuman weaknesses in computer play encourage new players to develop tactics, prediction rules, and playing styles that will be ineffective against people. Game designers often compensate for weaknesses in the computer’s play by granting it superhuman capabilities such as omniscience. However, such abilities render otherwise important tactics ineffective and thus discourage players from developing useful skills. These differences are especially pronounced in “real-time strategy” games such as StarCraft, where tactics are often designed to exploit specific human limitations. An informal survey of experienced StarCraft players reveals numerous play-critical differences between human and computer performance. In this paper, we identify several of these differences and then discuss a prototyping tool for constructing appropriately humanlike software agents.