Agents in tank battle simulations

Networks of computers can be used to produce a distributed virtual environment (DVE) in which multiple participants interact. This technology is extremely attractive to the military as a basis for training simulations. Using mock-up vehicles and high-fidelity visual systems, trainees get a window onto a virtual world populated by simulated vehicles interacting over a realistic terrain surface. Some of these vehicles are controlled by human trainees, others by computers, and it is essential that trainees find the behavior of the computer-controlled vehicles realistic.

Currently most computer forces are semiautomated, using finite state machines or rule bases to govern their behavior but requiring constant supervision by a human controller [1]. As AI and agent techniques develop, however, the vehicles can become increasingly autonomous, reducing the number of human controllers and the hefty manpower bill associated with running large training simulations [2, 3].

We are concentrating on developing agents to control tanks within ground battle simulations. Here, tactical behavior is governed by two main factors: the terrain over which the tanks are moving and their beliefs about the enemy. In trying to produce battlefield behavior that mimics a human tactician, it is advantageous to model the command structure used by the army. This helps with the gathering of knowledge from subject matter experts and enables a hierarchical decomposition of the problem. The figure appearing in this sidebar shows the hierarchy of agents—high-level commanders are given objectives that are used to produce lower-level objectives for their subordinates. Information flows both up and down the command chain, and agents need to cooperate with their peers to achieve the overall goal set by their commander.
This natural decomposition of the problem allows higher-level agents to work on long-term plans while the individual tank agents carry out orders designed to achieve more immediate objectives.
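The hierarchical decomposition described above can be sketched in code. The following is a minimal illustrative example, not the authors' implementation: all class names, the objective strings, and the sector-based decomposition rule are hypothetical, invented only to show how a commander agent might translate its objective into sub-objectives for subordinates while reports flow back up the chain.

```python
# Hypothetical sketch of a two-level command hierarchy: a commander
# agent decomposes its objective into per-subordinate orders, and
# status reports flow back up the command chain.

class TankAgent:
    """Leaf agent: drives a single tank toward an immediate objective."""

    def __init__(self, name):
        self.name = name
        self.objective = None

    def receive_objective(self, objective):
        self.objective = objective

    def report(self):
        # A real agent would summarize sensed terrain and enemy contacts.
        return f"{self.name}: pursuing '{self.objective}'"


class CommanderAgent:
    """Higher-level agent: plans long-term and delegates to subordinates."""

    def __init__(self, name, subordinates):
        self.name = name
        self.subordinates = subordinates

    def receive_objective(self, objective):
        # Decompose the objective into lower-level orders (illustrative
        # rule: assign each subordinate a numbered sector of the task).
        for i, sub in enumerate(self.subordinates):
            sub.receive_objective(f"{objective} / sector {i}")

    def gather_reports(self):
        # Information flows up the chain as well as down.
        return [sub.report() for sub in self.subordinates]


platoon = CommanderAgent("platoon", [TankAgent("tank-1"), TankAgent("tank-2")])
platoon.receive_objective("seize ridge")
print(platoon.gather_reports())
```

In a fuller simulation the hierarchy would be deeper (company, platoon, individual tank), and each level would replan as reports from below revise its beliefs about the enemy.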