Adaptive AI for Fighting Games

Traditionally, AI research for games has focused on developing static strategies: fixed mappings from the game state to a set of actions, chosen to maximize the probability of victory. This approach works well for discovering facts about the game itself, and it has been applied very successfully to combinatorial games such as chess and checkers, where the personality and play style of the opponent take a backseat to the mathematical problem posed by the game. It is also the most widely used kind of AI in commercial video games today [1]. While these algorithms can often fight effectively, they tend to become repetitive and transparent to players because their strategies never change. Even the most advanced AI systems for complex games often have a hole in their programming: a simple strategy that can be repeated over and over to remove any challenge the AI opponent poses. Because their algorithms are static and inflexible, such AIs are incapable of adapting to counter these strategies and restoring the balance of the game.
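To make the notion of a static strategy concrete, the following is a minimal sketch of a fixed state-to-action mapping for a fighting game. The state features (distance to the opponent, whether the opponent is attacking) and the action names are illustrative assumptions, not taken from any particular game or from the source; the point is only that the same state always yields the same action, which is what makes such an opponent predictable and exploitable.

```python
from enum import Enum, auto


class Action(Enum):
    """Hypothetical fighting-game actions used purely for illustration."""
    ATTACK = auto()
    BLOCK = auto()
    JUMP = auto()
    RETREAT = auto()


def static_strategy(distance_to_opponent: float, opponent_is_attacking: bool) -> Action:
    """A fixed mapping from a simplified game state to an action.

    The rules never change, so an observant player can learn them and
    exploit the same weakness in every match.
    """
    if opponent_is_attacking and distance_to_opponent < 1.0:
        return Action.BLOCK
    if distance_to_opponent < 1.0:
        return Action.ATTACK
    if distance_to_opponent < 3.0:
        return Action.JUMP
    return Action.RETREAT


if __name__ == "__main__":
    # The same state always produces the same action, no matter how often
    # the player repeats the situation.
    print(static_strategy(0.5, opponent_is_attacking=True))  # Action.BLOCK
    print(static_strategy(0.5, opponent_is_attacking=True))  # Action.BLOCK again
```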