Learning in dynamic noncooperative multiagent systems

Dynamic noncooperative multiagent systems are systems in which self-interested agents interact and their interactions change over time. We investigate the problem of learning and decision making in such systems, modeling them in the framework of general-sum stochastic games with incomplete information. We design a multiagent Q-learning method and prove its convergence in the framework of stochastic games. The standard Q-learning method, a reinforcement learning method, was originally designed for single-agent systems; its convergence was proved for Markov decision processes, which are single-agent problems. Our extension broadens the framework of reinforcement learning and helps to establish a theoretical foundation for applying it to multiagent systems. We prove that our learning algorithm converges to a Nash equilibrium under certain restrictions on the game structure during learning. In our simulations of a grid-world game, these restrictions are relaxed and our learning method still converges.

In addition to model-free reinforcement learning, we also study model-based learning, where agents form models of other agents and update those models through observations of the environment. We find that the agents' mutual learning can lead to a conjectural equilibrium, in which each agent's model of the others is fulfilled and each agent behaves optimally given its expectations. Such an equilibrium state may be suboptimal: the agents may be worse off than if they had not attempted to learn models of the others at all. This poses a pitfall for multiagent learning.

We also analyze the problem of recursive modeling in a dynamic game framework. This differs from previous work, which studied recursive modeling in static or repeated games. We implement various levels of recursive models in a simulated double auction market.
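The core of the multiagent Q-learning method described earlier can be sketched in a few lines: each player keeps a Q-table over joint actions, and the update backs up the Nash value of the next state's stage game rather than a single-agent max. The sketch below is illustrative, not the dissertation's algorithm: it assumes two players, tabular Q-values, and, as a simplification, that each stage game has a pure-strategy Nash equilibrium (in general a mixed equilibrium must be computed). All names here (`pure_nash_value`, `nash_q_update`, `Qs`) are our own.

```python
import itertools

def pure_nash_value(Q1, Q2, s):
    """Return both players' payoffs at a pure-strategy Nash equilibrium
    of the stage game given by the Q-tables at state s.
    Simplifying assumption: such an equilibrium exists; in general a
    mixed-strategy equilibrium would have to be computed instead."""
    n1 = len(Q1[s])          # number of actions for player 1
    n2 = len(Q1[s][0])       # number of actions for player 2
    for a1, a2 in itertools.product(range(n1), range(n2)):
        best1 = max(Q1[s][b][a2] for b in range(n1))   # player 1's best reply to a2
        best2 = max(Q2[s][a1][b] for b in range(n2))   # player 2's best reply to a1
        if Q1[s][a1][a2] >= best1 and Q2[s][a1][a2] >= best2:
            return Q1[s][a1][a2], Q2[s][a1][a2]
    raise ValueError("no pure-strategy equilibrium; a mixed one is needed")

def nash_q_update(Qs, i, s, a1, a2, r, s_next, alpha=0.5, gamma=0.9):
    """One learning step for player i: move Q_i(s, a1, a2) toward the
    observed reward plus the discounted Nash value of the next state's
    stage game (instead of the single-agent max over own actions)."""
    v1, v2 = pure_nash_value(Qs[0], Qs[1], s_next)
    nash_v = v1 if i == 0 else v2
    Qs[i][s][a1][a2] += alpha * (r + gamma * nash_v - Qs[i][s][a1][a2])
```

The only difference from standard Q-learning is the backup target: replacing the Nash value with `max` over the agent's own actions recovers the single-agent rule, which is exactly the extension the dissertation formalizes and proves convergent under its restrictions.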
Our experiments show that an agent's performance can be quite sensitive to its assumptions about the policies of other agents, and that when there is substantial uncertainty about the other agents' level of sophistication, reducing the level of recursion may be the best policy.
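The idea of recursion levels can be made concrete with a small level-k sketch: a level-0 agent plays a fixed default, and a level-k agent best responds to an opponent modelled as level k-1. This is an illustrative one-shot matrix-game version under our own naming (`level_k_action`), not the double auction agents used in the experiments; the payoff numbers in the test are a standard prisoner's-dilemma example, chosen only for illustration.

```python
def level_k_action(payoff_own, payoff_opp, k, level0_action=0):
    """Action chosen by a level-k reasoner in a one-shot matrix game.
    Payoff matrices are indexed [own_action][opponent_action].
    Level 0 plays a fixed default action; level k best responds to an
    opponent modelled as level k-1 (roles swap on the recursive call)."""
    if k == 0:
        return level0_action
    predicted_opp = level_k_action(payoff_opp, payoff_own, k - 1, level0_action)
    actions = range(len(payoff_own))
    # Best response to the predicted opponent action.
    return max(actions, key=lambda a: payoff_own[a][predicted_opp])
```

The sensitivity the experiments report shows up directly here: the level-k action depends entirely on the assumed opponent model, so a wrong guess about the opponent's level can make deeper recursion worse than a shallower one.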