Game-Theoretical Methods in Control of Engineering Systems: An Introduction to the Special Issue

Interest in the control community in large-scale distributed systems has been growing, and numerous techniques have been developed to address the main challenges these problems pose. One way to approach such problems is through a multiagent systems framework, which can be cast in game-theoretical terms. Game theory has traditionally described the behavior of decision makers (players) who act on information available from only some, or from all, of the other agents. From this perspective, game theory shares common ground with control systems problems, particularly those with distributed topologies, in which the interconnection of different elements (agents) produces a global behavior that depends on their local interactions. Applications such as games played on networks, consensus/synchronization dynamics, and energy and transportation networks draw on many branches of game theory, including noncooperative, cooperative, dynamic, mean-field, and evolutionary games. In light of this growing interest, this special issue collects different techniques and points of view on the interaction between game-theoretical methods and automatic control, with the aim of closing the gap between the broad background in game theory and related disciplines and its application to engineering problems of a diverse nature.