Large distributed control systems can typically be modeled as a hierarchical structure with two physical layers: a console level computer (CLC) layer and a front end computer (FEC) layer. The control system of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) consists of more than 500 FECs, each acting as a server providing services to a large number of clients. Hence, the interactions between a server and its clients are crucial to overall system performance. These interactions arise in different scenarios. In some cases the server has limited processing capacity and is queried by a large number of clients; such cases can create a bottleneck, as heavy traffic can slow the server down or even crash it, leaving it temporarily unresponsive. In other cases the server has sufficient capacity to process all the traffic from its clients. We pursue different goals in these two cases. In the first case, we aim to manage the clients' activities so that as many of their requests as possible are processed while the server remains operational. In the second case, we seek an operating point at which the server's resources are utilized efficiently. Moreover, we add a real-world time constraint to the latter case: clients expect responses from the server within a given time window. In this work, we analyze these cases from a game-theoretic perspective, modeling the underlying interactions as a repeated game among clients played in discrete time slots. To manage client activity, we apply a reinforcement learning procedure as a baseline for regulating client behavior and then propose a memory scheme to improve its performance. Next, for each scenario we design a corresponding reward function that steers clients toward the appropriate optimization goal. Through extensive simulations we show, first, that the memory structure significantly improves the learning ability of the baseline procedure and, second, that with appropriate reward functions, client activity can be effectively managed to achieve the different optimization goals.
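To make the setup concrete, the sketch below simulates a repeated game of this kind in discrete time slots: each client chooses whether to send a request, a capacity-limited server serves at most a fixed number of requests per slot, and clients update simple action values from the rewards they receive, with a small success-history "memory" as state. This is a minimal illustration only; the class names, constants, and reward shaping are assumptions for the sketch and are not the exact procedure or parameters used in the paper.

```python
import random

N_CLIENTS = 20        # number of clients querying the server (assumed)
CAPACITY = 8          # requests the server can process per slot (assumed)
SLOTS = 10_000        # length of the repeated game
EPSILON = 0.1         # exploration rate
ALPHA = 0.05          # learning rate for the value update
MEMORY = 5            # how many recent send outcomes a client remembers

ACTIONS = ("WAIT", "SEND")

class Client:
    def __init__(self):
        # Action-value estimates indexed by a small memory "state":
        # the number of successes among the last MEMORY sends. This stands
        # in for the memory scheme described in the abstract.
        self.values = {}
        self.history = []          # recent send outcomes: True = served

    def state(self):
        return sum(self.history[-MEMORY:])

    def act(self):
        s = self.state()
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        q = self.values.get(s, {"WAIT": 0.0, "SEND": 0.0})
        return max(q, key=q.get)

    def learn(self, action, reward):
        s = self.state()
        q = self.values.setdefault(s, {"WAIT": 0.0, "SEND": 0.0})
        q[action] += ALPHA * (reward - q[action])

def reward(action, served):
    # Illustrative reward shaping for the limited-server scenario:
    # a served request pays off, a dropped request is penalized,
    # and waiting is neutral.
    if action == "WAIT":
        return 0.0
    return 1.0 if served else -1.0

clients = [Client() for _ in range(N_CLIENTS)]
throughput = 0

for _ in range(SLOTS):
    actions = [c.act() for c in clients]
    senders = [i for i, a in enumerate(actions) if a == "SEND"]
    # The overloaded server serves a random subset up to its capacity.
    served = set(random.sample(senders, min(CAPACITY, len(senders))))
    throughput += len(served)
    for i, (c, a) in enumerate(zip(clients, actions)):
        ok = i in served
        c.learn(a, reward(a, ok))
        if a == "SEND":
            c.history.append(ok)

print(f"average served requests per slot: {throughput / SLOTS:.2f}")
```

Under these assumptions, clients gradually learn to hold back enough traffic that the offered load stays near the server's capacity; changing the reward function (for example, penalizing responses that miss a time window) would steer the learned behavior toward a different operating point.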