Robust Regional Coordination of Inverter-Based Volt/Var Control via Multi-Agent Deep Reinforcement Learning

Inverter-based Volt/Var control (VVC) methods can effectively mitigate voltage deviations and constraint violations and reduce network power losses in active distribution networks. Conventional VVC mainly relies on rule-based, mathematical, or heuristic methods. However, these methods can be inefficient or even infeasible for large-scale systems, especially when operational uncertainties at both spatial and temporal scales are considered. To this end, this paper first formulates a multi-region coordinated VVC optimization problem that simultaneously minimizes bus voltage deviations and network power losses. The problem is then cast as a partially observable Markov game, and a multi-agent deep deterministic policy gradient (MADDPG) algorithm is modified to solve it. Each region, a sub-network with its own control center, is modeled as an agent that learns to optimize inverter reactive power output setpoints through exploration in a virtual environment and an improved neural network training procedure. Spatial and temporal uncertainties of photovoltaic power generation and loads are modeled via stochastic programming as scenarios within the MADDPG algorithm, allowing it to account for these uncertainties and guarantee solution robustness. The paper thus proposes a robust, regionally coordinated VVC method; numerical simulations demonstrate high computational efficiency and robust optimality.
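To make the objective concrete, the sketch below illustrates a scenario-based evaluation of a candidate joint reactive-power setpoint in the spirit of the stochastic-programming formulation: the reward penalizes bus voltage deviations from 1.0 p.u. and network losses, averaged over sampled PV/load scenarios. All names, weights, and the toy power-flow surrogate are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def vvc_reward(voltages, p_loss, w_v=1.0, w_l=0.5):
    # Negative weighted sum of voltage deviation (from 1.0 p.u.) and network loss;
    # weights w_v, w_l are illustrative placeholders.
    dev = np.abs(voltages - 1.0).sum()
    return -(w_v * dev + w_l * p_loss)

def toy_power_flow(pv, load, q):
    # Hypothetical surrogate for a power-flow solve: bus voltage rises linearly
    # with net injection, and losses grow quadratically with it.
    net = pv - load + 0.3 * q
    voltages = 1.0 + 0.05 * net
    p_loss = 0.01 * float(np.sum(net ** 2))
    return voltages, p_loss

def expected_reward(q_setpoints, n_scenarios=100, power_flow=toy_power_flow):
    # Stochastic-programming-style evaluation: average the VVC reward over
    # sampled PV-generation and load scenarios (uniform draws as placeholders).
    total = 0.0
    for _ in range(n_scenarios):
        pv = rng.uniform(0.0, 1.0, size=len(q_setpoints))    # PV output scenario
        load = rng.uniform(0.5, 1.5, size=len(q_setpoints))  # load scenario
        voltages, p_loss = power_flow(pv, load, q_setpoints)
        total += vvc_reward(voltages, p_loss)
    return total / n_scenarios
```

In the MADDPG setting described above, each regional agent's critic would be trained on such scenario-averaged rewards, while its actor maps local observations to the reactive-power setpoints `q_setpoints`; this snippet only shows the reward side of that loop.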