Deep Reinforcement Learning Enabled Physical-Model-Free Two-Timescale Voltage Control Method for Active Distribution Systems

Active distribution networks face frequent and rapid voltage violations due to renewable energy integration. Conventional model-based voltage control methods rely on accurate parameters of the distribution network, which are difficult to obtain in practice. This paper proposes a novel physical-model-free two-timescale voltage control framework for active distribution systems. To achieve fast control of PV inverters, the whole network is first partitioned into several sub-networks using voltage-reactive power sensitivity. The scheduling of PV inverters across the sub-networks is then formulated as a Markov game and solved by a multi-agent soft actor-critic (MASAC) algorithm, in which each sub-network is modeled as an intelligent agent. All agents are trained in a centralized manner to learn a coordinated strategy, while execution relies only on local information for fast response. On the slower timescale, on-load tap changers (OLTCs) and switched capacitors are coordinated by a single-agent SAC algorithm that uses global information and accounts for the control behaviors of the inverters. In particular, the two levels of agents are trained concurrently with information exchange, according to a reward signal calculated from a data-driven surrogate model. Comparative tests against benchmark methods on the IEEE 33-bus and 123-bus systems and a 342-node low-voltage distribution system demonstrate that the proposed method effectively mitigates fast voltage violations and achieves systematic coordination of different voltage regulation assets without knowledge of an accurate system model.
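The following minimal Python sketch illustrates the two-timescale execution structure described above: decentralized fast-timescale MASAC agents acting on local observations for PV inverters, a single slow-timescale SAC agent acting on global information for OLTCs and capacitors, and a reward computed from a surrogate model. Names such as `masac_policy`, `sac_policy`, `surrogate_reward`, and `measure_voltages` are hypothetical placeholders for the trained actor networks, data-driven surrogate, and feeder measurements; this is not the authors' implementation.

```python
# Hedged sketch of the two-timescale control loop (placeholders, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

N_SUBNETS = 3            # sub-networks from the voltage-reactive power sensitivity partition
FAST_STEPS_PER_SLOW = 4  # PV inverters act on the fast timescale; OLTC/capacitors on the slow one


def masac_policy(agent_id: int, local_obs: np.ndarray) -> np.ndarray:
    """Decentralized fast-timescale actor: reactive power set-points for one sub-network.
    Placeholder for a trained MASAC actor mapping local observations to actions."""
    return np.tanh(rng.standard_normal(local_obs.shape))


def sac_policy(global_obs: np.ndarray) -> dict:
    """Slow-timescale actor: OLTC tap and capacitor switching decisions.
    Placeholder for a trained single-agent SAC policy using global information."""
    return {"oltc_tap": int(rng.integers(-2, 3)), "cap_on": bool(rng.integers(0, 2))}


def surrogate_reward(voltages: np.ndarray) -> float:
    """Stand-in for the data-driven surrogate reward: penalize deviations outside [0.95, 1.05] p.u."""
    violation = np.maximum(voltages - 1.05, 0.0) + np.maximum(0.95 - voltages, 0.0)
    return -float(violation.sum())


def measure_voltages(n_buses: int = 33) -> np.ndarray:
    """Stand-in for bus voltage measurements (or a learned surrogate of the feeder)."""
    return 1.0 + 0.03 * rng.standard_normal(n_buses)


# Two-timescale execution: the slow agent acts once per K fast steps,
# while the fast agents act every step using only local observations.
for slow_step in range(2):
    global_obs = measure_voltages()
    slow_action = sac_policy(global_obs)
    for fast_step in range(FAST_STEPS_PER_SLOW):
        local_actions = [
            masac_policy(k, measure_voltages()[k::N_SUBNETS]) for k in range(N_SUBNETS)
        ]
        r = surrogate_reward(measure_voltages())
        print(f"slow {slow_step}, fast {fast_step}: tap={slow_action['oltc_tap']}, reward={r:.4f}")
```

During training, the reward from the surrogate model would be fed back to both levels of agents concurrently; here the loop only demonstrates how the two decision frequencies interleave at execution time.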
