Learning to Run a Power Network Challenge: A Retrospective Analysis

Power networks, responsible for transporting electricity across large geographical regions, are complex infrastructures on which modern life critically depends. Variations in demand and production profiles, amplified by increasing renewable energy integration, together with the constraints of high-voltage network technology, pose a real challenge for human operators who must optimize electricity transportation while avoiding blackouts. Motivated to investigate the potential of Artificial Intelligence methods for enabling adaptability in power network operation, we designed the L2RPN (Learning to Run a Power Network) challenge to encourage the development of reinforcement learning solutions to key problems arising in next-generation power networks. The NeurIPS 2020 competition was well received by the international community, attracting over 300 participants worldwide. The main contribution of this challenge is our comprehensive 'Grid2Op' framework, and its associated benchmark, which replays realistic sequential network operation scenarios. The Grid2Op framework, which is open source and easily reusable, allows users to define new environments with its companion GridAlive ecosystem. Grid2Op relies on existing non-linear physical power network simulators and lets users create series of perturbations and challenges representative of two important problems: a) the uncertainty resulting from the increased use of unpredictable renewable energy sources, and b) the robustness required to withstand contingent (unplanned) line disconnections. In this paper, we present the highlights of the NeurIPS 2020 competition. We describe the benchmark suite and analyse the winning solutions, including one demonstration of super-human performance. We share our organizational insights for running a successful competition and conclude with open research avenues. Given the success of the challenge, we expect our work to foster research toward more sustainable solutions for power network operations.
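To make the sequential-operation setting concrete, the following minimal sketch shows how an agent steps through an episode with the open-source Grid2Op package, which exposes a gym-like interface. The environment name `l2rpn_case14_sandbox` and the do-nothing baseline action are illustrative assumptions; consult the Grid2Op documentation for the exact identifiers of the NeurIPS 2020 competition environments.

```python
# Minimal sketch of a Grid2Op episode loop (gym-like interface).
# Assumes the open-source `grid2op` package is installed; the
# environment name below is the documented sandbox environment,
# not necessarily the NeurIPS 2020 competition one.
import grid2op

env = grid2op.make("l2rpn_case14_sandbox")

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # The empty dict builds the "do nothing" action, the usual
    # baseline against which operation policies are compared.
    action = env.action_space({})
    obs, reward, done, info = env.step(action)
    total_reward += reward

print(f"Episode finished, cumulative reward: {total_reward:.2f}")
```

A learned agent would replace the do-nothing choice with topology reconfigurations or redispatch actions built through the same `env.action_space` interface, which is what makes the framework reusable across the robustness and uncertainty tracks described above.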
