Toward a Reinforcement Learning Environment Toolbox for Intelligent Electric Motor Control

Electric motors are used in many applications, and their efficiency is strongly dependent on their control. Among others, linear feedback control and model predictive control are well established in the scientific literature and in industrial practice. A novel approach is to use reinforcement learning (RL) to have an agent learn electric drive control from scratch merely by interacting with a suitable control environment. RL has achieved remarkable, even superhuman, performance in many games (e.g., Atari classics or Go) and is becoming increasingly popular in control tasks such as cart-pole or swing-up pendulum benchmarks. In this work, the open-source Python package gym-electric-motor (GEM) is developed to ease the training of RL agents for electric motor control. Furthermore, this package can be used to compare the trained agents with other state-of-the-art control approaches. It is based on the OpenAI Gym framework, which provides a widely used interface for the evaluation of RL agents. The package covers different dc and three-phase motor variants, as well as different power electronic converters and mechanical load models. Owing to the modular setup of the proposed toolbox, it can easily be extended with additional motor, load, and power electronic converter models in the future. Furthermore, secondary effects, such as converter interlocking time or noise, are considered. An intelligent controller example based on the deep deterministic policy gradient algorithm that controls a series dc motor is presented and compared to a cascaded proportional-integral controller as a baseline for future research. Here, safety requirements are particularly highlighted as an important constraint for data-driven control algorithms applied to electric energy systems. Fellow researchers are encouraged to use the GEM framework in their own RL investigations or to contribute to the functional scope of the package (e.g., further motor types).
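Because the toolbox follows the OpenAI Gym interface, an interaction loop with a GEM environment looks like any other Gym rollout. The following is a minimal sketch with a random-action placeholder where an RL agent (e.g., DDPG) would act; the environment ID 'DcSeriesCont-v1' and the exact layout of the returned observation are assumptions and may differ between GEM versions.

```python
# Minimal sketch of a Gym-style interaction loop with a GEM environment.
# NOTE: the environment ID below is illustrative only; valid IDs depend on
# the installed gym-electric-motor version (see the package documentation).
import gym_electric_motor as gem

env = gem.make('DcSeriesCont-v1')      # assumed ID: continuous-control series dc motor

state = env.reset()                    # initial observation (motor state and reference)
returns = 0.0

for _ in range(1000):                  # bounded rollout for illustration
    action = env.action_space.sample()            # placeholder policy; a DDPG agent would act here
    state, reward, done, info = env.step(action)  # classic Gym 4-tuple step API
    returns += reward
    if done:                                      # episode ends, e.g., on a limit violation
        state = env.reset()

print(f"Accumulated reward: {returns:.2f}")
```

In a training setup, the sampled action would be replaced by the agent's policy output, and the transition tuple would be stored in a replay buffer for off-policy learning.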
