Coordination of Electric Vehicle Charging Through Multiagent Reinforcement Learning

The number of Electric Vehicle (EV) owners is expected to increase significantly in the near future, since EVs are regarded as valuable assets for both transportation and energy storage. However, recharging a large fleet of EVs during peak hours may overload transformers in the distribution grid. Although several methods have been proposed to flatten peak-hour loads and recharge EVs as fairly as possible in the available time, they typically either focus on a single type of tariff or make strong assumptions about the distribution grid. In this article, we propose the MultiAgent Selfish-COllaborative architecture (MASCO), a Multiagent Multiobjective Reinforcement Learning architecture that aims at simultaneously minimizing energy costs and avoiding transformer overloads while still allowing EVs to recharge. MASCO makes minimal assumptions about the distribution grid, works under any type of tariff, and can be configured to follow consumer preferences. We perform experiments with real energy prices and show empirically that MASCO succeeds in balancing energy costs and transformer load.
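To make the multiobjective setup concrete, the sketch below shows one common way such a trade-off can be expressed: a tabular Q-learning agent per charger that scalarizes an energy-cost penalty and a transformer-load penalty with consumer-preference weights. This is an illustrative assumption, not the MASCO algorithm from the article; all class names, state features, tariff values, and parameters are hypothetical.

```python
"""Minimal sketch (assumed, not the paper's implementation): per-charger
Q-learning with a weighted scalarization of two objectives, energy cost and
transformer load."""
import random
from collections import defaultdict


class ChargerAgent:
    def __init__(self, n_actions=2, alpha=0.1, gamma=0.95, epsilon=0.1,
                 w_cost=0.5, w_load=0.5):
        # Action 0 = stay idle, action 1 = charge during the current time slot.
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Consumer-preference weights over the two objectives (assumed values).
        self.w_cost, self.w_load = w_cost, w_load

    def act(self, state):
        # Epsilon-greedy selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return max(range(self.n_actions), key=values.__getitem__)

    def update(self, state, action, cost_penalty, load_penalty, next_state):
        # Scalarize both objectives into a single reward before the TD update.
        reward = -(self.w_cost * cost_penalty + self.w_load * load_penalty)
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def simulate(n_agents=3, episodes=200, capacity=2.0):
    """Toy loop: several agents share one transformer; each observes
    (hour, battery level, overload flag) and learns when to charge."""
    prices = [0.10] * 18 + [0.30] * 6            # assumed off-peak / peak tariff
    agents = [ChargerAgent() for _ in range(n_agents)]
    for _ in range(episodes):
        batteries = [0.0] * n_agents
        for hour in range(24):
            states = [(hour, round(batteries[i], 1), 0) for i in range(n_agents)]
            actions = [a.act(s) for a, s in zip(agents, states)]
            load = sum(actions)                   # simultaneous charging sessions
            overload = max(0.0, load - capacity)  # penalized only above capacity
            for i, (agent, action) in enumerate(zip(agents, actions)):
                if action == 1:
                    batteries[i] = min(1.0, batteries[i] + 0.1)
                cost = prices[hour] if action == 1 else 0.0
                next_state = ((hour + 1) % 24, round(batteries[i], 1),
                              int(overload > 0))
                agent.update(states[i], action, cost, overload, next_state)
    return agents


if __name__ == "__main__":
    simulate()
```

Adjusting the assumed weights w_cost and w_load is one simple way to encode the consumer preferences mentioned above: a cost-sensitive owner would increase w_cost, while a grid operator's policy would emphasize w_load.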
