A unified decision-making framework for supply and demand management in microgrid networks

This paper considers two important problems, one on the supply side and one on the demand side, and studies both in a unified framework. On the supply side, we study the problem of energy sharing among microgrids, with the goal of maximizing the profit obtained from selling power while not deviating significantly from customer demand. Under a shortage of power, this problem instead becomes one of deciding how much power to buy under dynamically varying prices. On the demand side, we consider the problem of optimally scheduling time-adjustable demand, i.e., loads with flexible time windows in which they can be scheduled. While previous works have treated these two problems in isolation, we combine them and provide a unified Markov decision process (MDP) framework. We then apply Q-learning, a popular model-free reinforcement learning technique, to obtain the optimal policy. Through simulations, we show that the policy obtained by solving our MDP model provides higher profit to the microgrids.
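As a concrete illustration of the solution approach, the following is a minimal sketch of tabular Q-learning on a generic finite MDP. The environment interface env_step, the state and action counts, and the toy random dynamics are illustrative assumptions, not the paper's actual microgrid model, which would encode quantities such as battery level, customer demand, and the prevailing price in the state.

```python
import numpy as np

def q_learning(env_step, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.95, epsilon=0.1, horizon=24):
    """Tabular Q-learning on a generic finite MDP.

    env_step(s, a) -> (next_state, reward) is a placeholder for the
    microgrid environment (demand, prices, available supply, etc.).
    """
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = int(rng.integers(n_states))      # arbitrary initial state
        for _ in range(horizon):             # e.g., one day of hourly decisions
            # epsilon-greedy exploration
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s_next, r = env_step(s, a)
            # standard Q-learning temporal-difference update
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q  # greedy policy: pi(s) = argmax_a Q[s, a]

if __name__ == "__main__":
    # Toy usage with random illustrative dynamics (not the paper's model).
    rng = np.random.default_rng(1)
    P = rng.random((10, 4, 10))
    P /= P.sum(axis=2, keepdims=True)        # transition probabilities
    R = rng.random((10, 4))                  # rewards per (state, action)
    def env_step(s, a):
        return int(rng.choice(10, p=P[s, a])), R[s, a]
    Q = q_learning(env_step, n_states=10, n_actions=4)
    print("Greedy action per state:", np.argmax(Q, axis=1))
```

The greedy policy with respect to the learned Q-table then specifies, for each state, which action to take, e.g., how much power to sell or buy, or when to schedule a deferrable load.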
