Managing Power Flows in Microgrids Using Multi-Agent Reinforcement Learning

Smart Microgrids bring numerous challenges, including how to leverage the potential benefits of renewable energy sources while maintaining acceptable levels of reliability in the power infrastructure. One way to tackle this problem is to use intelligent storage systems (batteries and supercapacitors): charging and discharging them at the proper times, in response to the variability of the renewable sources, makes it possible to balance supply and demand at all times. Reinforcement Learning (RL) is a branch of artificial intelligence encompassing techniques that allow agents (in our case, electrical devices) to learn to behave rationally, that is, to perform sequences of decisions that optimize a given performance criterion. Its theoretically sound framework has made RL increasingly popular for solving difficult control problems. In this paper, a multi-agent reinforcement learning technique is proposed as an exploratory approach for controlling a grid-tied microgrid in a fully distributed manner, using multiple energy storage units and the grid. Preliminary simulation results under different scenarios show the feasibility and validity of the approach on a test microgrid and open the way for future work on agent-based learning control strategies in Smart Microgrids.
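
The abstract does not detail the learning algorithm behind these charge/discharge decisions, so the sketch below is only illustrative: it assumes independent tabular Q-learning agents, one per storage unit, each observing the local net load and its own state of charge, with the main grid absorbing whatever residual remains. All class names, parameters, and the reward shape are hypothetical, not taken from the paper.

```python
import random
from collections import defaultdict

ACTIONS = ("charge", "idle", "discharge")

class StorageAgent:
    """One storage unit learning when to charge/discharge via tabular Q-learning."""

    def __init__(self, capacity_kwh=10.0, power_kw=2.0,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.capacity, self.power = capacity_kwh, power_kw
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.soc = 0.5 * capacity_kwh              # state of charge (kWh)
        self.q = defaultdict(float)                # Q[(state, action)]

    def state(self, net_load_kw):
        # Local, discretized observation: sign of the net load and SoC decile.
        sign = (net_load_kw > 0) - (net_load_kw < 0)
        return (sign, int(10 * self.soc / self.capacity))

    def act(self, state):
        if random.random() < self.epsilon:         # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def apply(self, action):
        # Power drawn from (+) or injected into (-) the microgrid bus.
        # SoC saturation effects on the delivered power are ignored for brevity.
        flow = {"charge": self.power, "idle": 0.0, "discharge": -self.power}[action]
        self.soc = min(self.capacity, max(0.0, self.soc + flow))
        return flow

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q[(s_next, b)] for b in ACTIONS)
        td_error = reward + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td_error


def run_episode(agents, net_load_profile):
    """One pass over a net-load profile (demand minus renewable output, in kW)."""
    for t in range(len(net_load_profile) - 1):
        load = net_load_profile[t]
        states = [ag.state(load) for ag in agents]
        actions = [ag.act(s) for ag, s in zip(agents, states)]
        residual = load + sum(ag.apply(a) for ag, a in zip(agents, actions))
        reward = -abs(residual)                    # shared goal: minimal grid exchange
        next_load = net_load_profile[t + 1]
        for ag, s, a in zip(agents, states, actions):
            ag.learn(s, a, reward, ag.state(next_load))
```

A real controller would additionally have to account for conversion losses, grid price signals, and some coordination mechanism among the agents; the sketch only illustrates the distributed charge/discharge decision loop that the abstract refers to.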
