Autonomous Data-Driven Decision-Making in Smart Electricity Markets

For the vision of a Smart Grid to materialize, substantial advances in intelligent decentralized control mechanisms are required. We propose a novel class of autonomous broker agents for retail electricity trading that can operate in a wide range of Smart Electricity Markets and that are capable of deriving long-term, profit-maximizing policies. Our brokers use Reinforcement Learning with function approximation; they can accommodate arbitrary economic signals from their environments and learn efficiently over the large state spaces that these signals induce. Our design is the first that can accommodate an offline training phase to automatically optimize the broker for particular market conditions. We demonstrate the performance of our design in a series of experiments using real-world energy market data, and find that it outperforms previous approaches by a significant margin.
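To make the general approach more concrete, the following is a minimal sketch of a broker that learns a retail pricing policy with Q-learning and linear function approximation, pre-trained offline on historical market episodes. It is not the authors' implementation: the feature count, the discrete set of tariff price actions, the learning-rate and exploration parameters, and the synthetic stand-in for historical data are all illustrative assumptions.

```python
# Sketch only: Q-learning with linear function approximation for a retail
# electricity broker. All identifiers, dimensions, and hyperparameters are
# hypothetical and chosen solely to keep the example self-contained.

import numpy as np

N_ACTIONS = 5      # assumed discrete tariff price adjustments
N_FEATURES = 8     # assumed number of economic signals per state
ALPHA = 0.01       # learning rate
GAMMA = 0.95       # discount factor for long-term profit
EPSILON = 0.1      # exploration rate

# One weight vector per action: Q(s, a) = w[a] . phi(s)
weights = np.zeros((N_ACTIONS, N_FEATURES))


def q_values(features):
    """Approximate Q(s, a) for all actions from the state feature vector."""
    return weights @ features


def choose_action(features, rng):
    """Epsilon-greedy action selection over the approximated Q-values."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(features)))


def td_update(features, action, reward, next_features):
    """One Q-learning step on the linear approximator."""
    target = reward + GAMMA * np.max(q_values(next_features))
    td_error = target - q_values(features)[action]
    weights[action] += ALPHA * td_error * features


def offline_training(episodes, rng):
    """Offline phase: replay historical market episodes to pre-train the broker.

    Each episode is a pair (feature sequence, reward sequence); here the data
    are randomly generated purely to keep the sketch runnable.
    """
    for features_seq, rewards_seq in episodes:
        for t in range(len(features_seq) - 1):
            a = choose_action(features_seq[t], rng)
            td_update(features_seq[t], a, rewards_seq[t], features_seq[t + 1])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fabricated stand-in for historical market data, for illustration only.
    episodes = [
        (rng.normal(size=(24, N_FEATURES)), rng.normal(size=24))
        for _ in range(100)
    ]
    offline_training(episodes, rng)
    sample_state = rng.normal(size=N_FEATURES)
    print("Greedy action for a sample state:", int(np.argmax(q_values(sample_state))))
```

After offline training, such a broker would act greedily (or near-greedily) on live market signals while continuing to update its weights online; the paper's actual state representation and action set may differ from this sketch.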
