Modeling of Suppliers' Learning Behaviors in an Electricity Market Environment

The day-ahead electricity market is modeled as a multi-agent system comprising supplier agents, load-serving entities, and a market operator. Market-clearing results are simulated under two scenarios: one in which agents have learning capabilities and one in which agents report their true marginal costs. The results show that, with Q-learning, electricity suppliers earn higher profits than in the no-learning scenario through strategic gaming, and as a result the locational marginal price (LMP) at each bus is substantially higher.
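The supplier behavior described above can be illustrated with a minimal, self-contained sketch of tabular Q-learning, where a single supplier agent learns a bid markup over its marginal cost. The action set, reward model, and all parameter values below are illustrative assumptions, not the model used in the paper:

```python
import random

# Hypothetical single-state Q-learning sketch for a supplier agent
# choosing a bid markup over marginal cost. The toy profit function
# below is an assumption for illustration only.

ACTIONS = [0.0, 0.1, 0.2, 0.3]   # candidate markups over marginal cost
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def simulated_profit(markup, marginal_cost=20.0, demand=100.0):
    """Toy reward: a higher markup raises the price but reduces the
    quantity dispatched under a linear demand curve."""
    price = marginal_cost * (1.0 + markup)
    quantity = max(demand - 2.0 * price, 0.0)
    return (price - marginal_cost) * quantity

def q_learning(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}           # single-state Q-table
    for _ in range(episodes):
        if rng.random() < EPSILON:          # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = simulated_profit(action)
        # Standard Q-learning update; with one state, the bootstrap
        # term is simply the max over the same Q-table.
        q[action] += ALPHA * (reward + GAMMA * max(q.values()) - q[action])
    return q

q_table = q_learning()
best_markup = max(q_table, key=q_table.get)
```

Under these assumed numbers, the learned markup is strictly positive, mirroring the paper's finding that learning agents bid above marginal cost and thereby earn higher profits than truthful bidders.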

[1]  Peter Dayan,et al.  Q-learning , 1992, Machine Learning.

[2]  Wang Jing,et al.  Simulation of Large Customer Price Response Under Time-of-Use Electricity Pricing Based on Multi-Agent System , 2006, 2006 International Conference on Power System Technology.

[3]  Daniel Ashlock,et al.  Comprehensive bidding strategies with genetic programming/finite state automata , 1999 .

[4]  Vijay Vittal,et al.  Adaptation in load shedding under vulnerable operating conditions , 2002 .

[5]  Yoav Shoham,et al.  If multi-agent learning is the answer, what is the question? , 2007, Artif. Intell..

[6]  T. Das,et al.  A Reinforcement Learning Model to Assess Market Power Under Auction-Based Energy Pricing , 2007, IEEE Transactions on Power Systems.

[7]  Michael Winikoff,et al.  Developing intelligent agent systems - a practical guide , 2004, Wiley series in agent technology.

[8]  L. Tesfatsion,et al.  Dynamic Testing of Wholesale Power Market Designs: An Open-Source Agent-Based Framework , 2007 .

[9]  S.D.J. McArthur,et al.  Multi-agent systems for diagnostic and condition monitoring applications , 2003, IEEE Power Engineering Society General Meeting, 2004..

[10]  A.L. Dimeas,et al.  Operation of a multiagent system for microgrid control , 2005, IEEE Transactions on Power Systems.

[11]  Li Liu,et al.  Fault detection, diagnostics, and prognostics: software agent solutions , 2005, IEEE Electric Ship Technologies Symposium, 2005..

[12]  C. Watkins Learning from delayed rewards , 1989 .

[13]  Jerome Yen,et al.  A decentralized approach for optimal wholesale cross-border trade planning using multi-agent technology , 2001 .

[14]  S. Borenstein,et al.  Measuring Market Inefficiencies in California's Restructured Wholesale Electricity Market , 2002 .

[15]  Timothy D. Mount,et al.  Testing the Effects of Holding Forward Contracts On the Behavior of Suppliers in an Electricity Auction , 2005, Proceedings of the 38th Annual Hawaii International Conference on System Sciences.

[16]  Using software agents to test electric markets and systems , 2005, IEEE Power Engineering Society General Meeting, 2005.

[17]  T. Nagata,et al.  A multi-agent approach to power system restoration , 2002 .

[18]  Michael Wooldridge,et al.  Agent technology: foundations, applications, and markets , 1998 .

[19]  Thomas G. Dietterich What is machine learning? , 2020, Archives of Disease in Childhood.

[20]  E. Gnansounou,et al.  Market oriented planning of power generation expansion using agent-based model , 2004, IEEE PES Power Systems Conference and Exposition, 2004..

[21]  David M. Kreps,et al.  Learning Mixed Equilibria , 1993 .

[22]  Agostino Poggi,et al.  Developing Multi-agent Systems with JADE , 2007, ATAL.

[23]  J.R. McDonald,et al.  An agent-based anomaly detection architecture for condition monitoring , 2005, IEEE Transactions on Power Systems.

[24]  Nicholas R. Jennings,et al.  Intelligent agents: theory and practice , 1995, The Knowledge Engineering Review.

[25]  Michael P. Wellman,et al.  Nash Q-Learning for General-Sum Stochastic Games , 2003, J. Mach. Learn. Res..

[26]  J.A. Momoh,et al.  Navy ship power system restoration using multi-agent approach , 2006, 2006 IEEE Power Engineering Society General Meeting.