An Edge-Cloud Integrated Solution for Buildings Demand Response Using Reinforcement Learning

Buildings, as major energy consumers, can provide great untapped demand response (DR) resources for grid services, yet their real-life participation remains low. A major impediment to popularizing DR in buildings is the lack of cost-effective automation systems that can be widely adopted. Existing optimization-based smart building control algorithms suffer from high costs of both building-specific modeling and on-demand computing resources. To tackle these issues, this paper proposes a cost-effective edge-cloud integrated solution using reinforcement learning (RL). Besides RL's ability to solve sequential optimal decision-making problems, its adaptability to easy-to-obtain building models and its offline learning capability are likely to reduce the controller's implementation cost. Using a surrogate building model learned automatically from building operation data, an RL agent learns an optimal control policy on cloud infrastructure, and the policy is then distributed to edge devices for execution. Simulation results demonstrate the control efficacy and learning efficiency in buildings of different sizes. A preliminary cost analysis of a 4-zone commercial building shows that the annual cost of optimal policy training is only 2.25% of the DR incentive received. These results suggest an approach with a higher return on investment for buildings to participate in DR programs.
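The edge-cloud workflow described above can be illustrated with a minimal sketch: a toy single-zone surrogate model (the dynamics, setpoint, and all names here are illustrative assumptions, not the paper's actual model), tabular Q-learning standing in for the cloud-side training, and the resulting policy exported as a plain state-to-action table that an edge device could execute without any runtime optimization.

```python
import random

random.seed(0)  # reproducible toy example

# Hypothetical surrogate zone model, standing in for one learned
# automatically from building operation data.
class SurrogateZone:
    def __init__(self, temp):
        self.temp = temp

    def step(self, action):
        # action 1 = cooling on (temperature drops), 0 = off (ambient drift up)
        self.temp += -1.0 if action == 1 else 0.5
        # Reward penalizes deviation from an assumed 22 C setpoint plus energy use
        return -(abs(self.temp - 22.0) + 0.1 * action)

def discretize(temp):
    return int(round(temp))

def train_policy(episodes=200, alpha=0.1, gamma=0.9, eps=0.2):
    """Cloud-side training: tabular Q-learning against the surrogate model."""
    q = {}
    for _ in range(episodes):
        env = SurrogateZone(temp=random.uniform(18, 28))
        for _ in range(48):  # 48 control steps per training episode
            s = discretize(env.temp)
            q.setdefault(s, [0.0, 0.0])
            # Epsilon-greedy action selection
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: q[s][x])
            r = env.step(a)
            s2 = discretize(env.temp)
            q.setdefault(s2, [0.0, 0.0])
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    # "Distribute" the trained policy: a plain state -> action table that an
    # edge device can evaluate with a constant-time lookup.
    return {s: max((0, 1), key=lambda a: qa[a]) for s, qa in q.items()}

policy = train_policy()
# Edge-side execution: with the dynamics above, cooling should be
# preferred when the zone is warmer than the setpoint.
print(policy.get(26))
```

The point of the split is that the expensive part (many simulated episodes against the surrogate model) runs once on cloud infrastructure, while the artifact shipped to the edge is trivially cheap to evaluate, which mirrors the cost argument the abstract makes.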
