RL-Controller: a reinforcement learning framework for active structural control

To maintain structural integrity and functionality during the design life cycle of a structure, engineers are expected to accommodate natural hazards as well as operational load levels. Active control systems are an efficient solution for structural response control when a structure is subjected to unexpected extreme loads. However, development of these systems through traditional means is limited by their model-dependent nature. Recent advancements in adaptive learning methods, in particular reinforcement learning (RL) for real-time decision-making problems, along with rapid growth in high-performance computational resources, allow structural engineers to transform the classic model-based active control problem into a purely data-driven one. In this paper, we present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment. RL-Controller includes attributes and functionalities that model active structural control mechanisms in detail. We show that the proposed framework is easily trainable for a five-story benchmark building, achieving 65% reductions on average in inter-story drifts (ISD) when subjected to strong ground motions. In a comparative study with the LQG active control method, we demonstrate that the proposed model-free algorithm learns actuator forcing strategies that yield higher performance, e.g., 25% greater ISD reductions on average relative to LQG, without using prior information about the mechanical properties of the system.
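The abstract describes a Gym-style simulation environment in which an RL agent applies actuator forces to a multi-story building under ground excitation, with inter-story drift as the performance measure. The paper's actual RL-Controller implementation is not reproduced here; the sketch below is a minimal illustrative stand-in, assuming a lumped-mass shear-building model with uniform story mass and stiffness, stiffness-proportional damping, semi-implicit Euler time stepping, and a quadratic drift penalty as the reward. The `ShearBuildingEnv` class name and all parameter values are hypothetical.

```python
import numpy as np


class ShearBuildingEnv:
    """Minimal Gym-style environment for active control of a lumped-mass
    shear building (illustrative sketch, not the paper's RL-Controller)."""

    def __init__(self, n_stories=5, mass=1e3, stiffness=1e6,
                 damping_ratio=0.02, dt=0.005, max_force=1e4):
        self.n = n_stories
        self.dt = dt
        self.max_force = max_force
        # Lumped-mass matrix (assumed uniform story mass)
        self.M = mass * np.eye(self.n)
        # Tridiagonal shear-building stiffness matrix (uniform story stiffness)
        K = np.zeros((self.n, self.n))
        for i in range(self.n):
            K[i, i] = 2 * stiffness if i < self.n - 1 else stiffness
            if i > 0:
                K[i, i - 1] = K[i - 1, i] = -stiffness
        self.K = K
        # Stiffness-proportional damping calibrated to the first mode
        w1 = np.sqrt(np.linalg.eigvalsh(np.linalg.solve(self.M, K))[0])
        self.C = (2 * damping_ratio / w1) * K
        self.reset()

    def reset(self):
        self.x = np.zeros(self.n)  # story displacements
        self.v = np.zeros(self.n)  # story velocities
        return self._obs()

    def _obs(self):
        return np.concatenate([self.x, self.v])

    def step(self, action, ground_accel=0.0):
        """Advance one time step; `action` is the actuator force per story."""
        u = np.clip(action, -self.max_force, self.max_force)
        # Equation of motion: M a = -C v - K x - M * 1 * a_g + u
        rhs = (-self.C @ self.v - self.K @ self.x
               - self.M @ np.ones(self.n) * ground_accel + u)
        accel = np.linalg.solve(self.M, rhs)
        # Semi-implicit (symplectic) Euler integration
        self.v += self.dt * accel
        self.x += self.dt * self.v
        # Reward penalizes squared inter-story drifts
        drifts = np.diff(np.concatenate([[0.0], self.x]))
        reward = -np.sum(drifts ** 2)
        return self._obs(), reward, False, {}
```

Under these assumptions, a standard continuous-action RL algorithm such as PPO or SAC (both cited in the paper's bibliography) could be trained against `step()` with recorded ground-motion accelerograms supplied as `ground_accel`.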
