How Deep Is Deep Enough for a Deep Belief Network to Approximate a Model Predictive Control Law?

The deep belief network (DBN) is an effective deep learning model that hierarchically transforms input data through stacked feature detectors. As a predictive model, DBN has shown promise in model predictive control (MPC). However, its successful application depends critically on a suitable structure size (the numbers of hidden layers and neurons), which is challenging to determine. In this work, we present a theoretical bound on the minimum structure size required to accurately approximate a desired MPC law; we call the resulting approach DBN-MPC. First, under the Markov assumption, the MPC problem for the controlled system is reformulated as a quadratic program that depends only on the current system state. Second, a universal approximation theorem is proposed that bounds the minimum structure size of the DBN from the perspective of piecewise affine function analysis. Third, partial least squares regression is used to fine-tune the DBN, alleviating both the local-minimum problem and the time-consuming training process. Finally, we demonstrate the effectiveness of the proposed method through two classical experiments: 1) tracking control of a benchmark dynamical system and 2) temperature control of a practical second-order continuous stirred tank reactor (CSTR) system. The experimental results give a general answer to the question of how deep is deep enough for a DBN to approximate an MPC law.
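
For context, the quadratic-program reformulation referenced above follows the standard explicit-MPC setting: for a linear system under the Markov assumption, the finite-horizon MPC problem becomes a quadratic program parameterized only by the current state, and its optimizer is piecewise affine in that state. A sketch of the standard formulation (the notation here is illustrative background, not taken from the paper):

\[
\min_{U}\; \tfrac{1}{2}\, U^{\top} H U + x_k^{\top} F U
\quad \text{s.t.}\quad G U \le w + S x_k,
\]

where \(U = [u_k^{\top}, \ldots, u_{k+N-1}^{\top}]^{\top}\) stacks the control moves over horizon \(N\). The explicit solution takes the form \(u^{*}(x_k) = K_i x_k + c_i\) for \(x_k \in \mathcal{R}_i\), a piecewise affine law over polyhedral regions \(\mathcal{R}_i\), which is exactly the class of functions the DBN must be deep and wide enough to approximate.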
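The pipeline summarized in the abstract (greedy layer-wise DBN pretraining followed by PLS fine-tuning of the output mapping) can be illustrated with off-the-shelf components. The following is a minimal sketch under stated assumptions, not the authors' implementation: the "MPC law" is a hypothetical saturated linear feedback (itself piecewise affine), stacked BernoulliRBM layers stand in for the DBN feature detectors, and scikit-learn's PLSRegression plays the fine-tuning role; all sizes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for an explicit MPC law (hypothetical example):
# a saturated linear feedback u = clip(-K x, -1, 1) is piecewise affine in x,
# mimicking the structure of an explicit MPC solution.
rng = np.random.default_rng(0)
K = np.array([[1.2, 0.8]])
X = rng.uniform(-2.0, 2.0, size=(2000, 2))   # sampled system states
U = np.clip(X @ -K.T, -1.0, 1.0)             # "MPC" control targets

# Greedy layer-wise pretraining of stacked RBMs (DBN feature detectors).
scaler = MinMaxScaler()                      # RBMs expect inputs in [0, 1]
H = scaler.fit_transform(X)
rbms = [
    BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30, random_state=0),
    BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30, random_state=1),
]
for rbm in rbms:
    H = rbm.fit_transform(H)                 # each layer feeds the next

# PLS regression as the output fine-tuning step.
pls = PLSRegression(n_components=8)
pls.fit(H, U)

# Evaluate the learned surrogate law on held-out states.
X_test = rng.uniform(-2.0, 2.0, size=(200, 2))
H_test = scaler.transform(X_test)
for rbm in rbms:
    H_test = rbm.transform(H_test)
U_hat = pls.predict(H_test)
print("mean |u_hat - u|:", np.abs(U_hat - np.clip(X_test @ -K.T, -1, 1)).mean())
```

Fitting the output mapping with PLS rather than backpropagating through the whole stack reflects the motivation the abstract gives for this step: it sidesteps gradient-based fine-tuning, and with it the local-minimum and training-time problems.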