A self-learning rule base for command following in dynamical systems
In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base is used to generate a feedback control that improves the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging: the controlled systems exhibit more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and can be modified or augmented by human experts. Owing to the overlapping storage scheme of SAM, the stored rules resemble fuzzy rules.
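The abstract does not spell out the algorithm, so the following is only a hedged sketch of the idea it describes: an associative memory with explicit, overlapping sample storage, trained with a dynamic-programming-style backup, used to derive a feedback rule for an unstable plant. All specifics here (the `SAM` class, the scalar plant `x' = 1.1x + u`, the action set, and the cost `|x'|`) are illustrative assumptions, not the paper's actual system.

```python
import numpy as np

class SAM:
    """Illustrative SAM-like associative memory (assumption, not the paper's
    exact scheme): stores training samples explicitly and answers queries by
    distance-weighted averaging over nearby samples, so stored entries overlap
    like fuzzy rules."""
    def __init__(self, width=0.3):
        self.keys = []        # stored inputs (states)
        self.vals = []        # stored outputs (cost-to-go estimates)
        self.width = width    # kernel width controlling rule overlap

    def store(self, x, y):
        self.keys.append(np.asarray(x, dtype=float))
        self.vals.append(float(y))

    def recall(self, x):
        if not self.keys:
            return 0.0
        x = np.asarray(x, dtype=float)
        d2 = np.array([np.sum((x - k) ** 2) for k in self.keys])
        w = np.exp(-d2 / (2.0 * self.width ** 2))
        return float(np.dot(w, self.vals) / (np.sum(w) + 1e-12))

# Toy unstable scalar plant (assumed for illustration): open-loop pole 1.1.
def plant(x, u):
    return 1.1 * x + u

actions = [-1.0, -0.5, 0.0, 0.5, 1.0]   # discrete control candidates
gamma = 0.9                              # discount factor
V = SAM(width=0.3)                       # approximate cost-to-go memory

# Dynamic-programming-style backups at randomly sampled states:
# store the cost of the best one-step action, bootstrapping from the memory.
rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-2.0, 2.0)
    q = [abs(plant(x, u)) + gamma * V.recall([plant(x, u)]) for u in actions]
    V.store([x], min(q))

# The learned "rule base": greedy control with respect to the stored memory.
def control(x):
    return min(actions,
               key=lambda u: abs(plant(x, u)) + gamma * V.recall([plant(x, u)]))
```

Because every backup is stored as an explicit sample, the learned rules can be inspected, edited, or augmented by hand, which is the property the abstract emphasizes; the Gaussian-weighted recall is what makes neighboring samples overlap like fuzzy rules.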