Previous artificial neural network models for control ignore existing knowledge of a physical system's behavior and train the network from scratch. The learning process is usually long, and even after learning is complete, the resulting network cannot be easily explained. Approximate reasoning-based controllers, on the other hand, provide a clear understanding of the control strategy but cannot learn from experience. In this paper, we introduce a new method for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. Unlike previous models, ours can start from the control knowledge of an experienced operator and fine-tune it through learning. We discuss space domains suited to the new model, such as rendezvous and docking, camera tracking, and tethered systems control.
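The general idea of starting from operator-supplied rules and refining them with a scalar reinforcement signal can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's architecture: it uses a one-input Sugeno-style fuzzy controller whose rule consequents (the only tunable parameters here) are nudged in proportion to each rule's firing strength and the reinforcement received.

```python
# Hypothetical sketch (not the paper's implementation): a fuzzy controller
# initialized from expert rules and fine-tuned by a reinforcement signal.

def membership(x, center, width):
    """Triangular membership degree of x in a fuzzy set."""
    return max(0.0, 1.0 - abs(x - center) / width)

class FuzzyController:
    def __init__(self, centers, consequents, width):
        self.centers = centers                 # antecedent set centers (e.g. error levels)
        self.consequents = list(consequents)   # initial outputs from an expert operator
        self.width = width

    def act(self, x):
        """Weighted-average (Sugeno-style) defuzzification over all rules."""
        w = [membership(x, c, self.width) for c in self.centers]
        total = sum(w)
        if total == 0.0:
            return 0.0
        return sum(wi * ci for wi, ci in zip(w, self.consequents)) / total

    def reinforce(self, x, reinforcement, lr=0.1):
        """Shift each rule's consequent in proportion to its firing strength
        and the scalar reinforcement (positive means the action was good)."""
        w = [membership(x, c, self.width) for c in self.centers]
        total = sum(w) or 1.0
        for i, wi in enumerate(w):
            self.consequents[i] += lr * reinforcement * wi / total

# Start from a crude expert rule base ("negative error -> negative action",
# etc.) and refine it with feedback instead of learning from scratch.
ctrl = FuzzyController(centers=[-1.0, 0.0, 1.0],
                       consequents=[-1.0, 0.0, 1.0], width=1.0)
ctrl.reinforce(1.0, reinforcement=0.5)  # strengthen the rule that fired at x = 1
```

Because only rules that actually fired are adjusted, the expert's knowledge in untouched regions of the input space is preserved, which is the point of fine-tuning rather than retraining.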