Q-learning with generalisation: an architecture for real-world reinforcement learning in a mobile robot
Using real robots rather than simulations imposes time constraints so severe that only architectures whose learning is efficient in both elapsed time and number of trials can be used. Arguments are presented that multilayer perceptrons are inappropriate because new learning interferes too strongly with old learning. The structure of C.J.C.H. Watkins's Q-learning (1989), a discrete-state, discrete-time reinforcement learning scheme closely related to dynamic programming and capable of a connectionist interpretation, is shown to be suitable, and refinements are proposed that permit generalization and further protect learned information. A simple representation of the unlearned components of internal state, perception-action sequence (PAS) encoding, which describes a state in terms of the recent history of perceptions and actions, is proposed for navigating between landmarks in environments where landmarks are rare. A recently developed behavior-based mobile robot (FRANK) is described; it has a neurally based perception mechanism known to operate reliably in an unstructured human environment and an onboard computer that implements the modified Q algorithm and PAS encoding.
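The paper itself gives no code, but the two core ideas in the abstract are concrete enough to sketch: Watkins's one-step Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)], applied to a state formed from a short window of recent perception-action pairs (the PAS encoding). The following is a minimal tabular sketch under stated assumptions, not the authors' implementation; the parameter values and the action set (ALPHA, GAMMA, EPSILON, HISTORY, ACTIONS) are hypothetical, and the ε-greedy exploration is a common default that may differ from the paper's scheme.

```python
from collections import defaultdict, deque
import random

# Hypothetical parameters; the paper does not specify values.
ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration probability (assumed epsilon-greedy)
HISTORY = 3    # length of the perception-action window (PAS encoding)

ACTIONS = ["forward", "left", "right"]  # illustrative action set

Q = defaultdict(float)            # Q[(state, action)] -> estimated return
recent = deque(maxlen=HISTORY)    # recent (perception, action) pairs

def pas_state():
    """Encode the internal state as the tuple of the most recent
    perception-action pairs, in the spirit of the paper's PAS encoding."""
    return tuple(recent)

def choose_action(state):
    """Epsilon-greedy action selection (an assumption, not the paper's
    exploration scheme)."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One-step Q-learning update (Watkins, 1989):
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])
```

In use, each control step would read a perception, pick an action via `choose_action(pas_state())`, append the (perception, action) pair to `recent`, observe the reward, and call `q_update` with the PAS states before and after the step. The paper's refinements for generalization across states are not modeled here.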