Reinforcement learning for hierarchical and modular neural network in autonomous robot navigation

This work describes an autonomous navigation system based on a modular neural network. The environment is unknown, and initially the system lacks the ability to balance its two innate behaviors: target seeking and obstacle avoidance. As the robot experiences collisions, the system improves its navigation strategy and guides the robot to targets more efficiently. A reinforcement learning mechanism adjusts the parameters of the neural networks at target-capture and collision moments. Simulation experiments compare the proposed system against alternatives; only the proposed system reaches the targets when the environment presents a high-risk (dangerous) configuration, in which targets lie very close to obstacles.
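The abstract does not give the exact update rule, but the event-driven idea (adjust the balance between the two innate behaviors only at collision and target-capture moments) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the class name `ModularNavigator`, the single blending coefficient `avoidance_weight`, and the simple proportional update are hypothetical stand-ins, not the authors' actual hierarchical/modular architecture.

```python
import numpy as np

class ModularNavigator:
    """Hypothetical sketch: two innate behavior modules whose blending weight
    is adjusted by a reinforcement signal at collision / target-capture events."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # Mixing coefficient in [0, 1]: 0 = pure target seeking, 1 = pure avoidance.
        self.avoidance_weight = 0.5

    def target_seeking(self, target_direction):
        # Steer directly toward the target.
        return np.asarray(target_direction, dtype=float)

    def obstacle_avoidance(self, obstacle_direction):
        # Steer directly away from the nearest obstacle.
        return -np.asarray(obstacle_direction, dtype=float)

    def act(self, target_direction, obstacle_direction):
        # Blend the two behaviors with the learned coordination weight.
        w = self.avoidance_weight
        command = ((1.0 - w) * self.target_seeking(target_direction)
                   + w * self.obstacle_avoidance(obstacle_direction))
        norm = np.linalg.norm(command)
        return command / norm if norm > 0 else command

    def reinforce(self, event):
        # Parameters change only at the two reinforcement moments.
        if event == "collision":          # penalty: rely more on avoidance
            self.avoidance_weight += self.learning_rate * (1.0 - self.avoidance_weight)
        elif event == "target_captured":  # reward: rely more on target seeking
            self.avoidance_weight -= self.learning_rate * self.avoidance_weight


# Usage: hypothetical events showing how the behavior balance shifts with experience.
nav = ModularNavigator()
print(nav.act(target_direction=[1.0, 0.0], obstacle_direction=[0.7, 0.7]))
nav.reinforce("collision")
nav.reinforce("target_captured")
print(nav.avoidance_weight)
```

In this toy version, repeated collisions push the coefficient toward obstacle avoidance and successful captures push it back toward target seeking, mirroring the abstract's claim that the balance between the two conflicting behaviors is learned from collision and capture experiences.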
