Adaptive behavior navigation of a mobile robot

This paper describes a neural network model for the reactive behavioral navigation of a mobile robot. From the information received through its sensors, the robot elicits one of several behaviors (e.g., stop, avoid, stroll, wall following) through a competitive neural network, and thereby develops a control strategy that depends on sensor information and learning. Reinforcement learning improves navigation by adapting the eligibility of the behaviors and determining the robot's linear and angular velocities.
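The scheme described above can be sketched in miniature: a competitive (winner-take-all) layer selects one behavior from weighted sensor input, and a simple reinforcement rule adapts that behavior's eligibility weights. This is an illustrative assumption of the architecture, not the paper's actual model; all names, sensor counts, learning rates, and velocity values below are hypothetical.

```python
import numpy as np

# Hypothetical sketch: competitive behavior selection with
# reinforcement-adapted eligibility weights. Parameters are illustrative.
rng = np.random.default_rng(0)

BEHAVIORS = ["stop", "avoid", "stroll", "wall_follow"]
N_SENSORS = 8  # e.g., a ring of ultrasonic range readings in [0, 1]

# Eligibility weights: one row per behavior, one column per sensor.
W = rng.uniform(0.4, 0.6, size=(len(BEHAVIORS), N_SENSORS))

# Assumed per-behavior command velocities: (linear m/s, angular rad/s).
VELOCITIES = {
    "stop": (0.0, 0.0),
    "avoid": (0.1, 0.8),
    "stroll": (0.4, 0.0),
    "wall_follow": (0.3, 0.2),
}

def select_behavior(sensors):
    """Competitive layer: the behavior with the highest activation wins."""
    activation = W @ sensors
    return int(np.argmax(activation))

def reinforce(winner, sensors, reward, lr=0.1):
    """Strengthen (reward > 0) or weaken (reward < 0) the winner's
    eligibility for the sensor pattern that triggered it."""
    W[winner] += lr * reward * sensors
    np.clip(W, 0.0, 1.0, out=W)  # keep eligibilities bounded

# One simulated step: large readings ahead suggest a nearby obstacle.
sensors = np.array([0.9, 0.8, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1])
winner = select_behavior(sensors)
v_lin, v_ang = VELOCITIES[BEHAVIORS[winner]]
reward = 1.0  # in practice, derived from progress / collision feedback
reinforce(winner, sensors, reward)
print(BEHAVIORS[winner], v_lin, v_ang)
```

Over repeated trials, rewarded behaviors accumulate eligibility for the sensor contexts in which they succeed, so the winner-take-all competition gradually favors the behavior appropriate to each situation.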
