Learning to understand tasks for mobile robots

We propose a way to represent the environment by storing observations taken in that environment together with their task-related 'values'. This representation allows robots to be taught by human instructors through rewards and punishments. We show that the robot is able to learn to execute different tasks, and that the results of training can be interpreted to gain insight into the environment in which the robot has to operate. So instead of first modeling the environment and then using that model to execute the task, we start by learning to execute the task and use the result to obtain knowledge about the environment.
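As a rough illustration of this idea, the sketch below keeps a memory of observation vectors paired with learned task values and adjusts those values from instructor rewards and punishments. The nearest-neighbor lookup, the TD-style update rule, and all class and parameter names are assumptions made for illustration, not the method from the paper.

```python
# Minimal sketch of an observation-value memory, assuming observations
# arrive as numeric feature vectors. The matching rule and the TD(0)-style
# update are illustrative choices, not the paper's actual algorithm.
import numpy as np

class ObservationValueMemory:
    def __init__(self, alpha=0.5, gamma=0.9, match_threshold=0.1):
        self.alpha = alpha                      # learning rate
        self.gamma = gamma                      # discount factor
        self.match_threshold = match_threshold  # max distance to reuse a stored observation
        self.observations = []                  # stored observation vectors
        self.values = []                        # task-related value of each observation

    def _nearest(self, obs):
        """Return (index, distance) of the stored observation closest to obs."""
        if not self.observations:
            return None, np.inf
        dists = [np.linalg.norm(obs - o) for o in self.observations]
        i = int(np.argmin(dists))
        return i, dists[i]

    def value(self, obs):
        """Estimate the task value of obs from the closest stored observation."""
        i, d = self._nearest(obs)
        return self.values[i] if d < self.match_threshold else 0.0

    def update(self, prev_obs, reward, next_obs):
        """Adjust the stored value of prev_obs from an instructor reward or punishment."""
        i, d = self._nearest(prev_obs)
        if d >= self.match_threshold:           # unseen observation: store it
            self.observations.append(np.asarray(prev_obs, dtype=float))
            self.values.append(0.0)
            i = len(self.values) - 1
        target = reward + self.gamma * self.value(next_obs)
        self.values[i] += self.alpha * (target - self.values[i])
```

Under this reading, interpreting the trained representation amounts to inspecting which stored observations carry high value, which links task performance back to the structure of the environment.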
