How a mobile robot selects landmarks to make a decision based on an information criterion

Most current mobile robots are designed to determine their actions according to their positions. Before making a decision, they need to localize themselves, so their observation strategies serve mainly self-localization. However, observation strategies should serve not only self-localization but also decision making. We propose an observation strategy that enables a mobile robot equipped with a camera of limited viewing angle to make decisions without self-localization. The robot makes decisions based on a decision tree and on prediction trees of observations, both constructed from its experiences. The trees are built according to an information criterion for the action decision, not for self-localization or state estimation. Experimental results with a four-legged robot are shown and discussed.
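
The abstract does not spell out the tree construction, but the core idea of selecting the landmark observation that most reduces uncertainty about the action can be illustrated with a C4.5-style information gain over experience data. The following Python sketch is illustrative only: the function names (`information_gain`, `select_landmark`), the discrete observation outcomes, and the toy landmarks are assumptions, not details from the paper.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of action labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(examples, landmark):
    """Expected reduction in action entropy from observing `landmark`.

    `examples` is a list of (observations, action) pairs, where
    `observations` maps a landmark name to a discrete outcome
    (e.g. 'left', 'center', 'right', 'not-visible').
    """
    actions = [action for _, action in examples]
    base = entropy(actions)
    # Partition the experiences by the outcome of observing this landmark.
    partitions = {}
    for obs, action in examples:
        partitions.setdefault(obs[landmark], []).append(action)
    remainder = sum(len(part) / len(examples) * entropy(part)
                    for part in partitions.values())
    return base - remainder

def select_landmark(examples, landmarks):
    """Pick the landmark whose observation best discriminates actions."""
    return max(landmarks, key=lambda lm: information_gain(examples, lm))

# Toy experience set (hypothetical): where each landmark appeared in the
# camera image versus the action the robot eventually took.
experiences = [
    ({'goal': 'left',   'ball': 'center'}, 'turn_left'),
    ({'goal': 'right',  'ball': 'center'}, 'turn_right'),
    ({'goal': 'center', 'ball': 'left'},   'go_forward'),
    ({'goal': 'center', 'ball': 'right'},  'go_forward'),
]
print(select_landmark(experiences, ['goal', 'ball']))  # -> 'goal'
```

In this toy data the goal's position determines the action exactly (gain 1.5 bits) while the ball's position does not (gain 1.0 bits), so the robot would observe the goal first; note the criterion scores landmarks by how well they discriminate actions, not by how well they pin down the robot's pose.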
