Traversability on a simple humanoid: What did I just trip over?

The notion of affordance has attracted the attention of roboticists in recent years. We previously used this concept to enable a mobile robot platform to learn and perceive traversability. In this paper, we show how a simple humanoid robot equipped with a time-of-flight ultrasonic sensor can learn the traversability affordance. We further demonstrate that the robot can account for how sensory history affects this affordance by merging previously sensed data with the current reading via a sliding data window that concatenates the recent history of sensor activity. This sliding-window approach improves the performance of the system in cases where the object is invisible at the time of collision. In several experiments, the robot generalized what it had already learned, performing the move-forward behavior robustly in cluttered environments with novel objects despite noisy range measurements.
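The sliding data window described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `SlidingRangeWindow`, the window length, and the sensor count are all assumptions made for the example.

```python
from collections import deque

class SlidingRangeWindow:
    """Hypothetical sketch of a sliding window that concatenates the
    last few range readings into one feature vector, so a learner can
    use recent sensor history even when the obstacle is no longer
    visible at the moment of collision."""

    def __init__(self, window=3, n_sensors=2):
        # Pre-fill with "max range" readings so early vectors are well-formed.
        self.window = deque(
            [[float("inf")] * n_sensors for _ in range(window)],
            maxlen=window,
        )

    def push(self, reading):
        # Newest reading evicts the oldest once the window is full.
        self.window.append(list(reading))

    def feature_vector(self):
        # Flatten oldest-to-newest into a single window * n_sensors vector.
        return [v for reading in self.window for v in reading]

w = SlidingRangeWindow(window=3, n_sensors=2)
w.push([1.2, 0.9])
w.push([0.8, 0.7])
print(w.feature_vector())  # [inf, inf, 1.2, 0.9, 0.8, 0.7]
```

At each control step the concatenated vector, rather than the single latest reading, would be fed to the affordance classifier, letting it react to objects sensed a few steps earlier.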
