A Top-Down and Bottom-Up Visual Attention Model for Humanoid Object Approaching and Obstacle Avoidance

Most research on humanoid walking tasks has relied on a global representation of the scene, frequently obtained from external sensors. This is detrimental to the autonomy and reactivity of the agent in unknown or changing scenarios. Ego-centric localization has been less explored, and works using on-board acquisition have mostly dealt with tasks in controlled scenarios where the path to the object is clear of obstacles. In this work, a behavior-based control scheme is proposed so that the NAO robot can approach an object and position itself relative to a given face of it while avoiding obstacles. The solution relies on top-down (color-based) and bottom-up (optic-flow-based) visual features, together with proprioceptive information registered on-board. The model is decentralized and exploits behavior that emerges from the independent contributions of a walking task and a look-at task. An embodied visual encoding approach is proposed to support the arbitration between competing behavioral modes.
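To make the abstract's architecture concrete, the following minimal Python sketch shows one way a color-driven approach behavior and an optic-flow-driven avoidance behavior could be fused by activity-weighted arbitration. This is not the paper's implementation: the behavior names, gains, activity normalizations, and HSV target range are illustrative assumptions; only the OpenCV calls (`cv2.inRange`, `cv2.moments`, `cv2.calcOpticalFlowFarneback`) are real API.

```python
# Sketch of activity-weighted arbitration between a top-down (color-based)
# approach behavior and a bottom-up (optic-flow-based) avoidance behavior.
# All gains and thresholds are assumed values for illustration only.
import cv2
import numpy as np

# Assumed HSV range for the target face's color; tune for the actual object.
TARGET_HSV_LO = np.array([100, 120, 60])
TARGET_HSV_HI = np.array([130, 255, 255])

def approach_command(frame_bgr):
    """Top-down cue: steer toward the centroid of the target color blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, TARGET_HSV_LO, TARGET_HSV_HI)
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:                      # target not visible: no vote
        return 0.0, 0.0
    cx = m["m10"] / m["m00"]                 # blob centroid (x, in pixels)
    err = (cx / mask.shape[1]) - 0.5         # horizontal offset in [-0.5, 0.5]
    activity = min(1.0, m["m00"] / (255.0 * mask.size))  # crude saliency
    return -2.0 * err, activity              # (turn rate, behavior activity)

def avoid_command(prev_gray, gray):
    """Bottom-up cue: turn away from the image half with larger flow
    magnitude (large expanding flow suggests a nearby obstacle)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)       # per-pixel flow magnitude
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    activity = min(1.0, max(left, right) / 10.0)  # assumed normalization
    return float(np.sign(left - right)), activity # steer away from high flow

def arbitrate(cmd_approach, act_approach, cmd_avoid, act_avoid):
    """Activity-weighted fusion: a salient avoidance cue suppresses the
    approach behavior, subsumption-style."""
    w_avoid = act_avoid
    w_approach = act_approach * (1.0 - w_avoid)
    total = w_approach + w_avoid
    if total < 1e-6:
        return 0.0
    return (w_approach * cmd_approach + w_avoid * cmd_avoid) / total
```

The suppression term reflects the intuition that a salient obstacle cue should override goal seeking, while the weighted blend keeps the decentralized, emergent-behavior flavor of the model: neither behavior knows about the other, and the overall walk results from their independent votes.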
