Controlling gaze with an embodied interactive control architecture

Human-Robot Interaction (HRI) is a growing field of research that aims to develop robots that are easier to operate, more engaging, and more entertaining. Many researchers consider natural, human-like behavior an important goal of HRI. Research on human-human communication has shown that gaze control is one of the major interactive behaviors humans use in close encounters. Human-like gaze control is therefore one of the key behaviors a robot needs in order to interact naturally with human partners. Developing such gaze control so that it integrates easily with the robot's other behaviors requires a flexible robotic architecture. Most available robotic architectures were designed with autonomous robots in mind; although robots developed for HRI are usually autonomous, their autonomy is combined with interactivity, which places additional demands on the architectures that support them. This paper reports the development and evaluation of two gaze controllers built on EICA (the Embodied Interactive Control Architecture), a new cross-platform robotic architecture for HRI applications designed to meet those challenges, with emphasis on how low-level attention focusing and action integration are implemented. Evaluation of the gaze controllers revealed human-like behavior in terms of mutual attention, gaze toward the partner, and mutual gaze. The paper also reports a novel Floating Point Genetic Algorithm (FPGA) for learning the parameters of the gaze controller's various processes.
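
The abstract does not detail EICA's integration mechanism, but the general idea of fusing action proposals from parallel low-level processes can be illustrated with an activation-weighted blend. The sketch below is a minimal illustration under generic assumptions: the ActionProposal fields, the pan/tilt representation, and the weighting scheme are hypothetical, not EICA's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ActionProposal:
    """A gaze target proposed by one low-level process (hypothetical fields)."""
    pan: float         # desired head pan angle in radians
    tilt: float        # desired head tilt angle in radians
    activation: float  # how strongly the process wants to act, in [0, 1]

def integrate_actions(proposals):
    """Blend competing proposals into one gaze command by
    activation-weighted averaging (one common integration scheme)."""
    total = sum(p.activation for p in proposals)
    if total == 0.0:
        return None  # no process wants control; keep the current gaze
    pan = sum(p.pan * p.activation for p in proposals) / total
    tilt = sum(p.tilt * p.activation for p in proposals) / total
    return pan, tilt

# Example: a "look at the partner's face" process competing with a
# "follow the partner's pointing gesture" process.
command = integrate_actions([
    ActionProposal(pan=0.10, tilt=-0.05, activation=0.8),
    ActionProposal(pan=0.60, tilt=0.15, activation=0.3),
])
```

Weighted blending is only one possible scheme; winner-take-all arbitration or priority-based suppression would be equally valid ways to realize action integration in such an architecture.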
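A floating-point (real-coded) genetic algorithm represents each candidate parameter set directly as a vector of real numbers rather than a bit string, which suits tuning continuous controller parameters. The following minimal sketch uses tournament selection, arithmetic crossover, and Gaussian mutation; these operator choices and the names (fpga, bounds, mutation_sigma) are generic assumptions for illustration, not the specific variant reported in the paper.

```python
import random

def fpga(fitness, dim, bounds, pop_size=30, generations=100,
         mutation_sigma=0.1, crossover_rate=0.9):
    """Minimal real-coded ("floating point") GA sketch that maximizes
    fitness over real-valued parameter vectors of length dim."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]

        def tournament():
            a, b = random.sample(scored, 2)
            return (a if a[0] >= b[0] else b)[1]

        next_pop = [max(scored)[1]]  # elitism: carry over the best individual
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate:
                w = random.random()  # arithmetic (blend) crossover
                child = [w * x + (1 - w) * y for x, y in zip(p1, p2)]
            else:
                child = list(p1)
            child = [min(hi, max(lo, g + random.gauss(0, mutation_sigma)))
                     for g in child]  # Gaussian mutation, clipped to bounds
            next_pop.append(child)
        pop = next_pop
    return max((fitness(ind), ind) for ind in pop)[1]

# Example: tune 5 controller parameters in [0, 1]. A real caller would
# supply a fitness function scoring how closely the resulting gaze
# behavior matches recorded human data; this toy objective just rewards
# parameters near 0.5.
best = fpga(fitness=lambda params: -sum((p - 0.5) ** 2 for p in params),
            dim=5, bounds=(0.0, 1.0))
```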
