Towards learning through robotic interaction alone: the joint guided search task

This work proposes a biologically inspired approach centered on attention systems that can inhibit or constrain what is relevant at any given moment. We propose a radically new approach to making progress on human-robot joint attention, which we call the "joint guided search task". Guided visual search is the activity of the eye as it saccades from position to position, recognizing objects at each fixation location until the target object is found. Our research focuses on the exchange of nonverbal behavior to change the fixation location while also performing object recognition. Our main, admittedly ambitious, goal is to share attention by probing both the synthetic foreground maps of the robotic agent (i.e., what the robot is currently considering) and the biological attention system of the human.
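To make the guided-search loop concrete, here is a minimal sketch of one fixation-by-fixation cycle: saccade to the most active location in a foreground (saliency) map, attempt recognition there, and apply inhibition of return so the next saccade moves elsewhere. The function names (guided_search, is_target), the map representation, and the disc-shaped inhibition region are illustrative assumptions, not the implementation described above.

```python
import numpy as np

def guided_search(image, foreground_map, is_target,
                  fixation_radius=10, max_fixations=50):
    """Fixation-by-fixation guided search over a foreground (saliency) map.

    Saccades to the most active map location, attempts recognition there,
    and inhibits a disc around each visited location (inhibition of return)
    until the target is found or the fixation budget runs out.
    """
    fmap = foreground_map.astype(float)  # working copy we can suppress
    fixations = []
    for _ in range(max_fixations):
        # Next fixation: the currently most active map location.
        y, x = np.unravel_index(np.argmax(fmap), fmap.shape)
        fixations.append((int(y), int(x)))
        # is_target stands in for any object recognizer applied to the
        # patch around the fixation point.
        if is_target(image, int(y), int(x)):
            return fixations, True
        # Inhibition of return: suppress a disc around this fixation
        # so the next saccade is drawn elsewhere.
        yy, xx = np.ogrid[:fmap.shape[0], :fmap.shape[1]]
        fmap[(yy - y) ** 2 + (xx - x) ** 2 <= fixation_radius ** 2] = -np.inf
    return fixations, False


# Toy usage: brightness doubles as the foreground map; the "target"
# is simply the brightest pixel in a random scene.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
path, found = guided_search(scene, scene,
                            is_target=lambda img, y, x: img[y, x] == img.max())
```

Inhibition of return is the standard device in guided-search-style models for keeping the loop from revisiting the same peak; in the joint setting sketched above, a human partner's nonverbal cues could plausibly be injected by boosting the foreground map at the indicated region before each argmax, though that extension is not shown here.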
