Computer Vision for Learning to Interact Socially with Humans

Computer vision is essential for developing a social robotic system capable of interacting with humans: it is responsible for extracting and representing the information around the robot. In addition, a learning mechanism for correctly selecting an action to execute in the environment, a pro-active mechanism for engaging in interaction, and a voice mechanism are indispensable for building a social robot. Together, these mechanisms allow a robot to emulate human behaviors such as shared attention. This chapter presents a robotic architecture composed of these mechanisms that enables interaction between a robotic head and a caregiver through the learning of shared attention and the identification of objects.

DOI: 10.4018/978-1-4666-3994-2.ch059
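To make the described composition of mechanisms more concrete, the following is a minimal, hypothetical sketch of how a vision module, an action-selection learner, and a shared-attention reward signal might be wired together in a perceive-act loop. It is not the chapter's implementation: all class names (VisionModule, LearningModule), actions, and reward values are illustrative assumptions, and the learner is a plain tabular Q-learning agent standing in for whatever learning mechanism the architecture actually uses.

```python
import random


class VisionModule:
    """Extracts a symbolic description of the scene.

    A real system would run face detection, gaze estimation, and object
    recognition on camera frames; here the percept is a hard-coded,
    hypothetical dictionary standing in for that output.
    """

    def perceive(self, frame):
        return {"caregiver_present": True,
                "gaze_target": "ball",
                "objects": ["ball", "cup"]}


class LearningModule:
    """Tabular Q-learning over (state, action) pairs (illustrative only)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {}
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_action(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)


def shared_attention_step(vision, learner, frame):
    """One interaction cycle: perceive, act, receive reward from the caregiver."""
    percept = vision.perceive(frame)
    state = (percept["caregiver_present"], percept["gaze_target"])
    action = learner.select_action(state)
    # Assumed reward scheme: positive when the robot attends to the same
    # object as the caregiver, slightly negative otherwise.
    reward = 1.0 if action == "look_at_" + percept["gaze_target"] else -0.1
    learner.update(state, action, reward, state)
    return action, reward


if __name__ == "__main__":
    vision = VisionModule()
    learner = LearningModule(actions=["look_at_ball", "look_at_cup", "look_at_caregiver"])
    for _ in range(100):
        shared_attention_step(vision, learner, frame=None)
```

Under this sketch the learner quickly converges to following the caregiver's gaze target; in the actual architecture the pro-active and voice mechanisms would also feed into and react to this loop.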
