Intention Recognition and Object Recommendation System using Deep Auto-encoder Based Affordance Model

Intention recognition is an important task in human-agent interaction (HAI) because it allows a robot to respond appropriately to a human's intention. For a robot to understand the world in terms of its own actions, it needs adequate knowledge representations. Affordance is the concept used to represent the relation between an agent and its environment, and a robot can exploit this type of knowledge to infer implicit human intentions. In this paper, we propose a system based on action-object affordances, modeled with a deep structure, that recognizes the user's intention and recommends the objects related to that intention. The robot learns the network by observing the user's attention to specific objects; in the experiments, this attention is captured as gaze information obtained with a Tobii 1750 eye tracker. The experimental results show that the proposed system achieves successful recognition and recommendation performance.
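The abstract does not give the exact network configuration, so the sketch below only illustrates the general idea of a deep auto-encoder based affordance model: intention labels and gaze-derived object-attention vectors are reconstructed jointly, and at inference time the reconstructed object activations are read out as recommendation scores. The layer sizes, label set, and helper names (AffordanceAutoencoder, recommend_objects) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation): a deep auto-encoder
# that learns action-object affordance codes from gaze-derived attention vectors.
import torch
import torch.nn as nn

N_INTENTIONS = 4   # hypothetical intention label set (e.g. "drink", "read", ...)
N_OBJECTS = 10     # hypothetical number of objects the eye tracker can attend to

class AffordanceAutoencoder(nn.Module):
    def __init__(self, code_dim=8):
        super().__init__()
        in_dim = N_INTENTIONS + N_OBJECTS
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, code_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32), nn.ReLU(),
            nn.Linear(32, in_dim), nn.Sigmoid(),  # reconstruct intention + object attention
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def recommend_objects(model, intention_id, top_k=3):
    """Feed an intention with no object evidence; the reconstructed object
    activations act as affordance-based recommendation scores."""
    x = torch.zeros(1, N_INTENTIONS + N_OBJECTS)
    x[0, intention_id] = 1.0
    with torch.no_grad():
        recon = model(x)
    scores = recon[0, N_INTENTIONS:]              # object part of the reconstruction
    return torch.topk(scores, top_k).indices.tolist()

if __name__ == "__main__":
    model = AffordanceAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Toy data: each row = one-hot intention concatenated with gaze attention over objects.
    data = torch.rand(64, N_INTENTIONS + N_OBJECTS)

    for _ in range(100):                          # denoising-style reconstruction training
        noisy = data * (torch.rand_like(data) > 0.2).float()
        loss = loss_fn(model(noisy), data)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(recommend_objects(model, intention_id=0))
```

The same encoder-decoder structure could be swapped for a stacked, layer-wise pre-trained auto-encoder as in deep belief net training; the read-out step, where missing object evidence is filled in from an intention cue, is what turns the reconstruction model into a recommender.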
