Combining Planning with Gaze for Online Human Intention Recognition

Intention recognition is the process of using behavioural cues to infer an agent's goals or future behaviour. People draw on many such cues to infer others' intentions, including deliberative actions, facial expressions, eye gaze, and gestures. In artificial intelligence, two prominent approaches to intention recognition are gaze-based and model-based. Approaches in the former class use gaze data to infer a person's intention from which parts of a space they look at most often. Approaches in the latter class use models of possible future behaviour, rating an intention as more likely if it is a better 'fit' to the observed actions. In this paper, we propose a novel approach that combines gaze-based and model-based methods for online human intention recognition. Gaze data is used to build a probability distribution over a set of possible intentions, which is then used as a prior in a model-based intention recognition algorithm. In human-behavioural experiments (n = 20) involving a multi-player board game, we found that adding gaze-based priors to model-based intention recognition determined intentions more accurately (p
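
To make the combination concrete, the sketch below shows one plausible reading of the pipeline described in the abstract: a gaze-based prior built from fixation counts over candidate intentions, multiplied by a cost-based plan-recognition likelihood in the spirit of Ramírez and Geffner's probabilistic plan recognition, then normalised into a posterior. This is a minimal illustration under assumed inputs; the function names (gaze_prior, cost_likelihood, posterior), the Laplace smoothing, and the exponential cost-difference likelihood are our assumptions, not details taken from the paper.

```python
import math

def gaze_prior(fixation_counts, alpha=1.0):
    """Build a smoothed probability distribution over candidate
    intentions from gaze fixation counts. Laplace smoothing with
    pseudo-count alpha keeps nonzero mass on unfixated intentions."""
    total = sum(fixation_counts.values()) + alpha * len(fixation_counts)
    return {g: (c + alpha) / total for g, c in fixation_counts.items()}

def cost_likelihood(cost_with_obs, cost_optimal, beta=1.0):
    """Cost-based likelihood in the spirit of Ramírez & Geffner (2010):
    a goal scores higher when the cheapest plan consistent with the
    observed actions is close to that goal's cheapest plan overall."""
    return math.exp(-beta * (cost_with_obs - cost_optimal))

def posterior(fixation_counts, plan_costs):
    """Combine the gaze-derived prior with model-based likelihoods and
    normalise. plan_costs maps each goal to a pair: (cost of the
    cheapest plan consistent with the observations, cost of the
    cheapest plan overall)."""
    prior = gaze_prior(fixation_counts)
    unnorm = {g: prior[g] * cost_likelihood(c_obs, c_opt)
              for g, (c_obs, c_opt) in plan_costs.items()}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Hypothetical example: three candidate goals in a board game.
fixations = {"goal_A": 14, "goal_B": 3, "goal_C": 1}
costs = {"goal_A": (10, 10), "goal_B": (12, 10), "goal_C": (15, 10)}
print(posterior(fixations, costs))
```

Under these assumptions, gaze evidence and action evidence reinforce each other: goal_A dominates both because it is fixated most often and because the observed actions lie on its cheapest plan.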
