Probing of human implicit intent based on eye movement and pupillary analysis for augmented cognition

The ultimate purpose of augmented cognition is to enhance intrinsically limited human cognitive abilities. To that end, we developed a human augmented cognition system that offers appropriate information or services by actively responding to the user's intention. This article describes a framework for probing human implicit intentions for the purpose of augmented cognition. The type of user intention, either task‐free or task‐oriented human implicit intention, can be predicted from fixation count, fixation length, and pupil size variation derived from the eye response. These features are further used to detect the transition point between task‐free and task‐oriented implicit intention. Maximum a posteriori estimation in a naïve Bayes classification model is then used to select relevant query keywords for searching and retrieving specific information from a personalized knowledge database. The experimental results show that the proposed human intention recognition and probing models are suitable for achieving the goal of augmented cognition. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 114–126, 2013
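The abstract's core decision step, choosing the most probable intention class from eye-response features via a maximum a posteriori rule in a naïve Bayes model, can be illustrated with a minimal sketch. The Gaussian feature statistics, class priors, and function names below are invented for illustration and are not taken from the paper:

```python
import math

# Hypothetical illustration of a MAP decision in a naive Bayes model:
# classify an eye-movement sample as task-free vs task-oriented intent
# from three features (fixation count, fixation length in ms, pupil
# size variation). All parameter values below are assumed, not the
# paper's fitted statistics.

# Per-class prior and per-feature Gaussian (mean, std) parameters.
CLASSES = {
    "task_free":     {"prior": 0.5, "params": [(5.0, 2.0), (180.0, 40.0), (0.02, 0.01)]},
    "task_oriented": {"prior": 0.5, "params": [(12.0, 3.0), (320.0, 60.0), (0.08, 0.02)]},
}

def log_gaussian(x, mu, sigma):
    """Log density of a univariate Gaussian."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def map_intent(features):
    """Return the class maximizing log P(c) + sum_i log P(x_i | c)."""
    def score(cls):
        spec = CLASSES[cls]
        return math.log(spec["prior"]) + sum(
            log_gaussian(x, mu, sigma)
            for x, (mu, sigma) in zip(features, spec["params"])
        )
    return max(CLASSES, key=score)

# Many fixations, long fixation length, and large pupil variation point
# to task-oriented intent under these assumed statistics.
print(map_intent([11, 300.0, 0.07]))   # -> task_oriented
```

The same MAP scoring extends naturally from two intention classes to a set of candidate query keywords, which is how the paper's keyword-selection step is framed.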