A Visual Attentive Model for Discovering Patterns in Eye-Tracking Data—A Proposal in Cultural Heritage

In the Cultural Heritage (CH) context, art galleries and museums employ technological devices to enhance and personalise the museum visit experience. The most challenging aspect, however, is determining what the visitor is interested in. In this work, a novel Visual Attentive Model (VAM) is proposed that is learned from eye-tracking data. In particular, eye-tracking data were collected from adults and children observing five paintings with similar characteristics. The images, selected by CH experts, are the three “Ideal Cities” (Urbino, Baltimore and Berlin), the Inlaid Chest in the National Gallery of the Marche, and the wooden panel with a view of the Marche in the “Studiolo del Duca”. Experts recognised these pictures as sharing analogous features, so they provide coherent visual stimuli. The proposed method combines a new coordinate representation of eye sequences, obtained using Geometric Algebra, with a deep learning model for the automated recognition (identification, differentiation, or authentication) of individuals based on the attention focus of their distinctive eye-movement patterns. Experiments comparing five Deep Convolutional Neural Networks (DCNNs) yield high accuracy (above 80%), demonstrating the effectiveness and suitability of the proposed approach for identifying adult and child museum visitors.
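As a rough illustration of such a pipeline, the sketch below (a minimal example, not the authors' implementation) rasterises a normalised (x, y, duration) fixation sequence into an image-like tensor and fine-tunes a pretrained ResNet-18 for the two-class adult/child task. The per-channel encoding in `gaze_to_image` is an illustrative stand-in for the paper's Geometric Algebra coordinate representation, whose details are not given in the abstract, and the data in the example are synthetic.

```python
# Minimal sketch (not the authors' code): encode a gaze sequence as an
# image-like tensor and fine-tune a pretrained DCNN for adult-vs-child
# classification. The channel encoding below is an illustrative placeholder
# for the paper's Geometric Algebra coordinate representation.

import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def gaze_to_image(fixations, size=224):
    """Rasterise a sequence of (x, y, duration) fixations, each normalised
    to [0, 1], into a 3 x size x size tensor. Channels encode occurrence,
    temporal order, and dwell time (an illustrative choice)."""
    img = np.zeros((3, size, size), dtype=np.float32)
    n = len(fixations)
    for t, (x, y, dur) in enumerate(fixations):
        col = min(int(x * (size - 1)), size - 1)
        row = min(int(y * (size - 1)), size - 1)
        img[0, row, col] = 1.0                  # a fixation occurred here
        img[1, row, col] = t / max(n - 1, 1)    # position in the sequence
        img[2, row, col] = min(dur, 1.0)        # normalised dwell time
    return torch.from_numpy(img)

# Fine-tune an ImageNet-pretrained ResNet-18 as a two-class classifier
# (downloads the pretrained weights on first run).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a synthetic batch of 8 sequences,
# each with 50 random (x, y, duration) fixations.
batch = torch.stack([gaze_to_image(np.random.rand(50, 3)) for _ in range(8)])
labels = torch.randint(0, 2, (8,))  # 0 = adult, 1 = child (dummy labels)

optimizer.zero_grad()
loss = criterion(model(batch), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```

Reusing an ImageNet-pretrained backbone and replacing only the final layer mirrors the transfer-learning setup commonly adopted when eye-tracking datasets are too small to train a DCNN from scratch.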
