Deep learning investigation for chess player attention prediction using eye-tracking and game data

This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article generates saliency maps that capture hierarchical and spatial features of the chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, revealing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6600 saliency maps associated with the corresponding chess piece configurations. This corpus is complemented with synthetically generated data from actual games gathered from an online chess platform. Experiments carried out using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features pretrained on natural images were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps for unseen chess configurations, with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
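As a rough illustration of the pipeline described above, the sketch below shows two pieces that such a system plausibly needs: encoding a chess piece configuration as a stack of binary planes suitable for a convolutional network, and scoring a predicted saliency map against a ground-truth map with the Kullback-Leibler divergence, one of the standard saliency metrics. The 12-plane one-hot encoding and the function names are illustrative assumptions (a common convention in chess deep-learning work), not the article's actual implementation.

```python
import numpy as np

# Assumed input encoding: one binary 8x8 plane per piece type,
# 6 white (upper-case) + 6 black (lower-case) = 12 planes.
PIECES = "PNBRQKpnbrqk"

def fen_to_planes(fen: str) -> np.ndarray:
    """Convert the board field of a FEN string to a 12x8x8 one-hot tensor."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for rank, row in enumerate(fen.split()[0].split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)  # digit = run of empty squares
            else:
                planes[PIECES.index(ch), rank, file] = 1.0
                file += 1
    return planes

def kl_divergence(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """KL divergence between two saliency maps, each normalized to sum to 1."""
    p = pred / pred.sum()
    q = gt / gt.sum()
    return float(np.sum(q * np.log(eps + q / (eps + p))))

# Example: encode the initial position.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
x = fen_to_planes(start)
print(x.shape)       # (12, 8, 8)
print(int(x.sum()))  # 32 pieces on the board
```

A model's decoder output and an empirical fixation map can then be compared directly with `kl_divergence`; lower values mean the predicted distribution is closer to the observed one.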
