OpenEDS2020: Open Eyes Dataset

We present the second edition of the OpenEDS dataset, OpenEDS2020, a novel dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display fitted with two synchronized eye-facing cameras. The dataset, which is anonymized to remove any personally identifiable information about participants, comprises 80 participants of varied appearance performing several gaze-elicited tasks, and is divided into two subsets: 1) the Gaze Prediction Dataset, with up to 66,560 sequences containing 550,400 eye images and their respective gaze vectors, created to foster research in spatio-temporal gaze estimation and prediction approaches; and 2) the Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz, with up to 29,500 images, 5% of which carry a semantic segmentation label, devised to encourage the use of temporal information to propagate labels to contiguous frames. Baseline experiments, one per task, have been evaluated on OpenEDS2020, yielding an average angular error of 5.37 degrees when predicting gaze 1 to 5 frames into the future, and a mean intersection-over-union score of 84.1% for semantic segmentation. As with its predecessor, the OpenEDS dataset, we anticipate that this new dataset will continue to create opportunities for researchers in the eye-tracking, machine-learning, and computer-vision communities to advance the state of the art for virtual-reality applications. The dataset is available for download upon request at this http URL.
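The two evaluation metrics reported above can be sketched as follows. This is a minimal illustration, not the authors' evaluation code: the function names and the assumption that gaze is given as 3D unit-normalizable vectors and segmentations as integer label maps are ours.

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Mean angular error in degrees between predicted and ground-truth
    3D gaze vectors, each of shape (N, 3)."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    # Clip the dot product to [-1, 1] to guard against float round-off.
    cos_sim = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_sim)).mean())

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes for integer label maps
    of identical shape; classes absent from both maps are skipped."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, two orthogonal gaze vectors yield a 90-degree error, and a perfect segmentation yields an mIoU of 1.0.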
