Exploring How Saliency Affects Attention in Virtual Reality

We investigate how changes in the saliency of a Virtual Environment (VE) affect users' visual attention during different tasks. In particular, we investigate whether users are drawn to the most salient regions in the VE. This knowledge will help researchers design optimal VR environments, purposefully direct users' attention, and avoid unintentional distractions. We conducted a user study (N=30) in which participants performed four tasks (video watching, object stacking, visual search, and waiting) under two different saliency conditions in the virtual environment. Our findings suggest that while participants notice the differences in saliency, their visual attention is not diverted towards the salient regions while they are performing tasks.
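The abstract does not specify how "salient regions" are computed, so as an illustration only, the sketch below shows one classic bottom-up saliency model (spectral residual, Hou & Zhang 2007) that could be used to score regions of a rendered VE frame. The function name spectral_residual_saliency is hypothetical, and the sketch assumes only NumPy and SciPy; the study itself may have used a different saliency model.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Bottom-up saliency map for a grayscale frame; returns values in [0, 1]."""
    # Fourier transform of the frame: amplitude and phase spectra.
    spectrum = np.fft.fft2(gray.astype(np.float64))
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)

    # Spectral residual: log amplitude minus its local (3x3) average,
    # i.e. the part of the spectrum that deviates from typical image statistics.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)

    # Back-project the residual with the original phase, square the magnitude
    # to get the raw saliency map, then smooth it for spatial coherence.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=2.5)

    # Normalise to [0, 1] so saliency can be compared across frames/conditions.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

# Example: mark the top 5% of pixels as the "most salient regions" of a frame.
# frame = ...  # 2D grayscale array captured from the VE
# saliency_map = spectral_residual_saliency(frame)
# salient_mask = saliency_map >= np.quantile(saliency_map, 0.95)
```

Thresholding the top few percent of such a map is one simple way to delineate the most salient regions of a frame; a learned saliency predictor could be substituted without changing the overall pipeline.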
