ObserVAR: Visualization System for Observing Virtual Reality Users using Augmented Reality

While virtual reality (VR) tools provide an immersive learning experience for students, it is difficult for an instructor to observe students' learning activities inside a virtual environment (VE). This hinders the instructor-student interactions that a classroom usually requires for understanding how each student learns. Previous work has added virtual awareness cues that help a small group of students collaborate in a VE; however, as the number of students grows, such cues cause visual clutter and confuse the instructor. We propose ObserVAR, a visualization system that allows an instructor to observe students in a VE at scale. ObserVAR uses augmented reality techniques to visualize each student's gaze in the VE, improving the instructor's awareness of the entire class. In designing ObserVAR, we first investigated visualizations that give the instructor an overall awareness of the VE and that scale as the number of users increases. We then optimized the placement of the student visualizations with a force-directed graph drawing algorithm to reduce visual clutter in the class scene. We compared the performance of our prototype against commercially available user interfaces for VE classrooms; in our study, ObserVAR demonstrated improved performance and flexibility across several application scenarios.
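The abstract does not specify the layout step beyond "force-directed graph drawing." The sketch below illustrates one plausible Fruchterman-Reingold-style variant in 2D screen space: gaze labels repel one another, while a spring ties each label back to its student's gaze point, so overlapping labels spread apart without drifting far from what they annotate. All names (`layout_labels`, `anchors`) and constants here are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def layout_labels(anchors, iterations=200, k=40.0, spring=0.05, step=2.0):
    """Force-directed placement of gaze labels (illustrative sketch).

    anchors: list of (x, y) screen-space points each label is tied to.
    Returns one (x, y) position per label, pushed apart to reduce
    overlap while staying near its anchor.
    """
    # Start each label at its anchor with a small random offset so
    # coincident labels can separate.
    pos = [(ax + random.uniform(-1, 1), ay + random.uniform(-1, 1))
           for ax, ay in anchors]
    for _ in range(iterations):
        forces = [(0.0, 0.0)] * len(pos)
        # Pairwise repulsion between labels (Fruchterman-Reingold style).
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                dist = math.hypot(dx, dy) or 1e-6
                rep = (k * k) / dist  # repulsion falls off with distance
                fx, fy = rep * dx / dist, rep * dy / dist
                forces[i] = (forces[i][0] + fx, forces[i][1] + fy)
                forces[j] = (forces[j][0] - fx, forces[j][1] - fy)
        # Spring pulling each label back toward its own gaze anchor.
        for i, (ax, ay) in enumerate(anchors):
            forces[i] = (forces[i][0] + spring * (ax - pos[i][0]),
                         forces[i][1] + spring * (ay - pos[i][1]))
        # Apply displacement, capped per iteration for stability.
        new_pos = []
        for (x, y), (fx, fy) in zip(pos, forces):
            mag = math.hypot(fx, fy) or 1e-6
            d = min(mag, step)
            new_pos.append((x + fx / mag * d, y + fy / mag * d))
        pos = new_pos
    return pos

if __name__ == "__main__":
    # Four students gazing at nearly the same spot: labels spread apart.
    gaze_points = [(100, 100), (102, 101), (99, 98), (101, 103)]
    print(layout_labels(gaze_points))
```

Anchoring each label to its gaze point with a spring, rather than letting all nodes float freely as in a generic graph layout, is what keeps the decluttered labels readable in context.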
