The UW Virtual Brain Project: An immersive approach to teaching functional neuroanatomy

Learning functional neuroanatomy requires forming mental representations of 3D structure, but forming such representations from 2D textbook diagrams can be challenging. We address this challenge in the UW Virtual Brain Project by developing 3D narrated diagrams, which are interactive, guided tours through 3D models of perceptual systems. Lessons can be experienced in virtual reality (VR) or on a personal computer monitor (PC). We predicted participants would learn from lessons presented on both VR and PC devices (comparing pre-test/post-test scores), but that VR would be more effective for achieving both content-based learning outcomes (i.e., test performance) and experience-based learning outcomes (i.e., reported enjoyment and ease of use). All participants received lessons about the visual system and the auditory system, one in VR and one on a PC (order counterbalanced). We assessed content learning using a drawing/labeling task on paper (2D drawing) in Experiment 1 and on a Looking Glass autostereoscopic display (3D drawing) in Experiment 2. In both experiments, we found that the UW Virtual Brain Project lessons were effective for teaching functional neuroanatomy, with no difference between devices. However, participants reported VR was more enjoyable and easier to use. We also evaluated the VR lessons in our Classroom Implementation during an undergraduate course on perception. Students reported that the VR lessons helped them make progress on course learning outcomes, especially for learning system pathways. They suggested lessons could be improved by adding more examples and providing more time to explore in VR.

Public Significance Statement. We designed and evaluated interactive 3D narrated diagrams to teach functional neuroanatomy. These lessons can be experienced on desktop PCs and in virtual reality (VR), and are helpful for teaching undergraduates about the structure and function of perceptual systems in the human brain.
To learn functional anatomy, such as how sensory information is processed in the human brain, students must form mental representations of 3D anatomical structures. Evidence suggests forming mental representations is easier for learners when they are presented with 3D models (i.e., different views can be rendered by translation and rotation) than with 2D images (see Yammine and Violato (2015) for a meta-analysis). This benefit of 3D models arises, at least in part, because piecing together multiple views from 2D images incurs a cognitive load that detracts from learning the content, especially for learners with lower visual-spatial ability (Bogomolova et al., 2020; Cui et al., 2016). Prior studies have suggested physical models are better than computer models for illustrating gross anatomy (Khot et al., 2013; Preece et al., 2013; Wainman et al., 2020; Wainman et al., 2018). However, physical models are limited in their potential to illustrate dynamic, functional processes, such as how neural signals are triggered by sensory input and propagate through a perceptual system. Given that our focus is on functional anatomy, we restrict our discussion to computer-based models.

The present study is part of the UW Virtual Brain Project, in which we have developed and assessed a new approach for teaching students about the functional anatomy of perceptual pathways. Previous computer-based 3D models of the human brain were geared toward teaching medical students about gross anatomy (Adams & Wilson, 2011; L. K. Allen et al., 2016; Cui et al., 2017; Drapkin et al., 2015; Ekstrand et al., 2018; Kockro et al., 2015; Stepan et al., 2017) (see Footnote 1). In contrast, our lessons give learners guided, first-person-view tours through “3D narrated diagrams” illustrating the functional anatomy of the human brain. We use the term “3D narrated diagram” to refer to 3D models combined with labels and verbal descriptions, analogous to content found in textbook diagrams with corresponding text. They can also include animations that illustrate dynamic aspects of the system. Thus, 3D models form the basis for the environment used to teach students about sensory input, system pathways, and system purposes, which are key learning outcomes in an undergraduate course on sensation and perception.

Our aim was to develop structured, self-contained lessons for an undergraduate audience that harnessed principles for effective multimedia learning (Mayer, 2009). These principles have previously been shown to facilitate learning in a variety of domains. For example, using visual cues to signal students where to look during a lesson can help them learn about neural structures (signaling principle) (Jamet et al., 2008). Learners benefit from having self-paced controls through a lesson, compared with experiencing a system-paced continuous animation (segmenting principle) (Hasler et al., 2007). And receiving input from multiple modalities (audio narration plus visual illustration) can be better than receiving visual input alone (modality principle) (Harskamp et al., 2007).

The UW Virtual Brain 3D narrated diagrams can be viewed on personal computer monitors (referred to as “PC”; the same image is presented to both eyes) or in virtual reality using a head-mounted display (HMD) with stereoscopic depth (referred to as “VR”; different images are presented to each eye; see Footnote 2). In the VR version, the brain is room-sized, so learners can “immerse” their whole body inside the brain.

*Correspondence concerning this article should be addressed to Karen Schloss, University of Wisconsin-Madison, 330 North Orchard Street, Room 3178, Madison, WI 53715. E-mail: kschloss@wisc.edu

arXiv:submit/3896675 [cs.HC] 30 Aug 2021. The UW Virtual Brain Project • February 2021 • Preprint
This study investigated whether students made significant gains in content-based learning outcomes from the Virtual Brain lessons, and whether viewing device (VR vs. PC) influenced the degree to which learners achieved content-based and experience-based learning outcomes. Content-based learning outcomes included being able to describe (draw/label) key brain regions and pathways involved in processing visual and auditory input. Experience-based learning outcomes included finding the lessons enjoyable and easy to use.

We predicted that learners would make significant gains in content-based learning outcomes from lessons experienced in both VR and PC viewing (compared to a pre-test baseline), but VR viewing would be more effective. We also predicted VR would be more effective for achieving experience-based learning outcomes. Previous work strongly supports our prediction for experience-based learning outcomes, demonstrating that VR facilitates enjoyment, engagement, and motivation, compared with less immersive experiences (Hu-Au & Lee, 2017; Makransky et al., 2020; Pantelidis, 2010; Parong & Mayer, 2018; Stepan et al., 2017). However, prior evidence concerning our prediction that VR would better support content-based learning outcomes is mixed. Research on learning 3D structure and spatial layout suggests VR should facilitate learning, but research on narrated lessons suggests VR may hinder learning, as discussed below.

Research on learning 3D anatomical structure suggests stereoscopic viewing facilitates learning compared to monoscopic viewing of the same models, at least when viewing is interactive. A meta-analysis reported that viewing interactive stereoscopic 3D models provided a significant benefit, compared with viewing interactive monoscopic 3D models (i.e., the same image was presented to both eyes, or the image was presented to one eye only) (Bogomolova et al., 2020). For example, Wainman et al.
(2020) found students learned better when viewing 3D models stereoscopically with a VR HMD than when one eye was covered. The additional depth information provided by stereopsis likely contributes to these enhanced learning outcomes (Bogomolova et al., 2020; Wainman et al., 2020). Evidence suggests that stereoscopic information is especially beneficial for 3D perception under interactive viewing conditions in which head tracking-based motion parallax information and task feedback are available (Fulvio & Rokers, 2017), perhaps because viewers tend to discount stereoscopic information under passive viewing conditions (Fulvio et al., 2020). This may explain why the contribution of stereopsis to achieving learning outcomes was more limited under passive viewing (Al-Khalili & Coppoc, 2014) and fixed-viewpoint rendering (Chen et al., 2012; Luursema et al., 2008). A separate line of studies testing the ability to remember spatial layout in new environments suggests

Footnote 1: Studies evaluating the efficacy of these 3D models used a myriad of comparison conditions that differed from the 3D models in multiple dimensions. Thus, it is challenging to form general conclusions from their results (see Wainman et al. (2020) for a discussion of this issue).

Footnote 2: We note that earlier literature used the term “VR” in reference to viewing 3D models on 2D displays (e.g., computer monitors), rather than immersive head-mounted displays (see Wainman et al. (2020) for a discussion of this issue). In this article, we reserve the term “VR” for head-mounted displays, like an Oculus Rift, Oculus Go, or HTC Vive.

[1] Robert S. Kennedy, et al. Simulator Sickness Questionnaire: An enhanced method for quantifying simulator sickness. 1993.

[2] S. Hidi, et al. The Four-Phase Model of Interest Development. 2006.

[3] Franco Pestilli, et al. Altered white matter in early visual pathways of humans with amblyopia. Vision Research, 2015.

[4] Claudio Violato, et al. A meta-analysis of the educational effectiveness of three-dimensional visualization technologies in teaching anatomy. Anatomical Sciences Education, 2015.

[5] T. D. Wilson, et al. Evaluation of the effectiveness of 3D vascular stereoscopic models in anatomy instruction for first year medical students. Anatomical Sciences Education, 2017.

[6] Timothy D. Wilson, et al. Virtual cerebral ventricular system: An MR-based three-dimensional computer model. Anatomical Sciences Education, 2011.

[7] Maria V. Sanchez-Vives, et al. From presence to consciousness through virtual reality. Nature Reviews Neuroscience, 2005.

[8] R. Mayer, et al. Immersive virtual reality increases liking but not learning with a science simulation and generative learning strategies promote learning in immersive virtual reality. 2020.

[9] Jacqueline M. Fulvio, et al. Identifying Causes of and Solutions for Cybersickness in Immersive Technology: Reformulation of a Research and Development Agenda. International Journal of Human-Computer Interaction, 2020.

[10] S. Gutnikov, et al. Virtual reality is more efficient in learning human heart anatomy especially for subjects with low baseline knowledge. 2020.

[11] Jeremy N. Bailenson, et al. Immersive Virtual Reality Field Trips Facilitate Learning About Climate Change. Frontiers in Psychology, 2018.

[12] Hein Putter, et al. Stereoscopic three-dimensional visualisation technology in anatomy learning: A meta-analysis. Medical Education, 2020.

[13] Chelsea Ekstrand, et al. Immersive and interactive virtual reality to improve learning and retention of neuroanatomy in medical students: a randomized controlled study. CMAJ Open, 2018.

[14] Alan Connelly, et al. MRtrix: Diffusion tractography in crossing fiber regions. International Journal of Imaging Systems and Technology, 2012.

[15] Alan Connelly, et al. Direct estimation of the fiber orientation density function from diffusion-weighted MRI data using spherical deconvolution. NeuroImage, 2004.

[16] Jesper Andersson, et al. A multi-modal parcellation of human cerebral cortex. Nature, 2016.

[17] Veronica S. Pantelidis, et al. Reasons to Use Virtual Reality in Education and Training Courses and a Model to Determine When to Use Virtual Reality. 2010.

[18] Bruce Wainman, et al. The superiority of three-dimensional physical models to two-dimensional computer presentations in anatomy learning. Medical Education, 2018.

[19] Antonio Bernardo, et al. Virtual Reality and Simulation in Neurosurgical Training. World Neurosurgery, 2017.

[20] Richard E. Mayer, et al. Adding immersive virtual reality to a science lab simulation causes more presence but less learning. Learning and Instruction, 2017.

[21] B. Wainman, et al. The relative effectiveness of computer-based and traditional resources for education in anatomy. Anatomical Sciences Education, 2013.

[22] Carolina Cruz-Neira, et al. Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE. 1993.

[23] Joey J. Lee, et al. Virtual reality in education: a tool for learning in the experience age. 2017.

[24] Kristen A. Lindgren, et al. Development and assessment of a new 3D neuroanatomy teaching tool for MRI training. Anatomical Sciences Education, 2015.

[25] Viktor K. Jirsa, et al. The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics, 2013.

[26] Piet Kommers, et al. The role of stereopsis in virtual anatomical learning. Interacting with Computers, 2008.

[27] Susumu Mori, et al. Fiber tracking: principles and strategies – a technical review. NMR in Biomedicine, 2002.

[28] Bas Rokers, et al. Retinothalamic white matter abnormalities in amblyopia. 2018.

[29] R. Mayer, et al. Does the modality principle for multimedia learning apply to science classrooms? 2007.

[30] Gabriel Zachmann, et al. Volumetric Medical Data Visualization for Collaborative VR Environments. EuroVR, 2020.

[31] K. Ann Renninger, et al. The Power of Interest for Motivation and Engagement. 2015.

[32] John Hickner, et al. Let's get physical! The Journal of Family Practice, 2017.

[33] Michael M. Kazhdan, et al. Poisson surface reconstruction. SGP '06, 2006.

[34] Diego Vergara, et al. The Technological Obsolescence of Virtual Reality Learning Environments. Applied Sciences, 2020.

[35] Catherine Plaisant, et al. Virtual memory palaces: immersion aids recall. Virtual Reality, 2018.

[36] Roy Eagleson, et al. Evaluation of an online three-dimensional interactive resource for undergraduate neuroanatomy education. Anatomical Sciences Education, 2016.

[37] Béatrice S. Hasler, et al. Learner Control, Cognitive Load and Instructional Animation. 2007.

[38] A. Iloreta, et al. Immersive virtual reality as a teaching tool for neuroanatomy. International Forum of Allergy and Rhinology, 2017.

[39] Geoffrey J. M. Parker, et al. A framework for a streamline-based probabilistic index of connectivity (PICo) using a structural interpretation of MRI diffusion measurements. Journal of Magnetic Resonance Imaging, 2003.

[40] Christina Amaxopoulou, et al. Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment. Annals of Anatomy, 2015.

[41] G. L. Coppoc, et al. 2D and 3D stereoscopic videos used as pre-anatomy lab tools improve students' examination performance in a veterinary gross anatomy course. Journal of Veterinary Medical Education, 2014.

[42] M. Raichle, et al. Tracking neuronal fiber pathways in the living human brain. Proceedings of the National Academy of Sciences, 1999.

[43] David H. Laidlaw, et al. Effects of Stereo and Screen Size on the Legibility of Three-Dimensional Streamtube Visualization. IEEE Transactions on Visualization and Computer Graphics, 2012.

[44] R. Mayer, et al. Learning Science in Immersive Virtual Reality. Journal of Educational Psychology, 2018.

[45] Jacqueline M. Fulvio, et al. Cue-dependent effects of VR experience on motion-in-depth sensitivity. PLoS ONE, 2020.

[46] Roy A. Ruddle, et al. Navigating Large-Scale Virtual Environments: What Differences Occur Between Helmet-Mounted and Desk-Top Displays? Presence: Teleoperators & Virtual Environments, 1999.

[47] Alan Connelly, et al. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage, 2007.

[48] Kai Lawonn, et al. A Survey on Multimodal Medical Data Visualization. Computer Graphics Forum, 2018.

[49] Bas Rokers, et al. Linking Neural and Clinical Measures of Glaucoma with Diffusion Magnetic Resonance Imaging (dMRI). 2018.

[50] Jacqueline M. Fulvio, et al. Use of cues in virtual reality depends on visual feedback. Scientific Reports, 2017.

[51] Alan Connelly, et al. Track-density imaging (TDI): Super-resolution white matter imaging using whole-brain track-density mapping. NeuroImage, 2010.

[52] Neil A. Dodgson, et al. Autostereoscopic 3D displays. Computer, 2005.

[53] Bruce Fischl, et al. FreeSurfer. NeuroImage, 2012.

[54] T. D. Wilson, et al. Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization. Anatomical Sciences Education, 2016.

[55] De-xin Zhao. A method for brain 3D surface reconstruction from MR images. 2014.

[56] Bruce Wainman, et al. The Critical Role of Stereopsis in Virtual and Mixed Reality Learning Environments. Anatomical Sciences Education, 2020.

[57] Timothy Edward John Behrens, et al. Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magnetic Resonance in Medicine, 2003.

[58] Eric Jamet, et al. Attention guiding in multimedia learning. 2008.

[59] Andrea Gaggioli, et al. Virtual Reality Training for Health-Care Professionals. CyberPsychology & Behavior, 2003.