Manipulating the fidelity of lower extremity visual feedback to identify obstacle negotiation strategies in immersive virtual reality

The ability to successfully navigate obstacles in our environment requires integrating visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains unclear how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor under four visual feedback conditions: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. Compared with no model, the volumetric model improved success rate; participants also placed their trailing foot before crossing and their leading foot after crossing more consistently, and placed their leading foot closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may indicate the fidelity necessary to reproduce ecologically valid practice environments.