Virtual-hand interaction is the primary mode of interaction in virtual reality. However, the three-dimensional perspective of the virtual scene makes near objects appear large and far objects small, and objects near the edge of the visual field are visually stretched and deformed. As a result, the virtual hand's position deviates significantly from that of the real hand, and the sense of immersion in virtual-real interaction is greatly reduced. To address this problem, a vision correction algorithm based on the virtual hand's position is proposed. According to the virtual hand's position in the scene, its position is corrected so that it coincides with the real hand both spatially and in perspective. This method corrects the sense of dislocation users feel during interaction, improves the perceived realism of the interaction, and raises the interaction success rate to a certain extent. Experimental results show that correcting the virtual hand's position improves both the user experience and the interaction efficiency.
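The abstract does not give the algorithm's details, but the core idea, aligning the virtual hand with the real hand in both space and perspective, can be illustrated with a minimal pinhole-projection sketch. Everything below is an assumption for illustration: the function names, the focal length `f`, and the specific correction rule (keeping the virtual hand's depth while shifting it laterally so that its on-screen projection coincides with the real hand's) are hypothetical, not the paper's method.

```python
import numpy as np

def project(p, f=1.0):
    """Pinhole perspective projection of a camera-space point [x, y, z]
    onto the image plane at focal length f."""
    x, y, z = p
    return np.array([f * x / z, f * y / z])

def correct_virtual_hand(virtual_p, real_p, f=1.0):
    """Hypothetical correction: keep the virtual hand's depth z_v, but
    replace its lateral (x, y) position with the one whose projection
    matches the real hand's projected (on-screen) position, removing
    the visual dislocation between the two hands."""
    target = project(real_p, f)   # where the hand should appear on screen
    z_v = virtual_p[2]            # virtual depth left unchanged
    x = target[0] * z_v / f
    y = target[1] * z_v / f
    return np.array([x, y, z_v])

# After correction, the virtual hand projects to the same image point
# as the real hand, even though the two sit at different depths.
real_hand = np.array([0.2, 0.1, 1.0])
virtual_hand = np.array([0.35, 0.2, 1.5])
corrected = correct_virtual_hand(virtual_hand, real_hand)
```

In a real system the projection would come from the headset's camera intrinsics and the correction would likely blend smoothly with hand-tracking input; this sketch only shows the geometric alignment step.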