Multi-Channel Human-Computer Cooperative Interaction Algorithm in Virtual Scene

With the development of computer technology, virtual reality (VR) technology has also advanced rapidly, and virtual-hand-based human-computer interaction is a key technique in the VR field. However, during interaction in virtual environments, the target virtual objects are often too far away from the user or occluded by other objects, which prevents users from operating them accurately and efficiently. This paper therefore proposes a multi-channel human-computer cooperative interaction algorithm. First, hand gestures are estimated with an Intel RealSense camera and used to drive the virtual hand. Second, the user's speech input is recognized with the iFLYTEK speech recognition SDK, and the recognized text is parsed syntactically to segment the user's operation intent. Finally, scene perception in the virtual environment is used to actively transform the scene and assist the user in completing the operation. Experiments show that the algorithm helps users complete interactive tasks accurately and efficiently.
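As a rough illustration of the pipeline the abstract describes, the following Python sketch fuses a gesture channel and a speech channel to resolve the intended target object, and then actively transforms the scene when that target is distant or occluded. This is a minimal sketch under stated assumptions: the class names, the fusion rule, and the reach threshold are hypothetical, and the paper's actual Intel RealSense and iFLYTEK interfaces are not reproduced here.

```python
from dataclasses import dataclass, replace
from math import dist
from typing import List, Optional

# Hypothetical data holders for the two input channels; a real system would
# fill them from the Intel RealSense SDK (hand tracking) and the iFLYTEK
# speech recognition SDK plus a syntactic parser of the recognized text.
@dataclass
class GestureFrame:
    palm_position: tuple          # estimated 3D palm position driving the virtual hand
    is_grasping: bool             # whether the tracked hand is in a grasp pose

@dataclass
class SpeechIntent:
    action: str                   # segmented operation intent, e.g. "grab", "move"
    target: Optional[str] = None  # object name mentioned in the utterance, if any

@dataclass
class SceneObject:
    name: str
    position: tuple
    occluded: bool = False

REACH_RADIUS = 0.5  # assumed comfortable reach of the virtual hand, in metres

def fuse_channels(gesture: GestureFrame,
                  intent: Optional[SpeechIntent],
                  scene: List[SceneObject]) -> Optional[SceneObject]:
    """Resolve the intended target by combining the two channels:
    prefer an object named in speech; otherwise fall back to the object
    closest to the virtual hand (an illustrative fusion rule only)."""
    if intent is not None and intent.target:
        for obj in scene:
            if obj.name == intent.target:
                return obj
    if not scene:
        return None
    return min(scene, key=lambda o: dist(o.position, gesture.palm_position))

def assist_with_scene_transform(target: SceneObject,
                                gesture: GestureFrame) -> SceneObject:
    """Actively transform the scene so the target becomes operable:
    if it is occluded or out of reach, bring it next to the virtual hand."""
    too_far = dist(target.position, gesture.palm_position) > REACH_RADIUS
    if target.occluded or too_far:
        px, py, pz = gesture.palm_position
        return replace(target, position=(px, py, pz + 0.1), occluded=False)
    return target

if __name__ == "__main__":
    gesture = GestureFrame(palm_position=(0.0, 0.0, 0.0), is_grasping=True)
    intent = SpeechIntent(action="grab", target="cup")
    scene = [SceneObject("cup", (2.0, 0.0, 0.0), occluded=True),
             SceneObject("ball", (0.2, 0.1, 0.0))]
    target = fuse_channels(gesture, intent, scene)
    if target is not None:
        print(assist_with_scene_transform(target, gesture))
```

The speech channel takes priority here because an explicitly named object is a stronger cue than hand proximity; the distance fallback keeps the gesture channel usable when no utterance is available.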
