Research on Multimodal Perceptual Navigational Virtual and Real Fusion Intelligent Experiment Equipment and Algorithm

Virtual experiments are an important area of human-computer interaction. As more and more virtual laboratories emerge, several problems have become apparent. First, human-computer interaction during virtual experiments is inefficient: the computer cannot understand the user’s intention, which leads to incorrect operations. Second, erroneous behavior during experiments is rarely detected. Third, virtual laboratories offer only a weak sense of operation and realism. To address these problems, this paper designs and implements the multimodal sensing navigation virtual and real fusion laboratory (MSNVRFL). We design a new set of experimental equipment with cognitive capability and develop a multimodal fusion model and algorithm for chemical experiments, both of which are verified and applied in MSNVRFL. The multimodal fusion perception algorithm enables the system to understand the user’s true intentions and improves the efficiency of human-computer interaction. Conducting virtual experiments in a virtual-real fusion mode avoids the resource waste and dangers of physical experiments and strengthens the user’s sense of operation and realism. In addition, teaching navigation and reminders about incorrect operations are provided to users. Experimental results show that our method improves the efficiency of human-computer interaction, reduces the user’s cognitive load, strengthens the user’s sense of reality and operation, and stimulates students’ interest in learning.
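To make the idea of multimodal fusion perception concrete, the sketch below shows one common approach, decision-level fusion: each modality (here speech and gesture) independently scores a set of candidate intents, and the scores are combined by a weighted sum before choosing the most likely intent. This is a minimal illustrative assumption, not the paper's actual MSNVRFL algorithm; the intent names, modalities, and weights are all hypothetical.

```python
# Hypothetical decision-level fusion sketch: each modality yields a
# probability distribution over candidate experiment intents, and the
# fused distribution is a weighted sum of the per-modality distributions.
INTENTS = ["pour", "heat", "stir"]

def fuse_intents(modality_scores, weights):
    """Combine per-modality intent distributions with a weighted sum,
    then renormalize so the result is again a distribution."""
    fused = {intent: 0.0 for intent in INTENTS}
    for modality, scores in modality_scores.items():
        w = weights[modality]
        for intent in INTENTS:
            fused[intent] += w * scores.get(intent, 0.0)
    total = sum(fused.values())
    return {intent: v / total for intent, v in fused.items()}

# Example: speech strongly suggests "pour", gesture is more ambiguous.
speech = {"pour": 0.7, "heat": 0.2, "stir": 0.1}
gesture = {"pour": 0.5, "heat": 0.4, "stir": 0.1}
weights = {"speech": 0.6, "gesture": 0.4}

fused = fuse_intents({"speech": speech, "gesture": gesture}, weights)
best_intent = max(fused, key=fused.get)
```

In this toy example the fused distribution is pour 0.62, heat 0.28, stir 0.10, so the system would act on the "pour" intent. Real systems typically learn the weights or use a probabilistic model rather than fixing them by hand.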
