Introducing a Virtual Assistant to the Lab: A Voice User Interface for the Intuitive Control of Laboratory Instruments

The introduction of smart virtual assistants (VAs) and corresponding smart devices has brought a new degree of freedom to our everyday lives. Voice-controlled and Internet-connected devices allow intuitive device control and monitoring from anywhere in the world and define a new era of human–machine interaction. Although VAs are especially successful in home automation, they also show great potential as artificial intelligence-driven laboratory assistants. Possible applications include stepwise reading of standard operating procedures (SOPs) and recipes, recitation of chemical substance or reaction parameters, as well as the control and readout of laboratory devices and sensors. In this study, we present a retrofitting approach to make standard laboratory instruments part of the Internet of Things (IoT). We established a voice user interface (VUI) for controlling these devices and reading out specific device data. A benchmark of the established infrastructure showed a high mean speech command recognition accuracy (95% ± 3.62%) and revealed high potential for future applications of a VUI within the laboratory. Our approach demonstrates the general applicability of commercially available VAs as laboratory assistants and might be of special interest to researchers with physical impairments or low vision. The developed solution enables hands-free device control, which is a crucial advantage in the daily laboratory routine.
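
To illustrate the kind of infrastructure the abstract describes, the following Python sketch shows how a voice-assistant intent handler might relay a recognized spoken command to a retrofitted, IoT-connected instrument and receive its readouts through a publish/subscribe broker. This is a minimal illustrative sketch, not the authors' implementation: the broker address, topic layout, intent names, and instrument commands are all assumptions, and the snippet uses the paho-mqtt client library purely as one common way to connect IoT devices.

# Minimal sketch: relaying a recognized voice intent to a retrofitted
# lab instrument over MQTT. Broker address, topic layout, intent names,
# and device commands are illustrative assumptions only.
# Requires the paho-mqtt package (1.x-style client shown here; 2.x
# additionally expects a CallbackAPIVersion argument on construction).
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "lab-broker.local"          # hypothetical broker in the lab network
COMMAND_TOPIC = "lab/{device}/command"    # hypothetical per-device command topic
STATUS_TOPIC = "lab/+/status"             # hypothetical status/readout topics

# Hypothetical mapping from recognized voice intents to device commands.
INTENTS = {
    "start_stirrer": {"device": "magnetic_stirrer", "action": "start", "rpm": 300},
    "stop_stirrer":  {"device": "magnetic_stirrer", "action": "stop"},
    "read_ph":       {"device": "ph_meter", "action": "read"},
}

def on_status(client, userdata, message):
    """Print any status or readout messages published by the instruments."""
    print(f"{message.topic}: {message.payload.decode()}")

def handle_intent(client, intent_name):
    """Translate a recognized voice intent into an MQTT command message."""
    command = INTENTS[intent_name]
    topic = COMMAND_TOPIC.format(device=command["device"])
    client.publish(topic, json.dumps(command), qos=1)

if __name__ == "__main__":
    client = mqtt.Client()
    client.on_message = on_status
    client.connect(BROKER_HOST, 1883)
    client.subscribe(STATUS_TOPIC, qos=1)
    client.loop_start()                        # handle network traffic in the background

    handle_intent(client, "start_stirrer")     # e.g. "Assistant, start the stirrer"
    handle_intent(client, "read_ph")           # e.g. "Assistant, what is the pH?"

In such a layout, each retrofitted instrument would subscribe to its own command topic and publish readouts to a status topic, so the voice front end stays decoupled from the instrument hardware; the actual topic scheme and device firmware in the study may differ.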
