Convolutional Neural Networks for Image Recognition in Mixed Reality Using Voice Command Labeling

In the context of the Industrial Internet of Things (IIoT), image and object recognition has become increasingly important. Camera systems provide the information needed to realize sophisticated monitoring applications, quality control solutions, and reliable prediction approaches. In recent years, the evolution of smart glasses has enabled new technical solutions, as smart glasses can be regarded as mobile and ubiquitous cameras. In this context, objects must be reliably recognized from images to realize the aforementioned solutions. Algorithms therefore need to be trained with labeled input in order to recognize differences between input images. We simplify this labeling process using voice commands in Mixed Reality. The input generated by the mixed-reality labeling is fed into a convolutional neural network, which is trained to classify the images with different objects. In this work, we describe the development of this mixed-reality prototype together with its backend architecture. Furthermore, we test the classification robustness with image distortion filters. We validated our approach with format parts from a blister machine provided by a pharmaceutical packaging company in Germany. Our results indicate that the proposed architecture is at least suitable for small classification problems and is not sensitive to distortions.
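The core of the described pipeline, a convolutional feature extractor whose robustness is probed with distortion filters, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes NumPy, and the function names `conv2d` and `add_noise` as well as the Gaussian-noise distortion are illustrative choices:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a single-channel image,
    the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def add_noise(image, sigma, rng):
    """Additive Gaussian noise as one example of an image
    distortion filter for robustness testing."""
    return image + rng.normal(0.0, sigma, image.shape)

# Toy usage: compare features of a clean and a distorted image.
rng = np.random.default_rng(0)
img = np.arange(25.0).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0          # simple averaging filter
clean_features = conv2d(img, kernel)
noisy_features = conv2d(add_noise(img, 0.1, rng), kernel)
```

In a robustness test as described in the abstract, the classifier's predictions on `clean_features` and `noisy_features` would be compared across a range of distortion strengths.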
