Multi-perspective gesture recognition based on convolutional neural networks

Gesture recognition is widely used in everyday applications, but recognition from a single viewing angle has inherent limitations. In this paper, we train on the Multi-Perspective Static Gesture Database, which contains the 24 letter gestures of the international sign language alphabet (excluding the j and z gestures, which involve motion). A self-designed convolutional network is trained on each picture. Multiplying the feature map by the trained weights yields an information amount for each picture; this information amount is then combined with the network's per-gesture prediction probability for that picture to produce a combined prediction probability, and the gesture with the largest combined probability is taken as the prediction. Experiments show that combining the four viewing angles yields higher prediction accuracy than any single image. We further compare combinations of two and three angles against a single angle; these combinations also outperform single-image prediction, demonstrating that the method is effective and that fusing multi-angle gesture images improves prediction accuracy.
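The fusion step described above, weighting each view's class probabilities by its information amount and taking the gesture with the largest combined probability, can be sketched as follows. The function name and the linear weighting scheme are assumptions for illustration; the abstract does not specify the exact combination rule used in the paper.

```python
import numpy as np

def combine_view_predictions(probs, info):
    """Fuse per-view class probabilities using per-view information weights.

    probs: (n_views, n_classes) array of softmax outputs, one row per
           camera angle (e.g. 4 views x 24 letter gestures).
    info:  (n_views,) information amounts for each view (e.g. derived
           from feature-map activations); the weighting is illustrative.
    Returns the fused probability vector and the predicted class index.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(info, dtype=float)
    weights = weights / weights.sum()   # normalize weights to sum to 1
    fused = weights @ probs             # information-weighted average over views
    fused = fused / fused.sum()         # renormalize to a valid distribution
    return fused, int(np.argmax(fused))

# Two views of the same gesture with a toy 3-class problem: the second
# view carries more information, so its probabilities dominate the fusion.
fused, pred = combine_view_predictions(
    [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3]],
    [1.0, 3.0],
)
```

With these toy numbers, the fused distribution is 0.25 × view 1 + 0.75 × view 2 = [0.30, 0.45, 0.25], so class 1 is predicted even though view 1 alone would have chosen class 0, which mirrors how multi-angle fusion can correct a single-view mistake.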