Design and Optimization of the Model for Traffic Signs Classification Based on Convolutional Neural Networks

Convolutional neural networks (CNNs) have recently demonstrated state-of-the-art performance in computer vision tasks such as classification, recognition, and detection. In this paper, a traffic sign classification system based on CNNs is proposed. A convolutional network typically has a large number of parameters, which require millions of training samples and a great deal of time to train. To address this problem, a transfer learning strategy is adopted. In addition, the chosen model is further improved by replacing some fully connected layers with convolutional layers, because the weight-sharing property of convolutional layers reduces the number of parameters in the network. These convolutional kernels are then decomposed into multiple layers of smaller kernels to obtain better performance. Finally, the performance of the final optimized network is compared with that of the unoptimized networks, and experimental results demonstrate that the final optimized network performs best.
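
The following is a minimal sketch, not the authors' exact architecture, of the three ideas the abstract describes: transfer learning from a pretrained backbone, re-expressing fully connected layers as convolutions to exploit weight sharing, and decomposing a large kernel into stacked smaller ones. It assumes PyTorch, an ImageNet-pretrained VGG16 backbone, and a GTSRB-style class count of 43; all of these are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 43  # assumption: GTSRB-style traffic-sign classes

# 1) Transfer learning: start from pretrained convolutional features.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
for p in backbone.parameters():
    p.requires_grad = False  # freeze pretrained weights; train only the new layers

# 2) Fully connected layers re-expressed as convolutions: weight sharing and
#    global pooling give far fewer parameters than a dense 4096-unit classifier.
conv_head = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=1),
    nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(256, NUM_CLASSES),
)

# 3) Kernel decomposition: one 5x5 convolution replaced by two stacked 3x3
#    convolutions with the same receptive field but fewer weights per
#    channel pair (2 * 3 * 3 = 18 versus 5 * 5 = 25).
decomposed_5x5 = nn.Sequential(
    nn.Conv2d(512, 512, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(512, 512, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

model = nn.Sequential(backbone, decomposed_5x5, conv_head)

# Quick shape check on a dummy batch of 3x224x224 traffic-sign images.
print(model(torch.randn(2, 3, 224, 224)).shape)  # -> torch.Size([2, 43])
```

In this sketch, only the decomposed convolutions and the convolutional head are trainable, which mirrors the paper's goal of reducing both the parameter count and the data needed for training.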
