Selecting optimal layer reduction factors for model reduction of deep neural networks