Amos J. Storkey | Elliot J. Crowley | Michael O'Boyle | Valentin Radu | Jack Turner | José Cano
[1] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[2] Christopher D. Manning, et al. Fast dropout training, 2013, ICML.
[3] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[4] Alexander Heinecke, et al. Anatomy of High-Performance Deep Learning Convolutions on SIMD Architectures, 2018, SC18: International Conference for High Performance Computing, Networking, Storage and Analysis.
[5] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2017, ICLR.
[6] Scott A. Mahlke, et al. Scalpel: Customizing DNN pruning to the underlying hardware parallelism, 2017, ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA).
[7] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[8] Amos J. Storkey, et al. Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks, 2018, IEEE International Symposium on Workload Characterization (IISWC).
[9] Danilo Comminiello, et al. Group sparse regularization for deep neural networks, 2017, Neurocomputing.
[10] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[11] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[12] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Amos J. Storkey, et al. Pruning neural networks: is it time to nip it in the bud?, 2018, ArXiv.
[14] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[15] Bo Chen, et al. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications, 2018, ECCV.
[16] Amos J. Storkey, et al. Moonshine: Distilling with Cheap Convolutions, 2018, NeurIPS.
[17] Lucas Theis, et al. Faster gaze prediction with dense networks and Fisher pruning, 2018, ArXiv.
[18] Robert A. van de Geijn, et al. Anatomy of high-performance matrix multiplication, 2008, ACM TOMS.
[19] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, ArXiv.
[20] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2016, ICLR.
[21] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2015, ICLR.
[22] Nikos Komodakis, et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, 2017, ICLR.
[23] John Tran, et al. cuDNN: Efficient Primitives for Deep Learning, 2014, ArXiv.
[24] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[25] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2014, NIPS.
[26] Misha Denil, et al. Predicting Parameters in Deep Learning, 2013, NIPS.
[27] Min Sun, et al. DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures, 2018, ECCV.
[28] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[29] James Zijun Wang, et al. Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers, 2018, ICLR.
[30] Qiang Chen, et al. Network In Network, 2014, ICLR.
[31] Lorien Y. Pratt, et al. Comparing Biases for Minimal Network Construction with Back-Propagation, 1988, NIPS.
[32] Xiangyu Zhang, et al. Channel Pruning for Accelerating Very Deep Neural Networks, 2017, IEEE International Conference on Computer Vision (ICCV).
[33] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.