Wei Zhang | Wenjie Li | Litong Feng | Sheng Zhou | Ping Luo | Xinjiang Wang
[1] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[2] Ping Luo, et al. Towards Understanding Regularization in Batch Normalization, 2018, ICLR.
[3] Kwang In Kim, et al. On Implicit Filter Level Sparsity in Convolutional Neural Networks, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Jiayu Dong, et al. Activation-Based Weight Significance Criterion for Pruning Deep Neural Networks, 2017, ICIG.
[5] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Frank Hutter, et al. SGDR: Stochastic Gradient Descent with Warm Restarts, 2016, ICLR.
[7] Takio Kurita, et al. Improvement of learning for CNN with ReLU activation by sparse regularization, 2017, 2017 International Joint Conference on Neural Networks (IJCNN).
[8] Elad Eban, et al. MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Richard H. R. Hahnloser, et al. On the piecewise analysis of networks of linear threshold neurons, 1998, Neural Networks.
[10] Lu Lu, et al. Dying ReLU and Initialization: Theory and Numerical Examples, 2019, Communications in Computational Physics.
[11] Xiangyu Zhang, et al. Channel Pruning for Accelerating Very Deep Neural Networks, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[12] Kilian Q. Weinberger, et al. Snapshot Ensembles: Train 1, get M for free, 2017, ICLR.
[13] Andrea Vedaldi, et al. Instance Normalization: The Missing Ingredient for Fast Stylization, 2016, ArXiv.
[14] Kaiming He, et al. Focal Loss for Dense Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[15] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[16] Kiyoung Choi, et al. ComPEND: Computation Pruning through Early Negative Detection for ReLU in a Deep Neural Network Accelerator, 2018, ICS.
[17] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Jascha Sohl-Dickstein, et al. A Mean Field Theory of Batch Normalization, 2019, ICLR.
[19] Song Han, et al. Efficient Sparse-Winograd Convolutional Neural Networks, 2018, ICLR.
[20] Max Welling, et al. Learning Sparse Neural Networks through L0 Regularization, 2017, ICLR.
[21] Carla P. Gomes, et al. Understanding Batch Normalization, 2018, NeurIPS.
[22] Ross B. Girshick, et al. Focal Loss for Dense Object Detection, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[23] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996, Journal of the Royal Statistical Society: Series B.
[24] C. John Glossner, et al. Dynamic Runtime Feature Map Pruning, 2018, PRCV.
[25] Taiji Suzuki, et al. Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks, 2018, 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA).
[26] Andrew L. Maas, et al. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013, ICML Workshop on Deep Learning for Audio, Speech and Language Processing.
[27] Song Han, et al. Exploring the Regularity of Sparse Structure in Convolutional Neural Networks, 2017, ArXiv.
[28] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[29] Aleksander Madry, et al. How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift), 2018, NeurIPS.
[30] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[31] Song Han, et al. DSD: Dense-Sparse-Dense Training for Deep Neural Networks, 2016, ICLR.