Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning a Randomly Weighted Network
[1] Philip H. S. Torr, et al. SNIP: Single-shot Network Pruning based on Connection Sensitivity, 2018, ICLR.
[2] David S. Doermann, et al. Projection Convolutional Neural Networks for 1-bit CNNs via Discrete Back Propagation, 2018, AAAI.
[3] Gilad Yehudai, et al. Proving the Lottery Ticket Hypothesis: Pruning is All You Need, 2020, ICML.
[4] Kaiming He, et al. Exploring Randomly Wired Neural Networks for Image Recognition, 2019, ICCV.
[5] Roger B. Grosse, et al. Picking Winning Tickets Before Training by Preserving Gradient Flow, 2020, ICLR.
[6] Nathan Srebro, et al. Exploring Generalization in Deep Learning, 2017, NIPS.
[7] Chia-Wen Lin, et al. SiMaN: Sign-to-Magnitude Network Binarization, 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[8] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[9] Yue Wang, et al. Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks, 2019, ICLR.
[10] Xianglong Liu, et al. Balanced Binary Neural Networks with Gated Residual, 2020, ICASSP.
[11] Shuchang Zhou, et al. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients, 2016, arXiv.
[12] James O'Neill. An Overview of Neural Network Compression, 2020, arXiv.
[13] Mingjie Sun, et al. Rethinking the Value of Network Pruning, 2018, ICLR.
[14] Georgios Tzimiropoulos, et al. XNOR-Net++: Improved Binary Neural Networks, 2019, BMVC.
[15] Jonghyun Choi, et al. Learning Architectures for Binary Networks, 2020, ECCV.
[16] Dacheng Tao, et al. Searching for Low-Bit Weights in Quantized Neural Networks, 2020, NeurIPS.
[17] Ali Farhadi, et al. What's Hidden in a Randomly Weighted Neural Network?, 2020, CVPR.
[18] Georgios Tzimiropoulos, et al. Training Binary Neural Networks with Real-to-Binary Convolutions, 2020, ICLR.
[19] Wei Pan, et al. Towards Accurate Binary Convolutional Neural Network, 2017, NIPS.
[20] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[21] David J. Schwab, et al. Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs, 2020, ICLR.
[22] Georgios Tzimiropoulos, et al. BATS: Binary ArchitecTure Search, 2020, ECCV.
[23] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, ICCV.
[24] Laurent Orseau, et al. Logarithmic Pruning is All You Need, 2020, NeurIPS.
[25] Yan Wang, et al. Rotated Binary Neural Network, 2020, NeurIPS.
[26] Ali Farhadi, et al. Discovering Neural Wirings, 2019, NeurIPS.
[27] Ankit Pensia, et al. Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient, 2020, NeurIPS.
[28] Ali Farhadi, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[29] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[30] Yu Bai, et al. ProxQuant: Quantized Neural Networks via Proximal Operators, 2018, ICLR.
[31] Hang Su, et al. Pruning from Scratch, 2019, AAAI.
[32] Ran El-Yaniv, et al. Binarized Neural Networks, 2016, NIPS.
[33] G. Hua, et al. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks, 2018, ECCV.
[34] Xianglong Liu, et al. Forward and Backward Information Retention for Accurate Binary Neural Networks, 2020, CVPR.
[35] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[36] James T. Kwok, et al. Loss-aware Binarization of Deep Networks, 2016, ICLR.
[37] Jason Yosinski, et al. Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask, 2019, NeurIPS.
[38] Wei Liu, et al. Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm, 2018, ECCV.
[39] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with Binary Weights during Propagations, 2015, NIPS.
[40] Nicu Sebe, et al. Binary Neural Networks: A Survey, 2020, Pattern Recognition.
[41] Ah Chung Tsoi, et al. Universal Approximation Using Feedforward Neural Networks: A Survey of Some Existing Methods, and Some New Results, 1998, Neural Networks.
[42] Georgios Tzimiropoulos, et al. High-Capacity Expert Binary Networks, 2020, ICLR.
[43] Yoshua Bengio, et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation, 2013, arXiv.
[44] Adam Gaier, et al. Weight Agnostic Neural Networks, 2019, NeurIPS.
[45] Enhua Wu, et al. Training Binary Neural Networks through Learning with Noisy Supervision, 2020, ICML.
[46] Li Fei-Fei, et al. ImageNet: A Large-Scale Hierarchical Image Database, 2009, CVPR.
[47] Lin Xu, et al. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights, 2017, ICLR.
[48] Xianglong Liu, et al. Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks, 2019, ICCV.
[49] Stephen P. Boyd, et al. Sensor Selection via Convex Optimization, 2009, IEEE Transactions on Signal Processing.
[50] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[51] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.
[52] Masafumi Hagiwara, et al. Removal of Hidden Units and Weights for Back Propagation Networks, 1993, IJCNN.
[53] Adam R. Klivans, et al. Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection, 2020, ICML.