Mirror Descent View for Neural Network Quantization
Thalaiyasingam Ajanthan | Kartik Gupta | Philip H. S. Torr | Richard Hartley | Puneet K. Dokania
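For context, the title refers to mirror descent, the classical first-order method that replaces the Euclidean proximity term of gradient descent with a Bregman divergence. A minimal sketch of the standard entropic (negative-entropy) variant on the probability simplex is shown below; the loss, step size, and iteration count are illustrative choices, not taken from the paper itself.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, eta=0.1, steps=200):
    """Mirror descent with the negative-entropy mirror map.

    The dual-space gradient step plus mirror map reduces to the
    multiplicative update x_{t+1} ∝ x_t * exp(-eta * grad(x_t)),
    followed by renormalisation (the Bregman projection onto the
    simplex for this mirror map).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))  # multiplicative (exponentiated) step
        x /= x.sum()                    # project back onto the simplex
    return x

# Illustrative example: minimise the linear loss <c, x> over the simplex.
# The minimiser places all mass on the smallest coordinate of c.
c = np.array([3.0, 1.0, 2.0])
x_star = entropic_mirror_descent(lambda x: c, np.ones(3) / 3)
```

Running this drives `x_star` toward the vertex `[0, 1, 0]`, since coordinate 1 has the smallest cost. The quantization view in the paper builds on this style of update, mapping unconstrained auxiliary variables onto a constrained (quantized) set.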