Kathrin Klamroth | Michael Stiglmayr | Malena Reiners