Anubhav Ashok | Nicholas Rhinehart | Fares Beainy | Kris M. Kitani
[1] Forrest N. Iandola,et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size , 2016, ArXiv.
[2] Timo Aila,et al. Pruning Convolutional Neural Networks for Resource Efficient Inference , 2016, ICLR.
[3] Dragomir Anguelov,et al. Self-informed neural network structure learning , 2014, ICLR.
[4] Matthew Richardson,et al. Do Deep Convolutional Nets Really Need to be Deep and Convolutional? , 2016, ICLR.
[5] Yann LeCun,et al. What is the best multi-stage architecture for object recognition? , 2009, 2009 IEEE 12th International Conference on Computer Vision.
[6] Gregory J. Wolff,et al. Optimal Brain Surgeon and general network pruning , 1993, IEEE International Conference on Neural Networks.
[7] Jiri Matas,et al. All you need is a good init , 2015, ICLR.
[8] Wojciech Zaremba,et al. An Empirical Exploration of Recurrent Network Architectures , 2015, ICML.
[9] Wonyong Sung,et al. Structured Pruning of Deep Convolutional Neural Networks , 2015, ACM J. Emerg. Technol. Comput. Syst..
[10] Song Han,et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding , 2015, ICLR.
[11] Andrew Y. Ng,et al. Reading Digits in Natural Images with Unsupervised Feature Learning , 2011 .
[12] Quoc V. Le,et al. Neural Architecture Search with Reinforcement Learning , 2016, ICLR.
[13] Yann LeCun,et al. The MNIST database of handwritten digits , 2005 .
[14] Jasper Snoek,et al. Practical Bayesian Optimization of Machine Learning Algorithms , 2012, NIPS.
[16] Rich Caruana,et al. Model compression , 2006, KDD '06.
[17] Yann LeCun,et al. Optimal Brain Damage , 1989, NIPS.
[18] Geoffrey E. Hinton,et al. Distilling the Knowledge in a Neural Network , 2015, ArXiv.
[19] Elliot Meyerson,et al. Evolving Deep Neural Networks , 2017, Artificial Intelligence in the Age of Neural Networks and Brain Computing.
[20] Rich Caruana,et al. Do Deep Nets Really Need to be Deep? , 2013, NIPS.
[21] Song Han,et al. Learning both Weights and Connections for Efficient Neural Network , 2015, NIPS.
[22] Trevor Darrell,et al. Learning the Structure of Deep Convolutional Networks , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[23] R. J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning , 1992, Machine Learning.
[24] Jürgen Schmidhuber,et al. Recurrent policy gradients , 2010, Log. J. IGPL.
[26] Zhenghao Chen,et al. On Random Weights and Unsupervised Feature Learning , 2011, ICML.
[28] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[29] G. Griffin,et al. Caltech-256 Object Category Dataset , 2007 .
[30] Ramesh Raskar,et al. Designing Neural Network Architectures using Reinforcement Learning , 2016, ICLR.
[31] R. Venkatesh Babu,et al. Data-free Parameter Pruning for Deep Neural Networks , 2015, BMVC.
[32] Quoc V. Le,et al. Large-Scale Evolution of Image Classifiers , 2017, ICML.
[33] Yoshua Bengio,et al. FitNets: Hints for Thin Deep Nets , 2014, ICLR.
[34] Nicolas Pinto,et al. Beyond simple features: A large-scale feature search approach to unconstrained face recognition , 2011, Face and Gesture 2011.
[35] Yurong Chen,et al. Dynamic Network Surgery for Efficient DNNs , 2016, NIPS.
[36] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[37] Zhen Li,et al. Blockout: Dynamic Model Selection for Hierarchical Deep Networks , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Teresa Bernarda Ludermir,et al. An Optimization Methodology for Neural Network Weights and Architectures , 2006, IEEE Transactions on Neural Networks.
[39] Risto Miikkulainen,et al. Evolving Neural Networks through Augmenting Topologies , 2002, Evolutionary Computation.
[40] Prabhat,et al. Scalable Bayesian Optimization Using Deep Neural Networks , 2015, ICML.