Yann Dauphin | Aaron C. Courville | Andrea Frome | Sara Hooker
[1] Philip H. S. Torr, et al. SNIP: Single-shot Network Pruning based on Connection Sensitivity, 2018, ICLR.
[2] Yurong Chen, et al. Dynamic Network Surgery for Efficient DNNs, 2016, NIPS.
[3] P. Goldman-Rakic, et al. Synaptic development of the cerebral cortex: implications for learning, memory, and mental illness, 1994, Progress in Brain Research.
[4] Peter L. Bartlett, et al. Classification with a Reject Option using a Hinge Loss, 2008, J. Mach. Learn. Res.
[5] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[6] Max Welling, et al. Learning Sparse Neural Networks through L0 Regularization, 2017, ICLR.
[7] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[8] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[9] Christopher D. Manning, et al. Compression of Neural Machine Translation Models via Pruning, 2016, CoNLL.
[10] Gu-Yeon Wei, et al. Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators, 2016, ISCA.
[11] Mehryar Mohri, et al. Boosting with Abstention, 2016, NIPS.
[12] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[13] Claudio Gentile, et al. Online Learning with Abstention, 2017, ICML.
[14] Gregory J. Wolff, et al. Optimal Brain Surgeon and general network pruning, 1993, IEEE International Conference on Neural Networks.
[15] Charu C. Aggarwal, et al. Efficient Data Representation by Selecting Prototypes with Importance Weights, 2019, ICDM.
[16] Lubomir M. Hadjiiski, et al. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis, 2018, Physics in Medicine and Biology.
[17] Ran El-Yaniv, et al. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations, 2016, J. Mach. Learn. Res.
[18] Erich Elsen, et al. The State of Sparsity in Deep Neural Networks, 2019, arXiv.
[19] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[20] Dumitru Erhan, et al. A Benchmark for Interpretability Methods in Deep Neural Networks, 2018, NeurIPS.
[21] David E. Rumelhart, et al. Generalization by Weight-Elimination with Application to Forecasting, 1990, NIPS.
[22] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, arXiv.
[23] Max Welling, et al. Soft Weight-Sharing for Neural Network Compression, 2017, ICLR.
[24] Oluwasanmi Koyejo, et al. Examples are not enough, learn to criticize! Criticism for Interpretability, 2016, NIPS.
[25] Kibok Lee, et al. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples, 2017, ICLR.
[26] Yiran Chen, et al. Learning Structured Sparsity in Deep Neural Networks, 2016, NIPS.
[27] Jan Skoglund, et al. LPCNet: Improving Neural Speech Synthesis through Linear Prediction, 2019, ICASSP.
[28] Nikko Ström, et al. Sparse connection and pruning in large dynamic artificial neural networks, 1997, EUROSPEECH.
[29] Lucas Theis, et al. Faster gaze prediction with dense networks and Fisher pruning, 2018, arXiv.
[30] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[31] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[32] Sebastian Thrun, et al. Dermatologist-level classification of skin cancer with deep neural networks, 2017, Nature.
[33] Joost van de Weijer, et al. Metric Learning for Novelty and Anomaly Detection, 2018, BMVC.
[34] Ralph B. D'Agostino, et al. Goodness-of-Fit Techniques, 1986.
[35] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2015, CVPR.
[36] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[37] Mehryar Mohri, et al. Learning with Rejection, 2016, ALT.
[38] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[39] Suyog Gupta, et al. To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017, ICLR.
[40] B. L. Welch. The generalization of 'Student's' problem when several different population variances are involved, 1947, Biometrika.
[41] D. A. Darling, et al. A Test of Goodness of Fit, 1954, Journal of the American Statistical Association.
[42] Saurabh Goyal, et al. Resource-efficient Machine Learning in 2 KB RAM for the Internet of Things, 2017, ICML.
[43] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.
[44] C. Huber-Carol. Goodness-of-Fit Tests and Model Validity, 2012.
[45] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[46] Rich Caruana, et al. Case-Based Explanation for Artificial Neural Nets, 2000, ANNIMAB.
[47] David B. Paradice, et al. 3D deep learning for detecting pulmonary nodules in CT scans, 2018, J. Am. Medical Informatics Assoc.
[48] Dawn Song, et al. Natural Adversarial Examples, 2021, CVPR.
[49] Erich Elsen, et al. Exploring Sparsity in Recurrent Neural Networks, 2017, ICLR.
[50] Erich Elsen, et al. Efficient Neural Audio Synthesis, 2018, ICML.
[51] Joel Emer, et al. Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks, 2016, ISCA.
[52] Pushmeet Kohli, et al. Memory Bounded Deep Convolutional Networks, 2014, arXiv.
[53] Suzanne E. Welcome, et al. Longitudinal Mapping of Cortical Thickness and Brain Growth in Normal Children, 2004, The Journal of Neuroscience.
[54] Nicholas Carlini, et al. Prototypical Examples in Deep Learning: Metrics, Characteristics, and Utility, 2018.
[55] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, arXiv.
[56] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[57] Peter Stone, et al. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science, 2017, Nature Communications.
[58] R. Srikant, et al. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks, 2017, ICLR.
[59] Geoffrey E. Hinton, et al. Simplifying Neural Networks by Soft Weight-Sharing, 1992, Neural Computation.
[60] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[61] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[62] Zhiqiang Shen, et al. Learning Efficient Convolutional Networks through Network Slimming, 2017, ICCV.
[63] B. J. Casey, et al. Structural and functional brain development and its relation to cognitive development, 2000, Biological Psychology.
[64] Yoshua Bengio, et al. Training deep neural networks with low precision multiplications, 2014, arXiv.
[65] Siegfried Wahl, et al. Leveraging uncertainty information from deep neural networks for disease detection, 2016, Scientific Reports.
[66] Yongdong Zhang, et al. Automated pulmonary nodule detection in CT images using deep convolutional neural networks, 2019, Pattern Recognition.
[67] Moustapha Cissé, et al. ConvNets and ImageNet Beyond Accuracy: Understanding Mistakes and Uncovering Biases, 2017, ECCV.