[1] Anders Krogh, et al. A Simple Weight Decay Can Improve Generalization, 1991, NIPS.
[2] L. Cooper, et al. When Networks Disagree: Ensemble Methods for Hybrid Neural Networks, 1992.
[3] Boris Polyak, et al. Acceleration of stochastic approximation by averaging, 1992.
[4] David H. Wolpert, et al. Stacked generalization, 1992, Neural Networks.
[5] Yoav Freund, et al. Experiments with a New Boosting Algorithm, 1996, ICML.
[6] D. Opitz, et al. Popular Ensemble Methods: An Empirical Study, 1999, J. Artif. Intell. Res.
[7] Kenneth O. Stanley, et al. Exploiting Open-Endedness to Solve Problems Through the Search for Novelty, 2008, ALIFE.
[8] Luc Van Gool, et al. The Pascal Visual Object Classes (VOC) Challenge, 2010, International Journal of Computer Vision.
[9] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[10] Forrest N. Iandola, et al. DenseNet: Implementing Efficient ConvNet Descriptor Pyramids, 2014, ArXiv.
[11] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[12] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Chen Sun, et al. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[15] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[16] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[17] Trevor Darrell, et al. Deep Layer Aggregation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Elad Hoffer, et al. Fix your classifier: the marginal value of training the last weight layer, 2018, ICLR.
[19] Andrew Gordon Wilson, et al. Averaging Weights Leads to Wider Optima and Better Generalization, 2018, UAI.
[20] Yao Zhao, et al. Adversarial Attacks and Defences Competition, 2018, ArXiv.
[21] Mark Sandler, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Quoc V. Le, et al. DropBlock: A regularization method for convolutional networks, 2018, NeurIPS.
[23] Quoc V. Le, et al. AutoAugment: Learning Augmentation Policies from Data, 2018, ArXiv.
[24] Matthias Bethge, et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, 2018, ICLR.
[25] Yanzhao Wu, et al. Deep Neural Network Ensembles Against Deception: Ensemble Diversity, Accuracy and Robustness, 2019, 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems (MASS).
[26] Aleksander Madry, et al. Adversarial Examples Are Not Bugs, They Are Features, 2019, NeurIPS.
[27] Horia Mania, et al. Model Similarity Mitigates Test Set Overuse, 2019, NeurIPS.
[28] Benjamin Recht, et al. Do ImageNet Classifiers Generalize to ImageNet?, 2019, ICML.
[29] Quoc V. Le, et al. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, 2019, ICML.
[30] Sergey Levine, et al. Diversity is All You Need: Learning Skills without a Reward Function, 2018, ICLR.
[31] Balaji Lakshminarayanan, et al. Deep Ensembles: A Loss Landscape Perspective, 2019, ArXiv.
[32] Quoc V. Le, et al. Do Better ImageNet Models Transfer Better?, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[33] Jian Yang, et al. Selective Kernel Networks, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[34] S. Gelly, et al. Big Transfer (BiT): General Visual Representation Learning, 2019, ECCV.
[35] Felix A. Wichmann, et al. Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency, 2020, NeurIPS.
[36] M. Mozer, et al. Mitigating bias in calibration error estimation, 2020, AISTATS.
[37] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[38] Jasper Snoek, et al. Hyperparameter Ensembles for Robustness and Uncertainty Quantification, 2020, NeurIPS.
[39] Chris C. Holmes, et al. Neural Ensemble Search for Uncertainty Estimation and Dataset Shift, 2020, NeurIPS.
[40] Behnam Neyshabur, et al. What is being transferred in transfer learning?, 2020, NeurIPS.
[41] Quoc V. Le, et al. RandAugment: Practical automated data augmentation with a reduced search space, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[42] Alexander D'Amour, et al. Underspecification Presents Challenges for Credibility in Modern Machine Learning, 2020, J. Mach. Learn. Res.
[43] Youngwan Lee, et al. CenterMask: Real-Time Anchor-Free Instance Segmentation, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Dustin Tran, et al. BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning, 2020, ICLR.
[45] Enhua Wu, et al. Squeeze-and-Excitation Networks, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[46] Diversity-inducing Information Bottleneck in Model Ensembles, 2020.
[47] Dan Busbridge, et al. Do Self-Supervised and Supervised Methods Learn Similar Visual Representations?, 2021, ArXiv.
[48] Quoc V. Le, et al. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision, 2021, ICML.
[49] S. Gelly, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, 2020, ICLR.
[50] Quoc V. Le, et al. Meta Pseudo Labels, 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[51] Yang Zhao, et al. Deep High-Resolution Representation Learning for Visual Recognition, 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[52] Behnam Neyshabur, et al. The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning, 2021, Trans. Mach. Learn. Res.
[53] Ilya Sutskever, et al. Learning Transferable Visual Models From Natural Language Supervision, 2021, ICML.
[54] Matthieu Cord, et al. Training data-efficient image transformers & distillation through attention, 2020, ICML.
[55] Behnam Neyshabur, et al. Exploring the Limits of Large Scale Pre-training, 2021, ICLR.
[56] Alexander Kolesnikov, et al. Scaling Vision Transformers, 2021, ArXiv.
[57] Abhijit Guha Roy, et al. Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions, 2021, Medical Image Anal.
[58] Ashutosh Kumar Singh, et al. Big Transfer (BiT): General Visual Representation Learning, 2022.