Optimizing the Simplicial-Map Neural Network Architecture
Jónathan Heras | Eduardo Paluzo-Hidalgo | Rocio Gonzalez-Diaz | Miguel A. Gutiérrez-Naranjo