ATNAS: Automatic Termination for Neural Architecture Search
[1] M. Yamada,et al. Nyström Method for Accurate and Scalable Implicit Differentiation , 2023, AISTATS.
[2] Xiaojun Chang,et al. DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Vision Transformers , 2022, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[3] Xing Sun,et al. Training-free Transformer Architecture Search , 2022, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Sagi Perel,et al. Neural architecture search for energy-efficient always-on audio machine learning , 2022, Neural Computing and Applications.
[5] Xiaojun Chang,et al. ZeroNAS: Differentiable Generative Adversarial Networks Search for Zero-Shot Learning , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] Frank Hutter,et al. NAS-Bench-x11 and the Power of Learning Curves , 2021, NeurIPS.
[7] Zhiming Ding,et al. Delve into the Performance Degradation of Differentiable Architecture Search , 2021, CIKM.
[8] K. H. Low,et al. NASI: Label- and Data-agnostic Neural Architecture Search at Initialization , 2021, ICLR.
[9] Cho-Jui Hsieh,et al. Rethinking Architecture Selection in Differentiable NAS , 2021, ICLR.
[10] Minghao Chen,et al. AutoFormer: Searching Transformers for Visual Recognition , 2021, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[11] C. Archambeau,et al. Automatic Termination for Hyperparameter Optimization , 2021, AutoML.
[12] Mingkui Tan,et al. Contrastive Neural Architecture Search with Neural Architecture Comparators , 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Xinyu Gong,et al. Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective , 2021, ICLR.
[14] Mingkui Tan,et al. Towards Accurate and Compact Architectures via Neural Architecture Transformer , 2021, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[15] Nicholas D. Lane,et al. Zero-Cost Proxies for Lightweight NAS , 2021, ICLR.
[16] Shubhra Kanti Karmaker Santu,et al. AutoML to Date and Beyond: Challenges and Opportunities , 2020, ACM Comput. Surv..
[17] Junchi Yan,et al. DARTS-: Robustly Stepping out of Performance Collapse Without Indicators , 2020, ICLR.
[18] B. Gabrys,et al. NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size , 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[19] Yu Wang,et al. Evaluating Efficient Performance Estimators of Neural Architectures , 2020, NeurIPS.
[20] Mingkui Tan,et al. Breaking the Curse of Space Explosion: Towards Efficient NAS with Curriculum Search , 2020, ICML.
[21] Mark van der Wilk,et al. Speedy Performance Estimation for Neural Architecture Search , 2020, NeurIPS.
[22] Martin Wistuba,et al. Learning to Rank Learning Curves , 2020, ICML.
[23] Xiaojun Chang,et al. A Comprehensive Survey of Neural Architecture Search , 2020, ACM Comput. Surv..
[24] Hideitsu Hino,et al. Stopping criterion for active learning based on deterministic generalization bounds , 2020, AISTATS.
[25] Kalyanmoy Deb,et al. Neural Architecture Transfer , 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[26] Andrew McCallum,et al. Energy and Policy Considerations for Modern Deep Learning Research , 2020, AAAI.
[27] Haishan Ye,et al. MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[28] Yi Yang,et al. NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search , 2020, ICLR.
[29] Fabio Maria Carlucci,et al. NAS evaluation is frustratingly hard , 2019, ICLR.
[30] Xiangxiang Chu,et al. Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search , 2019, ECCV.
[31] F. Hutter,et al. Understanding and Robustifying Differentiable Architecture Search , 2019, ICLR.
[32] Wei Wang,et al. Understanding Architectures Learnt by Cell-based Neural Architecture Search , 2019, ICLR.
[33] Hao Chen,et al. Memory-Efficient Hierarchical Neural Architecture Search for Image Denoising , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Chuang Gan,et al. Once for All: Train One Network and Specialize it for Efficient Deployment , 2019, ICLR.
[35] Kaiyong Zhao,et al. AutoML: A Survey of the State-of-the-Art , 2019, Knowl. Based Syst..
[36] Noah A. Smith,et al. Green AI , 2019, arXiv:1907.10597.
[37] Bo Zhang,et al. FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search , 2019, 2021 IEEE/CVF International Conference on Computer Vision (ICCV).
[38] Manfred K. Warmuth,et al. Robust Bi-Tempered Logistic Loss Based on Bregman Divergences , 2019, NeurIPS.
[39] Kian Hsiang Low,et al. Bayesian Optimization Meets Bayesian Optimal Stopping , 2019, ICML.
[40] Qi Tian,et al. Progressive Differentiable Architecture Search: Bridging the Depth Gap Between Search and Evaluation , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[41] Martin Jaggi,et al. Evaluating the Search Phase of Neural Architecture Search , 2019, ICLR.
[42] Ameet Talwalkar,et al. Random Search and Reproducibility for Neural Architecture Search , 2019, UAI.
[43] Michael Bloodgood,et al. Stopping Active Learning Based on Predicted Change of F Measure for Text Classification , 2019, 2019 IEEE 13th International Conference on Semantic Computing (ICSC).
[44] Min Sun,et al. InstaNAS: Instance-aware Neural Architecture Search , 2018, AAAI.
[45] Liang Lin,et al. SNAS: Stochastic Neural Architecture Search , 2018, ICLR.
[46] Song Han,et al. ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware , 2018, ICLR.
[47] Tie-Yan Liu,et al. Neural Architecture Optimization , 2018, NeurIPS.
[48] Aaron Klein,et al. BOHB: Robust and Efficient Hyperparameter Optimization at Scale , 2018, ICML.
[49] Quoc V. Le,et al. Understanding and Simplifying One-Shot Architecture Search , 2018, ICML.
[50] Yiming Yang,et al. DARTS: Differentiable Architecture Search , 2018, ICLR.
[51] Quoc V. Le,et al. Efficient Neural Architecture Search via Parameter Sharing , 2018, ICML.
[52] Wei Wu,et al. Practical Block-Wise Neural Network Architecture Generation , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[53] Theodore Lim,et al. SMASH: One-Shot Model Architecture Search through HyperNetworks , 2017, ICLR.
[54] Vijay Vasudevan,et al. Learning Transferable Architectures for Scalable Image Recognition , 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[55] Ramesh Raskar,et al. Accelerating Neural Architecture Search using Performance Prediction , 2017, ICLR.
[56] Aaron Klein,et al. Towards Automatically-Tuned Neural Networks , 2016, AutoML@ICML.
[57] Aaron Klein,et al. Learning Curve Prediction with Bayesian Neural Networks , 2016, ICLR.
[58] Kevin G. Jamieson,et al. Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization , 2016, ICLR.
[59] Ramesh Raskar,et al. Designing Neural Network Architectures using Reinforcement Learning , 2016, ICLR.
[60] Quoc V. Le,et al. Neural Architecture Search with Reinforcement Learning , 2016, ICLR.
[61] Jakob Verbeek,et al. Convolutional Neural Fabrics , 2016, NIPS.
[62] Frank Hutter,et al. Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves , 2015, IJCAI.
[63] Wojciech Zaremba,et al. An Empirical Exploration of Recurrent Network Architectures , 2015, ICML.
[64] Benjamin Van Roy,et al. An Information-Theoretic Analysis of Thompson Sampling , 2014, J. Mach. Learn. Res..
[65] David D. Cox,et al. Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures , 2013, ICML.
[66] Fei-Fei Li,et al. ImageNet: A large-scale hierarchical image database , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[67] K. Vijay-Shanker,et al. A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping , 2009, CoNLL.
[68] Kenneth O. Stanley,et al. A Hypercube-Based Encoding for Evolving Large-Scale Neural Networks , 2009, Artificial Life.
[69] Dario Floreano,et al. Neuroevolution: from architectures to learning , 2008, Evol. Intell..
[70] Risto Miikkulainen,et al. Evolving Neural Networks through Augmenting Topologies , 2002, Evolutionary Computation.
[71] Shun-ichi Amari,et al. Natural Gradient Works Efficiently in Learning , 1998, Neural Computation.
[72] Harold J. Kushner,et al. A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise , 1964.
[73] I. Takeuchi,et al. A stopping criterion for Bayesian optimization by the gap of expected minimum simple regrets , 2023, AISTATS.
[74] Samin Ishtiaq,et al. NAS-Bench-ASR: Reproducible Neural Architecture Search for Speech Recognition , 2021, ICLR.
[75] Cheng Li,et al. Regret for Expected Improvement over the Best-Observed Value and Stopping Condition , 2017, ACML.
[76] Lutz Prechelt,et al. Early Stopping-But When? , 1996, Neural Networks: Tricks of the Trade.
[77] Peter J. Angeline,et al. An evolutionary algorithm that constructs recurrent neural networks , 1994, IEEE Trans. Neural Networks.
[78] Yoshua Bengio,et al. Inference for the Generalization Error , CIRANO Série Scientifique.