Zhihui Li | Xiaojun Chang | Xiaodan Liang | Bing Wang | Guangrun Wang | Changlin Li