Why Should We Add Early Exits to Neural Networks?