EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models