暂无分享,去创建一个
[1] Ping Liu,et al. Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Song Han,et al. Learning both Weights and Connections for Efficient Neural Network , 2015, NIPS.
[4] Nojun Kwak,et al. Position-based Scaled Gradient for Model Quantization and Pruning , 2020, NeurIPS.
[5] Siddharth Sigtia,et al. Hybrid Transformer/CTC Networks for Hardware Efficient Voice Triggering , 2020, INTERSPEECH.
[6] J. Tao,et al. Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition , 2020, INTERSPEECH.
[7] Yoshua Bengio,et al. Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation , 2013, ArXiv.
[8] Jangho Kim,et al. Paraphrasing Complex Network: Network Compression via Factor Transfer , 2018, NeurIPS.
[9] Sangdoo Yun,et al. A Comprehensive Overhaul of Feature Distillation , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[10] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[11] Youguang Zhang,et al. A Multilevel Cell STT-MRAM-Based Computing In-Memory Accelerator for Binary Convolutional Neural Network , 2018, IEEE Transactions on Magnetics.
[12] Hieu Duy Nguyen,et al. Quantization Aware Training with Absolute-Cosine Regularization for Automatic Speech Recognition , 2020, INTERSPEECH.
[13] Sankaran Panchapagesan,et al. Model Compression Applied to Small-Footprint Keyword Spotting , 2016, INTERSPEECH.
[14] Ran El-Yaniv,et al. Binarized Neural Networks , 2016, ArXiv.
[15] Jeffrey S. Vetter,et al. NVIDIA Tensor Core Programmability, Performance & Precision , 2018, 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW).
[16] Raghuraman Krishnamoorthi,et al. Quantizing deep convolutional networks for efficient inference: A whitepaper , 2018, ArXiv.
[17] Jinwon Lee,et al. QKD: Quantization-aware Knowledge Distillation , 2019, ArXiv.
[18] Shiv Vitaladevuni,et al. Accurate Detection of Wake Word Start and End Using a CNN , 2020, INTERSPEECH.
[19] Martin Jaggi,et al. Dynamic Model Pruning with Feedback , 2020, ICLR.
[20] Jangho Kim,et al. Feature Fusion for Online Mutual Knowledge Distillation , 2019, 2020 25th International Conference on Pattern Recognition (ICPR).
[21] Nojun Kwak,et al. Prototype-Based Personalized Pruning , 2021, ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[22] Geoffrey E. Hinton,et al. Distilling the Knowledge in a Neural Network , 2015, ArXiv.
[23] Jimmy J. Lin,et al. Deep Residual Learning for Small-Footprint Keyword Spotting , 2017, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[24] Georg Heigold,et al. Small-footprint keyword spotting using deep neural networks , 2014, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).