[1] Ming-Wei Chang et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL, 2019.
[2] Alexander Wong et al. NetScore: Towards Universal Metrics for Large-Scale Performance Analysis of Deep Neural Networks for Practical On-Device Edge Usage. ICIAR, 2018.
[3] Jimmy J. Lin et al. Deep Residual Learning for Small-Footprint Keyword Spotting. IEEE ICASSP, 2018.
[4] Guigang Zhang et al. Deep Learning. Int. J. Semantic Comput., 2016.
[5] Izhar Wallach et al. AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery. arXiv, 2015.
[6] Geoffrey E. Hinton et al. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM, 2012.
[7] In-So Kweon et al. CBAM: Convolutional Block Attention Module. ECCV, 2018.
[8] Alexander Wong et al. FermiNets: Learning Generative Machines to Generate Efficient Neural Networks via Generative Synthesis. arXiv, 2018.
[9] David Gregg et al. Performance-Oriented Neural Architecture Search. IEEE HPCS, 2019.
[10] Bo Chen et al. MnasNet: Platform-Aware Neural Architecture Search for Mobile. IEEE/CVF CVPR, 2019.
[11] Pietro Liò et al. Graph Attention Networks. ICLR, 2018.
[12] Yoshua Bengio et al. Neural Machine Translation by Jointly Learning to Align and Translate. ICLR, 2015.
[13] Chris Eliasmith et al. Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks. NeurIPS, 2019.
[14] Lukasz Kaiser et al. Attention Is All You Need. NIPS, 2017.
[15] Song Han et al. ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. ICLR, 2019.
[16] Han Zhang et al. Self-Attention Generative Adversarial Networks. ICML, 2019.
[17] Vijay Vasudevan et al. Learning Transferable Architectures for Scalable Image Recognition. IEEE/CVF CVPR, 2018.
[18] Alexander Wong et al. NetScore: Towards Universal Metrics for Large-Scale Performance Analysis of Deep Neural Networks for Practical Usage. arXiv, 2018.
[19] Erich Elsen et al. Deep Speech: Scaling Up End-to-End Speech Recognition. arXiv, 2014.
[20] Peter Blouw et al. Hardware Aware Training for Efficient Keyword Spotting on General Purpose and Specialized Hardware. arXiv, 2020.
[21] Tara N. Sainath et al. Convolutional Neural Networks for Small-Footprint Keyword Spotting. INTERSPEECH, 2015.
[22] Jian Sun et al. Deep Residual Learning for Image Recognition. IEEE CVPR, 2016.
[23] Artem Cherkasov et al. All SMILES Variational Autoencoder. arXiv:1905.13343, 2019.
[24] Jimmy J. Lin et al. Honk: A PyTorch Reimplementation of Convolutional Neural Networks for Keyword Spotting. arXiv, 2017.
[25] Enhua Wu et al. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell., 2017.
[26] Dawn Song et al. Natural Adversarial Examples. IEEE/CVF CVPR, 2021.
[27] Pete Warden et al. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. arXiv, 2018.
[28] Cho-Jui Hsieh et al. On the Robustness of Self-Attentive Models. ACL, 2019.
[29] Vikrant Singh Tomar et al. Efficient Keyword Spotting Using Time Delay Neural Networks. INTERSPEECH, 2018.
[30] Geoffrey Zweig et al. Achieving Human Parity in Conversational Speech Recognition. arXiv, 2016.