Shinsuke Sakai | Sei Ueno | Tatsuya Kawahara | Masato Mimura | Kohei Matsuura
[1] Anders Krogh, et al. A Simple Weight Decay Can Improve Generalization, 1991, NIPS.
[2] Janet M. Baker, et al. The Design for the Wall Street Journal-based CSR Corpus, 1992, HLT.
[3] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[4] A. Ichikawa, et al. An Analysis of Turn-Taking and Backchannels Based on Prosodic and Syntactic Features in Japanese Map Task Dialogs, 1998, Language and Speech.
[5] Shuichi Itahashi, et al. JNAS: Japanese speech corpus for large vocabulary continuous speech recognition research, 1999.
[6] Adam L. Kern, et al. New directions in the study of Meiji Japan, 1999.
[7] Jürgen Schmidhuber, et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks, 2006, ICML.
[8] Mike Schuster, et al. Japanese and Korean voice search, 2012, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[9] Navdeep Jaitly, et al. Towards End-To-End Speech Recognition with Recurrent Neural Networks, 2014, ICML.
[10] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res..
[11] Yoshua Bengio, et al. End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results, 2014, ArXiv.
[12] C. Moseley, et al. Atlas Of The World's Languages In Danger, 2015.
[13] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[14] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[15] Quoc V. Le, et al. Listen, Attend and Spell, 2015, ArXiv.
[16] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.
[17] Yoshua Bengio, et al. End-to-end attention-based large vocabulary speech recognition, 2015, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[18] Yi He, et al. Frame Stacking and Retaining for Recurrent Neural Network Acoustic Model, 2017, ArXiv.
[19] Shinji Watanabe, et al. Joint CTC-attention based end-to-end speech recognition using multi-task learning, 2016, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[20] Akiko Aizawa, et al. Universal Dependencies for Ainu, 2018, LREC.
[21] Tara N. Sainath, et al. Multilingual Speech Recognition with a Single End-to-End Model, 2017, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[22] Shinji Watanabe, et al. Multilingual Sequence-to-Sequence Speech Recognition: Architecture, Transfer Learning, and Language Modeling, 2018, 2018 IEEE Spoken Language Technology Workshop (SLT).
[23] Yifan Gong, et al. Advancing Acoustic-to-Word CTC Model, 2018, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[24] Yiming Wang, et al. Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks, 2018, INTERSPEECH.
[25] Taku Kudo, et al. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing, 2018, EMNLP.
[26] David Chiang, et al. Tied Multitask Learning for Neural Speech Translation, 2018, NAACL.
[27] Tara N. Sainath, et al. State-of-the-Art Speech Recognition with Sequence-to-Sequence Models, 2017, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[28] Florian Metze, et al. Hierarchical Multi Task Learning With CTC, 2018, ArXiv.
[29] Taku Kudo, et al. Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates, 2018, ACL.
[30] Florian Metze, et al. Hierarchical Multitask Learning With CTC, 2018, 2018 IEEE Spoken Language Technology Workshop (SLT).
[31] Patrick Nguyen, et al. Model Unit Exploration for Sequence-to-Sequence Speech Recognition, 2019, ArXiv.
[32] Hermann Ney, et al. RWTH ASR Systems for LibriSpeech: Hybrid vs Attention - w/o Data Augmentation, 2019, INTERSPEECH.
[33] Michael I. Jordan, et al. How Does Learning Rate Decay Help Modern Neural Networks?, 2019.
[34] Michal Ptaszynski, et al. Applying Support Vector Machines to POS tagging of the Ainu Language, 2019, Proceedings of the Workshop on Computational Methods for Endangered Languages.
[35] Kyu J. Han, et al. State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention with Dilated 1D Convolutions, 2019, 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).