Continuous Pseudo-Labeling from the Start
[1] Takaaki Hori et al. Momentum Pseudo-Labeling: Semi-Supervised ASR With Continuously Improving Pseudo-Labels. IEEE Journal of Selected Topics in Signal Processing, 2022.
[2] Z. Tu et al. Semi-supervised Vision Transformers at Scale. NeurIPS, 2022.
[3] T. Shinozaki et al. Censer: Curriculum Semi-supervised Learning for Speech Recognition Based on Self-supervised Pre-training. Interspeech, 2022.
[4] Michael Auli et al. data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language. ICML, 2022.
[5] Ross B. Girshick et al. Masked Autoencoders Are Scalable Vision Learners. CVPR, 2022.
[6] Takaaki Hori et al. Advancing Momentum Pseudo-Labeling with Conformer and Initialization Strategy. ICASSP, 2022.
[7] Paul Michel et al. Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative. ICLR, 2022.
[8] T. Shinozaki et al. FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling. NeurIPS, 2021.
[9] Chung-Cheng Chiu et al. w2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training. ASRU, 2021.
[10] Takaaki Hori et al. Momentum Pseudo-Labeling for Semi-Supervised Speech Recognition. Interspeech, 2021.
[11] Geoffrey Zweig et al. Kaizen: Continuously Improving Teacher Using Exponential Moving Average for Semi-Supervised Speech Recognition. ASRU, 2021.
[12] Gabriel Synnaeve et al. CAPE: Encoding Relative Positions with Continuous Augmented Positional Embeddings. NeurIPS, 2021.
[13] Ruslan Salakhutdinov et al. HuBERT: How Much Can a Bad Teacher Benefit ASR Pre-Training? ICASSP, 2021.
[14] Julien Mairal et al. Emerging Properties in Self-Supervised Vision Transformers. ICCV, 2021.
[15] Alec Radford et al. Zero-Shot Text-to-Image Generation. ICML, 2021.
[16] Ronan Collobert et al. Joint Masked CPC and CTC Training for ASR. ICASSP, 2021.
[17] Ronan Collobert et al. slimIPL: Language-Model-Free Iterative Pseudo-Labeling. Interspeech, 2021.
[18] Quoc V. Le et al. Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition. arXiv preprint, 2020.
[19] Abdel-rahman Mohamed et al. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. NeurIPS, 2020.
[20] Mark Chen et al. Language Models are Few-Shot Learners. NeurIPS, 2020.
[21] Quoc V. Le et al. Improved Noisy Student Training for Automatic Speech Recognition. Interspeech, 2020.
[22] Gabriel Synnaeve et al. Iterative Pseudo-Labeling for Speech Recognition. Interspeech, 2020.
[23] Geoffrey E. Hinton et al. A Simple Framework for Contrastive Learning of Visual Representations. ICML, 2020.
[24] Chao Wang et al. Semi-supervised ASR by End-to-end Self-training. Interspeech, 2020.
[25] David Berthelot et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. NeurIPS, 2020.
[26] Abdel-rahman Mohamed et al. Libri-Light: A Benchmark for ASR with Limited or No Supervision. ICASSP, 2020.
[27] Edouard Grave et al. Reducing Transformer Depth on Demand with Structured Dropout. ICLR, 2020.
[28] Awni Y. Hannun et al. Self-Training for End-to-End Speech Recognition. ICASSP, 2020.
[29] Edouard Grave et al. End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures. arXiv preprint, 2019.
[30] David Berthelot et al. MixMatch: A Holistic Approach to Semi-Supervised Learning. NeurIPS, 2019.
[31] Quoc V. Le et al. SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition. Interspeech, 2019.
[32] Ashish Vaswani et al. Self-Attention with Relative Position Representations. NAACL, 2018.
[33] Lukasz Kaiser et al. Attention Is All You Need. NIPS, 2017.
[34] Timo Aila et al. Temporal Ensembling for Semi-Supervised Learning. ICLR, 2017.
[35] Tolga Tasdizen et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning. NIPS, 2016.
[36] Sanjeev Khudanpur et al. Librispeech: An ASR corpus based on public domain audio books. ICASSP, 2015.
[37] Dong-Hyun Lee. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks. ICML Workshop on Challenges in Representation Learning, 2013.
[38] Yoram Singer et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 2011.
[39] Jürgen Schmidhuber et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. ICML, 2006.
[40] H. J. Scudder. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory, 1965.