Training LSTM-RNN with Imperfect Transcription: Limitations and Outcomes
[1] Rob Fergus et al. Learning from Noisy Labels with Deep Neural Networks, 2014, ICLR.
[2] Jürgen Schmidhuber et al. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks, 2006, ICML.
[3] Nir Shavit et al. Deep Learning is Robust to Massive Label Noise, 2017, arXiv.
[4] Andreas Dengel et al. OCRoRACT: A Sequence Learning OCR System Trained on Isolated Characters, 2016, 12th IAPR Workshop on Document Analysis Systems (DAS).
[5] Jürgen Schmidhuber et al. Long Short-Term Memory, 1997, Neural Computation.
[6] Klaus U. Schulz et al. Automatic quality evaluation and (semi-)automatic improvement of mixed models for OCR on historical documents, 2016, arXiv.
[7] Jürgen Schmidhuber et al. Learning to Forget: Continual Prediction with LSTM, 2000, Neural Computation.
[8] Didier Stricker et al. A comparison of 1D and 2D LSTM architectures for the recognition of handwritten Arabic, 2015, Electronic Imaging.
[9] Andreas Dengel et al. anyOCR: A sequence learning based OCR system for unlabeled historical documents, 2016, 23rd International Conference on Pattern Recognition (ICPR).
[10] Jürgen Schmidhuber et al. Biologically Plausible Speech Recognition with LSTM Neural Nets, 2004, BioADIT.
[11] Thomas M. Breuel et al. High-Performance OCR for Printed English and Fraktur Using LSTM Networks, 2013, 12th International Conference on Document Analysis and Recognition.