LayoutReader: Pre-training of Text and Layout for Reading Order Detection

Reading order detection is the cornerstone of understanding visually rich documents (e.g., receipts and forms). Unfortunately, no existing work has taken advantage of advanced deep learning models because it is too laborious to annotate a large enough dataset. We observe that the reading order of Word documents is embedded in their XML metadata; meanwhile, it is easy to convert Word documents to PDFs or images. Therefore, in an automated manner, we construct ReadingBank, a benchmark dataset that contains reading order, text, and layout information for 500,000 document images covering a wide spectrum of document types. This first-ever large-scale dataset unleashes the power of deep neural networks for reading order detection. Specifically, our proposed LayoutReader captures text and layout information for reading order prediction using a seq2seq model. In our experiments, it performs almost perfectly on reading order detection and significantly improves both open-source and commercial OCR engines at ordering the text lines in their results. We will release the dataset and model at https://aka.ms/layoutreader.
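For context on what a learned reader improves upon: OCR engines typically emit text lines in a simple top-to-bottom, left-to-right order, which breaks down on multi-column pages and forms. The sketch below is a minimal illustration of that heuristic baseline, not the LayoutReader model itself; the function name, input format `(text, x0, y0)`, and the line-grouping tolerance are illustrative assumptions.

```python
def naive_reading_order(words, line_tol=5):
    """Toy baseline: sort word boxes top-to-bottom, then left-to-right.

    `words` is a list of (text, x0, y0) tuples in page coordinates.
    Words whose y0 values differ by at most `line_tol` are grouped into
    one text line. This is the kind of rule-based ordering that a
    learned seq2seq reader outperforms on complex layouts.
    """
    # Sort by vertical position, then cluster consecutive words into lines.
    by_y = sorted(words, key=lambda w: w[2])
    lines, current = [], [by_y[0]]
    for w in by_y[1:]:
        if abs(w[2] - current[-1][2]) <= line_tol:
            current.append(w)
        else:
            lines.append(current)
            current = [w]
    lines.append(current)
    # Within each line, order words left-to-right by x0.
    ordered = []
    for line in lines:
        ordered.extend(sorted(line, key=lambda w: w[1]))
    return [w[0] for w in ordered]

words = [("world", 60, 10), ("Hello", 10, 12), ("line2", 10, 40)]
print(naive_reading_order(words))  # → ['Hello', 'world', 'line2']
```

This heuristic succeeds on single-column text like the example above, but fails on two-column layouts, where it interleaves lines from both columns; that failure mode is what motivates learning reading order from data.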
