DeepDeSRT: Deep Learning for Detection and Structure Recognition of Tables in Document Images
This paper presents a novel end-to-end system for table understanding in document images called DeepDeSRT. The contribution of DeepDeSRT is two-fold. First, it presents a deep learning-based solution for table detection in document images. Second, it proposes a novel deep learning-based approach for table structure recognition, i.e. identifying rows, columns, and cell positions in the detected tables. In contrast to existing rule-based methods, which rely on heuristics or additional PDF metadata (such as print instructions, character bounding boxes, or line segments), the presented system is data-driven and needs no heuristics or metadata to detect or recognize tabular structures in document images. Furthermore, in contrast to most existing table detection and structure recognition methods, which are applicable only to PDFs, DeepDeSRT processes document images, which makes it equally suitable for born-digital PDFs (which can automatically be converted into images) and for harder inputs such as scanned documents. To gauge the performance of DeepDeSRT, the system is evaluated on the publicly available ICDAR 2013 table competition dataset, which contains 67 documents with 238 pages overall. Evaluation results reveal that DeepDeSRT outperforms state-of-the-art methods and achieves F1-measures of 96.77% for table detection and 91.44% for table structure recognition. Additionally, DeepDeSRT is evaluated on a closed dataset from a real use case of a major European aviation company, comprising documents that differ substantially from those in ICDAR 2013. Tested on a randomly selected sample from this dataset, DeepDeSRT achieves high table detection accuracy, which demonstrates the sound generalization capabilities of the system.
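The abstract describes table detection as fine-tuning a deep object detector on rendered page images. The sketch below is not the authors' code or configuration; it assumes PyTorch with a recent torchvision and simply illustrates the general recipe of adapting an off-the-shelf Faster R-CNN to a single "table" class. The function name and the confidence threshold are illustrative choices.

```python
# Minimal sketch (assumed setup, not DeepDeSRT's actual pipeline):
# fine-tune a torchvision Faster R-CNN so it predicts one foreground class, "table".
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor


def build_table_detector(num_classes: int = 2) -> torch.nn.Module:
    """Faster R-CNN with two classes: background and 'table'."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head so it outputs scores/boxes for our two classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


# Inference on one page rendered as a (3, H, W) float tensor in [0, 1].
model = build_table_detector().eval()
with torch.no_grad():
    page = torch.rand(3, 1000, 750)              # placeholder for a rendered page image
    detections = model([page])[0]                # dict with "boxes", "labels", "scores"
    table_boxes = detections["boxes"][detections["scores"] > 0.9]
```

In practice the model would be fine-tuned on document images with table bounding-box annotations before such thresholded predictions are useful; the training loop is omitted here.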
Global Table Extractor (GTE): A Framework for Joint Table Identification and Cell Structure Recognition Using Visual Context
Documents are often used for knowledge sharing and preservation in business and science, and the tables within them capture much of the critical data. Unfortunately, most documents are stored and distributed as PDFs or scanned images, which fail to preserve logical table structure. Recent vision-based deep learning approaches have been proposed to address this gap, but most still cannot achieve state-of-the-art results. We present Global Table Extractor (GTE), a vision-guided systematic framework for joint table detection and cell structure recognition, which can be built on top of any object detection model. With GTE-Table, we introduce a new penalty based on the natural cell containment constraint of tables to train our table network, aided by cell location predictions. GTE-Cell is a new hierarchical cell detection network that leverages table styles. Further, we design a method to automatically label table and cell structure in existing documents to cheaply create a large corpus of training and test data. We use this to enhance PubTabNet with cell labels and to create FinTabNet, yielding real-world, complex scientific and financial datasets with detailed table structure annotations for training and testing structure recognition. Our framework surpasses previous state-of-the-art results on the ICDAR 2013 and ICDAR 2019 table competitions in both table detection and cell structure recognition. Further experiments demonstrate a greater than 45% improvement in cell structure recognition over a vanilla RetinaNet object detection model on our new out-of-domain FinTabNet.
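The GTE abstract mentions a penalty derived from the natural cell containment constraint: predicted cells should lie inside some predicted table region. The sketch below is a hypothetical rendering of that idea, not the formulation from the GTE paper; the function name, tensor shapes, and the choice to penalise the mean uncovered cell area are all assumptions.

```python
# Hypothetical cell-containment penalty in the spirit of GTE-Table (assumed form,
# not the paper's loss): cell area falling outside every predicted table box is penalised.
import torch


def containment_penalty(table_boxes: torch.Tensor, cell_boxes: torch.Tensor) -> torch.Tensor:
    """table_boxes: (T, 4), cell_boxes: (C, 4), both as (x1, y1, x2, y2)."""
    # Pairwise intersection between every cell box and every table box.
    x1 = torch.max(cell_boxes[:, None, 0], table_boxes[None, :, 0])
    y1 = torch.max(cell_boxes[:, None, 1], table_boxes[None, :, 1])
    x2 = torch.min(cell_boxes[:, None, 2], table_boxes[None, :, 2])
    y2 = torch.min(cell_boxes[:, None, 3], table_boxes[None, :, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)              # (C, T)

    cell_area = ((cell_boxes[:, 2] - cell_boxes[:, 0]) *
                 (cell_boxes[:, 3] - cell_boxes[:, 1])).clamp(min=1e-6)  # (C,)

    # Fraction of each cell covered by its best-matching table box.
    coverage = (inter / cell_area[:, None]).max(dim=1).values            # (C,)

    # 0 when every cell is fully contained in some table box,
    # approaching 1 when cells lie entirely outside all table boxes.
    return (1.0 - coverage).mean()
```

Such a term could be added to a detector's usual classification and regression losses so that table predictions are encouraged to enclose the cells predicted for the same page, which is the coupling the abstract alludes to.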