Hierarchical Refined Attention for Scene Text Recognition

Recent years have witnessed increasing interest in scene text recognition (STR). Current state-of-the-art (SOTA) approaches adopt a sequence-to-sequence (Seq2Seq) structure to leverage the mutual interaction between image features and textual information. However, these methods still struggle to recognize text of arbitrary shapes. The leading cause is the information loss and noise introduced when two-dimensional image features are directly compressed into one-dimensional vectors. This paper proposes a novel framework named the hierarchical refined attention network (HRAN) for STR. HRAN obtains refined representations through hierarchical attention, which localizes the precise region of the current character from a two-dimensional perspective. Two novel co-attention mechanisms, stacked and guided co-attention, explicitly model the dependency between spatial-aware contextual features and region-aware visual features without extra character-level annotations. Experiments show that HRAN achieves highly competitive performance on both regular and irregular text compared with SOTA models.
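To make the two-dimensional attention idea concrete, the sketch below shows a generic 2D attention glimpse over a convolutional feature map, conditioned on a decoder state; this is a minimal illustration under assumed names (Attention2D, feat_dim, hidden_dim, attn_dim), not HRAN's actual implementation.

```python
# Illustrative sketch only: generic 2D attention over a CNN feature map,
# conditioned on a decoder hidden state. All module and parameter names
# are assumptions for exposition, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Attention2D(nn.Module):
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.proj_feat = nn.Conv2d(feat_dim, attn_dim, kernel_size=1)
        self.proj_hidden = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Conv2d(attn_dim, 1, kernel_size=1)

    def forward(self, feat_map, hidden):
        # feat_map: (B, C, H, W) visual features; hidden: (B, hidden_dim)
        f = self.proj_feat(feat_map)                        # (B, A, H, W)
        h = self.proj_hidden(hidden)[:, :, None, None]      # (B, A, 1, 1)
        e = self.score(torch.tanh(f + h))                   # (B, 1, H, W) scores
        alpha = F.softmax(e.flatten(2), dim=-1).view_as(e)  # attention map over H*W
        glimpse = (alpha * feat_map).flatten(2).sum(-1)     # (B, C) per-step context
        return glimpse, alpha
```

Attending directly over the H x W grid, rather than over a feature sequence collapsed to one dimension, is what avoids the compression loss the abstract points to; the glimpse vector can then interact with contextual features through a co-attention step.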