Neural network models have been widely used in natural language processing (NLP). Recurrent neural networks (RNNs) have proved to be powerful sequence models, and the gated recurrent unit (GRU) is an RNN variant that has achieved excellent performance on NLP tasks. Nevertheless, the sparsity and high dimensionality of text data make complex semantic representations difficult to learn. To address this problem, this paper proposes a novel and efficient method for text classification, called the multi-scale convolutional attention based GRU network (MCA-GRU). In MCA-GRU, one-dimensional convolutions with dense connections extract attention signals from the text sequence, and these signals are then combined with the features of a GRU network, so that MCA-GRU captures both local phrase-level features and sequence information. Experiments on five text classification datasets show that MCA-GRU achieves performance comparable to or better than other state-of-the-art text classification methods.
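To make the described pipeline concrete, the sketch below shows one plausible reading of the architecture in PyTorch. The kernel sizes, channel and hidden dimensions, the use of a bidirectional GRU, and the fusion of the convolutional attention with the GRU output (softmax-weighted pooling) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an MCA-GRU-style classifier (assumed hyper-parameters).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MCAGRUSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, gru_hidden=128,
                 conv_channels=64, kernel_sizes=(3, 5, 7), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Multi-scale 1-D convolutions over the embedded sequence; odd kernel
        # sizes with padding k//2 keep the sequence length unchanged so the
        # branch outputs can be densely concatenated along the channel axis.
        self.convs = nn.ModuleList([
            nn.Conv1d(embed_dim, conv_channels, k, padding=k // 2)
            for k in kernel_sizes
        ])
        # Project the concatenated multi-scale features to one attention
        # score per time step.
        self.attn_proj = nn.Linear(conv_channels * len(kernel_sizes), 1)
        # Bidirectional GRU over the same embedded sequence.
        self.gru = nn.GRU(embed_dim, gru_hidden, batch_first=True,
                          bidirectional=True)
        self.classifier = nn.Linear(2 * gru_hidden, num_classes)

    def forward(self, token_ids):                          # (batch, seq_len)
        emb = self.embedding(token_ids)                    # (batch, seq_len, embed_dim)
        # Convolutional attention signals from local n-gram features.
        conv_in = emb.transpose(1, 2)                      # (batch, embed_dim, seq_len)
        conv_feats = torch.cat([F.relu(c(conv_in)) for c in self.convs], dim=1)
        scores = self.attn_proj(conv_feats.transpose(1, 2))  # (batch, seq_len, 1)
        attn = torch.softmax(scores, dim=1)
        # GRU sequence features, weighted by the attention and pooled.
        gru_out, _ = self.gru(emb)                         # (batch, seq_len, 2*gru_hidden)
        pooled = (attn * gru_out).sum(dim=1)               # (batch, 2*gru_hidden)
        return self.classifier(pooled)


# Example: logits for a batch of two padded token-id sequences of length 40.
model = MCAGRUSketch(vocab_size=10000, num_classes=5)
logits = model(torch.randint(1, 10000, (2, 40)))           # shape (2, 5)
```

Here the channel-wise concatenation of the multi-scale convolution outputs stands in for the dense connections mentioned in the abstract; the original model's connectivity pattern may differ.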