A Performance Comparison Study of LSTM and GRU Neural Networks: The Yelp Review Dataset as an Example

Long short-term memory networks (LSTM) and gated recurrent unit networks (GRU) are two popular variants of recurrent neural networks (RNN) with long-term memory. This study compares the performance of these two deep learning models along two experimental dimensions, training-set size and text length (long vs. short), using five quantitative indicators: running speed, accuracy, recall, F1 score, and AUC. The corpus is the dataset officially released by Yelp Inc. In terms of training speed, GRU is 29.29% faster than LSTM on the same dataset. In terms of performance, GRU surpasses LSTM in the scenario of long texts with a small training set, but is inferior to LSTM in the other scenarios. Considering performance and computing cost together, GRU's performance-cost ratio exceeds that of LSTM by 23.45% in accuracy, 27.69% in recall, and 26.95% in F1 score.
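The speed gap reported above is consistent with the architectures themselves: an LSTM cell uses four gated transformations (input, forget, output, and candidate), while a GRU cell uses only three (update, reset, and candidate), so a GRU layer carries roughly 25% fewer weights. The sketch below illustrates this with the textbook parameter-count formulas; the embedding size (300) and hidden size (128) are illustrative assumptions, not values taken from the study.

```python
def lstm_params(input_dim: int, hidden_dim: int) -> int:
    """Weights in one LSTM layer: 4 gates, each with an
    input-to-hidden matrix, hidden-to-hidden matrix, and bias."""
    return 4 * (hidden_dim * (input_dim + hidden_dim) + hidden_dim)

def gru_params(input_dim: int, hidden_dim: int) -> int:
    """Weights in one GRU layer: 3 gates with the same shapes
    (classic formulation with a single bias per gate)."""
    return 3 * (hidden_dim * (input_dim + hidden_dim) + hidden_dim)

# Hypothetical sizes for a word-embedding text classifier.
x_dim, h_dim = 300, 128
lstm_n = lstm_params(x_dim, h_dim)   # 219,648 weights
gru_n = gru_params(x_dim, h_dim)     # 164,736 weights

# GRU has exactly 3/4 of the LSTM's weights, i.e. a 25% reduction,
# which helps explain (but does not exactly equal) the ~29% speedup.
print(lstm_n, gru_n, 1 - gru_n / lstm_n)
```

The 25% parameter reduction is a fixed ratio of the two cell designs; the measured 29.29% wall-clock difference also reflects implementation and memory-bandwidth effects beyond the raw weight count.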