Line attractor dynamics in recurrent networks for sentiment classification

Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data, yet despite their widespread use, how they solve complex problems remains poorly understood. Here, we characterize how popular off-the-shelf architectures (including LSTMs, GRUs, and vanilla RNNs) perform document-level sentiment classification. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that all of these architectures converge to highly interpretable, low-dimensional representations. We identify a simple mechanism, integration along an approximate line attractor, and find that it is present across all of the architectures studied. Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of RNNs.
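
To make the mechanism concrete, the following is a minimal, hypothetical sketch (in NumPy; not the models or code from this work): a toy linear RNN whose recurrent matrix has a single eigenvalue near 1, giving an approximate line attractor, while all other modes decay quickly. Per-token valence inputs accumulate along the slow direction, and sentiment is read out as the projection of the final state onto that same direction. All variable names and values here are illustrative assumptions.

```python
# Illustrative sketch only: a hand-built linear RNN that integrates token
# valence along an approximate line attractor. Not the trained networks
# analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 16  # hidden-state dimension

# Recurrent matrix with one eigenvalue ~1 (the line-attractor direction)
# and the rest well inside the unit circle (fast-decaying modes).
q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigvals = np.concatenate(([0.999], rng.uniform(0.2, 0.6, size=n - 1)))
W_rec = q @ np.diag(eigvals) @ q.T

line_direction = q[:, 0]          # slow (integration) direction
w_in = line_direction.copy()      # inputs push the state along the line
readout = line_direction          # sentiment = projection onto the line

def run_rnn(token_valences):
    """Integrate per-token valence (+1 positive, -1 negative, 0 neutral)."""
    h = np.zeros(n)
    for v in token_valences:
        h = W_rec @ h + w_in * v  # linear update; no saturating nonlinearity
    return float(readout @ h)     # > 0 suggests positive sentiment

print(run_rnn([+1, 0, +1, +1, 0]))   # mostly positive tokens -> positive score
print(run_rnn([-1, -1, 0, -1, 0]))   # mostly negative tokens -> negative score
```

In the trained networks studied here, the claim is that an analogous slow direction emerges from learning rather than being constructed by hand, with the remaining dynamics collapsing onto this low-dimensional structure.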