Extensions and Limitations of the Neural GPU

The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size. The latter requires a memory-efficient implementation, as a naive implementation of the Neural GPU is memory-intensive. We find that these techniques increase the set of algorithmic problems that can be solved by the Neural GPU: we have been able to learn to perform all the arithmetic operations (and to generalize to arbitrarily long numbers) when the arguments are given in decimal representation, which, surprisingly, had not been possible before. We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence of the operators, although this has succeeded only with binary representations, and not with perfect accuracy. In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers still fail to compute the correct answer on highly symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of numbers with up to 100 digits can fail on $000000\dots002 \times 000000\dots002$ while succeeding at $2 \times 2$. These failure modes are reminiscent of adversarial examples.
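
To make the failure mode above concrete, the following is a minimal sketch (not code from the paper) of how zero-padded decimal multiplication inputs of the kind described can be generated to probe a trained model. The commented-out `predict` call is a hypothetical stand-in for whatever interface a trained Neural GPU would expose; only the input construction is shown here.

```python
# Sketch: build padded decimal multiplication test cases, contrasting a typical
# random 100-digit input with the highly symmetric "000...002 * 000...002" case.
import random

def pad_decimal(n, width):
    """Left-pad the decimal representation of n with zeros to the given width."""
    return str(n).zfill(width)

def make_multiplication_case(a, b, width):
    """Return (input_string, expected_output) for a padded decimal product."""
    x = pad_decimal(a, width)
    y = pad_decimal(b, width)
    # The product of two width-digit numbers fits in 2 * width digits.
    product = pad_decimal(a * b, 2 * width)
    return f"{x}*{y}", product

random_case = make_multiplication_case(random.randint(10**99, 10**100 - 1),
                                        random.randint(10**99, 10**100 - 1), 100)
symmetric_case = make_multiplication_case(2, 2, 100)  # "000...002*000...002"

for name, (inp, expected) in [("random", random_case), ("symmetric", symmetric_case)]:
    print(name, inp[:12] + "...", "expected (last digits):", expected[-8:])
    # predicted = predict(inp)                 # hypothetical call to the trained model
    # print("model correct:", predicted == expected)
```

Comparing the model's accuracy on these two kinds of inputs is what reveals the adversarial-example-like behavior: the random case is handled near-perfectly while the symmetric padded case can fail.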
