Extensions and Limitations of the Neural GPU

The Neural GPU is a recent model that can learn algorithms such as multi-digit binary addition and binary multiplication in a way that generalizes to inputs of arbitrary length. We show that there are two simple ways to improve the performance of the Neural GPU: by carefully designing a curriculum, and by increasing the model size. The latter requires a memory-efficient implementation, as a naive implementation of the Neural GPU is memory intensive. We find that these techniques increase the set of algorithmic problems that the Neural GPU can solve: we have been able to learn all of the arithmetic operations (and to generalize to arbitrarily long numbers) when the arguments are given in decimal representation (which, surprisingly, had not been possible before). We have also been able to train the Neural GPU to evaluate long arithmetic expressions with multiple operands that require respecting the precedence order of the operators, although we have succeeded only with operands in binary representation, and not with perfect accuracy. In addition, we gain insight into the Neural GPU by investigating its failure modes. We find that Neural GPUs that correctly generalize to arbitrarily long numbers can still fail on highly symmetric, atypical inputs: for example, a Neural GPU that achieves near-perfect generalization on decimal multiplication of numbers up to 100 digits long can fail on $000000\dots002 \times 000000\dots002$ while succeeding at $2 \times 2$. These failure modes are reminiscent of adversarial examples.

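To make the described failure mode concrete, the following is a minimal, hypothetical sketch (not taken from the paper's code) of how one might construct the zero-padded probe inputs contrasted in the abstract: the same product queried with and without leading-zero padding. The helper names and the 100-digit width are illustrative assumptions.

```python
# Hypothetical probe generator for the failure mode described above:
# the same multiplication presented plainly and with leading-zero padding.

def pad_decimal(n: int, width: int) -> str:
    """Left-pad the decimal representation of n with zeros to a fixed width."""
    return str(n).zfill(width)

def make_probe_pair(a: int, b: int, width: int = 100):
    """Return the plain and the zero-padded form of the same multiplication query."""
    plain = f"{a}*{b}"
    padded = f"{pad_decimal(a, width)}*{pad_decimal(b, width)}"
    return plain, padded

if __name__ == "__main__":
    plain, padded = make_probe_pair(2, 2)
    print(plain)   # "2*2"
    print(padded)  # "000...0002*000...0002" (100-digit operands)
    # A model that generalizes to long random operands may still answer the
    # padded query incorrectly while answering the plain query correctly.
```

Such highly symmetric inputs are rare under the random training distribution, which is one way to interpret why they can behave like adversarial examples for an otherwise well-generalizing model.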