No-Prop-fast - A High-Speed Multilayer Neural Network Learning Algorithm: MNIST Benchmark and Eye-Tracking Data Classification

While the No-Prop (no backpropagation) algorithm trains the output layer of a feed-forward network with the iterative delta rule, No-Prop-fast instead uses fast linear regression learning based on the closed-form Wiener-Hopf solution. On large datasets such as the MNIST benchmark, learning speeds ten times faster than one of the fastest known backpropagation algorithms can be achieved. Additionally, with minimal pre-processing, the plain feed-forward network trained with No-Prop-fast can distinguish gaze movements on cartoons with and without text, as well as age-specific attention shifts between text and picture areas.

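The abstract describes a single-hidden-layer network whose hidden weights stay fixed at random values while only the output weights are fitted in closed form by regularized least squares. The sketch below is a minimal NumPy illustration of that training scheme under those assumptions, not the authors' implementation; the function names, the tanh non-linearity, the hidden-layer size and the ridge term are all choices made for this example.

import numpy as np

def train_noprop_fast(X, T, n_hidden=500, ridge=1e-3, seed=0):
    """Sketch of the training scheme: X is (n_samples, n_inputs), T is a
    (n_samples, n_outputs) target matrix (e.g. one-hot class labels)."""
    rng = np.random.default_rng(seed)
    # Hidden weights (including a bias column on the input) stay random and fixed.
    W_hidden = rng.uniform(-1.0, 1.0, size=(X.shape[1] + 1, n_hidden))
    H = np.tanh(np.hstack([X, np.ones((X.shape[0], 1))]) @ W_hidden)
    # Output weights from a regularized least-squares (Wiener-Hopf / normal
    # equations) solve instead of iterative delta-rule updates:
    # W_out = (H^T H + ridge * I)^-1 H^T T
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W_hidden, W_out

def predict(X, W_hidden, W_out):
    H = np.tanh(np.hstack([X, np.ones((X.shape[0], 1))]) @ W_hidden)
    return H @ W_out

if __name__ == "__main__":
    # Toy usage with stand-in data shaped like flattened MNIST digits.
    X = np.random.rand(1000, 784)
    y = np.random.randint(0, 10, 1000)
    T = np.eye(10)[y]                      # one-hot target matrix
    W_h, W_o = train_noprop_fast(X, T)
    pred = predict(X, W_h, W_o).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())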