Attentional Neural Network: Feature Selection Using Cognitive Feedback

The Attentional Neural Network is a new framework that integrates top-down cognitive bias and bottom-up feature extraction in a single coherent architecture. The top-down influence is especially effective on high-noise inputs and difficult segmentation problems. The system is modular and extensible; it is easy to train and cheap to run, yet can accommodate complex behaviors. We obtain classification accuracy better than or competitive with state-of-the-art results on the MNIST variation datasets, and successfully disentangle overlaid digits. We view such a general-purpose framework as an essential foundation for a larger system emulating the cognitive abilities of the whole brain.
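The interaction described above, in which a top-down cognitive bias modulates bottom-up feature extraction, can be illustrated with a minimal sketch. This is not the paper's actual model; the dimensions, the weight matrices `W` and `U`, and the multiplicative gating scheme are all illustrative assumptions, chosen only to show how a top-down class hypothesis might iteratively suppress irrelevant bottom-up features:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: a 64-pixel input and 32 hidden features.
W = rng.normal(scale=0.1, size=(32, 64))   # bottom-up encoder weights (illustrative)
U = rng.normal(scale=0.1, size=(32, 10))   # top-down projection of a 10-class bias (illustrative)

def attend(x, class_bias, steps=3):
    """Iteratively gate bottom-up features with a top-down signal."""
    for _ in range(steps):
        h = sigmoid(W @ x)                 # bottom-up feature activations
        gate = sigmoid(U @ class_bias)     # top-down gate in (0, 1)
        h = h * gate                       # feedback damps features inconsistent with the bias
        x = sigmoid(W.T @ h)               # reconstruct a cleaned-up input estimate
    return x, h

x_noisy = rng.random(64)                   # stand-in for a noisy digit image
bias = np.zeros(10)
bias[3] = 1.0                              # hypothesize the digit "3"
x_clean, features = attend(x_noisy, bias)
```

Running the loop with two different class biases would yield two different reconstructions of the same input, which is the intuition behind disentangling overlaid digits: each hypothesis pulls out the features consistent with it.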
