Modeling response properties of V2 neurons using a hierarchical K-means model

Many computational models have been proposed for interpreting the response properties of neurons in the primary visual cortex (V1), but relatively few address neurons beyond V1. Recently, the sparse deep belief network (DBN) was shown to reproduce some response properties of neurons in the secondary visual cortex (V2) when trained on natural images. In this paper, by investigating the key factors behind the success of the sparse DBN, we propose a hierarchical model based on a simple algorithm, K-means, which can be realized by competitive Hebbian learning. The resulting model exhibits some response properties of V2 neurons, and it is more biologically plausible and computationally efficient than the sparse DBN.
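The abstract's core idea, K-means realized by competitive Hebbian learning, can be sketched as an online winner-take-all update: each input activates the nearest centroid, and only that winner's weight vector moves toward the input. The function below is a minimal illustration of this generic scheme, not the paper's actual hierarchical model; all names and parameters (`lr`, `epochs`) are assumptions for the sketch.

```python
import numpy as np

def competitive_kmeans(X, k, lr=0.1, epochs=10, seed=0):
    """Online K-means via winner-take-all competitive Hebbian learning.

    For each input pattern, the closest centroid ("winner") is selected,
    and only that centroid is nudged toward the input. This is the
    Hebbian-style counterpart of the batch K-means centroid update.
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen input patterns.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            # Winner-take-all competition: nearest centroid wins.
            winner = np.argmin(np.linalg.norm(centroids - x, axis=1))
            # Move only the winner toward the input (competitive Hebbian rule).
            centroids[winner] += lr * (x - centroids[winner])
    return centroids
```

Stacking such layers, with each layer clustering the outputs of the one below, yields the kind of hierarchical model the abstract describes.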
