Convolutional Neural Network with Biologically Inspired Retinal Structure

Abstract In this paper, we propose a new Convolutional Neural Network (CNN) with a biologically inspired retinal structure and an ON/OFF Rectified Linear Unit (ON/OFF ReLU). The retinal structure enhances input images through center-surround differences of green-red and blue-yellow components, producing both positive and negative features, analogous to the ON/OFF visual pathways of the retina, for a total of 12 feature channels. The same ON/OFF concept is applied to each convolutional layer of the CNN; we call this the ON/OFF ReLU. In contrast, a conventional ReLU passes only the positive features of each convolutional layer and may lose important information carried by negative features; it also loses the chance to learn when activations saturate at zero. In our proposed model, we use both positive and negative information, which makes it possible to learn from negative activations as well. We present experimental results on the CIFAR-10 dataset and on atrial fibrillation prediction for health monitoring, and show how effectively the negative information and the retinal structure improve the performance of a conventional CNN.
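The ON/OFF ReLU described above can be sketched as follows: rather than discarding negative activations, the positive part (ON pathway) and the rectified negative part (OFF pathway) are both kept and concatenated along the channel axis. This is a minimal NumPy sketch of that idea, assuming concatenation is how the two pathways are combined; the function name and axis convention are illustrative, not the authors' implementation.

```python
import numpy as np

def relu(x):
    """Conventional ReLU: passes only positive activations."""
    return np.maximum(x, 0.0)

def on_off_relu(x, axis=-1):
    """ON/OFF ReLU sketch: keep the positive activations (ON channel)
    and the rectified negative activations (OFF channel), concatenated
    along the channel axis, so sign information is not discarded."""
    on = np.maximum(x, 0.0)    # ON pathway: positive part
    off = np.maximum(-x, 0.0)  # OFF pathway: rectified negative part
    return np.concatenate([on, off], axis=axis)

x = np.array([[1.5, -2.0, 0.0]])
print(on_off_relu(x))  # ON half [1.5, 0, 0], OFF half [0, 2, 0]
```

Note that the output has twice as many channels as the input, which is consistent with the retinal preprocessing stage doubling 6 opponent-color channels into 12 ON/OFF feature channels.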
