Deep Continuous Networks

CNNs and computational models of biological vision share some fundamental principles, which has opened new avenues of research. However, fruitful cross-field work is hampered by the fact that conventional CNN architectures are based on spatially and depthwise discrete representations, which cannot accommodate certain aspects of biological complexity such as continuously varying receptive field sizes and the dynamics of neuronal responses. Here we propose deep continuous networks (DCNs), which combine spatially continuous filters with the continuous-depth framework of neural ODEs. This allows us to learn the spatial support of the filters during training and to model the continuous evolution of feature maps, linking DCNs closely to biological models. We show that DCNs are versatile and readily applicable to standard image classification and reconstruction problems, where they improve parameter and data efficiency and allow for meta-parametrization. We illustrate the biological plausibility of the scale distributions learned by DCNs and explore their performance on a neuroscientifically inspired pattern completion task. Finally, we investigate the behavior of DCNs under changes in input contrast.
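To make the two continuity ideas concrete, the following is a minimal NumPy sketch, not the paper's implementation: a Gaussian filter whose spatial support is controlled by a scale parameter sigma (which a DCN would learn by backpropagation, typically through a Gaussian-derivative basis), and a fixed-step Euler solve of a simple ODE on the feature map, standing in for the continuous-depth evolution. The function names (gaussian_kernel, blur2d, integrate) and the dynamics dF/dt = -F + w * G_sigma(F) are illustrative assumptions, not the architecture proposed in the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Sampled 1D Gaussian; its spatial support grows with sigma, which in a
    # DCN would be a learnable parameter rather than a fixed hyperparameter.
    if radius is None:
        radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def blur2d(F, sigma):
    # Separable 2D Gaussian filtering: one 1D convolution along each axis.
    k = gaussian_kernel(sigma)
    F = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, F)
    F = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, F)
    return F

def integrate(F0, sigma, w, t1=1.0, steps=10):
    # Fixed-step Euler solve of dF/dt = -F + w * G_sigma(F): a toy stand-in
    # for continuous-depth feature-map dynamics (a real neural ODE would use
    # an adaptive solver and a learned parametric vector field).
    F = F0.copy()
    dt = t1 / steps
    for _ in range(steps):
        F = F + dt * (-F + w * blur2d(F, sigma))
    return F
```

Because each Euler step mixes the feature map with a smoothed copy of itself, integrating a noisy input progressively suppresses high spatial frequencies; in the continuous-depth view, "depth" is the integration time t1 rather than a layer count.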
