Factorized Neural Processes for Neural Processes: $K$-Shot Prediction of Neural Responses

In recent years, artificial neural networks have achieved state-of-the-art performance for predicting the responses of neurons in the visual cortex to natural stimuli. However, they require a time-consuming parameter optimization process to accurately model the tuning function of newly observed neurons, which precludes many applications, including real-time, closed-loop experiments. We overcome this limitation by formulating the problem as $K$-shot prediction: a neuron's tuning function is inferred directly from a small set of stimulus-response pairs using a Neural Process. This required us to develop a Factorized Neural Process, which embeds the observed set into a latent space that is partitioned into the receptive field location and the tuning function properties. We show on simulated responses that the predictions and reconstructed receptive fields from the Factorized Neural Process approach the ground truth as the number of trials increases. Critically, the latent representation that summarizes a neuron's tuning function is inferred in a single, fast forward pass through the network. Finally, we validate this approach on real neural data from visual cortex and find that the predictive accuracy is comparable to, and for small $K$ even greater than, that of optimization-based approaches, while being substantially faster. We believe this novel deep learning system identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments.
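
The core architectural idea, a permutation-invariant encoder whose set-level embedding is split into a receptive-field location and a tuning-properties code, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the module names, layer sizes, and mean-pooling aggregation are our assumptions for exposition, not the paper's actual implementation.

```python
# A minimal sketch (not the authors' implementation) of a factorized
# Neural Process encoder: a set of K stimulus-response pairs is mapped,
# permutation-invariantly, to a latent code factorized into a 2D
# receptive-field location and a vector of tuning-function properties.
import torch
import torch.nn as nn

class FactorizedNeuralProcessEncoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Shared convolutional feature extractor for the stimulus images.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ELU(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ELU(),
        )
        # Per-pair embedding: spatially pooled stimulus features
        # concatenated with the scalar response on that trial.
        self.pair_mlp = nn.Sequential(
            nn.Linear(16 + 1, 64), nn.ELU(),
            nn.Linear(64, 64), nn.ELU(),
        )
        # Two heads read out the aggregated set embedding: one for the
        # receptive-field location, one for the tuning-function properties.
        self.location_head = nn.Linear(64, 2)
        self.tuning_head = nn.Linear(64, latent_dim)

    def forward(self, stimuli: torch.Tensor, responses: torch.Tensor):
        # stimuli: (K, 1, H, W); responses: (K,)
        feats = self.features(stimuli).mean(dim=(2, 3))       # (K, 16)
        pairs = torch.cat([feats, responses.unsqueeze(1)], 1) # (K, 17)
        # Mean over the K pairs makes the embedding order-invariant,
        # in the spirit of Deep Sets / Neural Process aggregation.
        embedding = self.pair_mlp(pairs).mean(dim=0)          # (64,)
        return self.location_head(embedding), self.tuning_head(embedding)

# Usage: infer a neuron's latent representation from K = 20 trials in a
# single forward pass, with no per-neuron parameter optimization.
encoder = FactorizedNeuralProcessEncoder()
stimuli = torch.randn(20, 1, 36, 64)  # 20 stimulus frames
responses = torch.rand(20)            # 20 observed responses
rf_location, tuning_latent = encoder(stimuli, responses)
```

In the paper's framing, a separate decoder would combine these two latent factors with a new stimulus to predict the neuron's response; the factorization lets the receptive-field location and the tuning properties be handled by dedicated pathways.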
