Two Routes to Scalable Credit Assignment without Weight Symmetry