On Kernel Method–Based Connectionist Models and Supervised Deep Learning Without Backpropagation
[1] Manik Varma,et al. More generality in efficient multiple kernel learning , 2009, ICML '09.
[2] Yichuan Tang,et al. Deep Learning using Linear Support Vector Machines , 2013, 1306.0239.
[3] N. Cristianini,et al. On Kernel-Target Alignment , 2001, NIPS.
[4] Jooyoung Park,et al. Universal Approximation Using Radial-Basis-Function Networks , 1991, Neural Computation.
[5] Leon A. Gatys,et al. A Neural Algorithm of Artistic Style , 2015, ArXiv.
[6] Zenglin Xu,et al. An Extended Level Method for Efficient Multiple Kernel Learning , 2008, NIPS.
[7] Ivor W. Tsang,et al. Two-Layer Multiple Kernel Learning , 2011, AISTATS.
[8] Matthias W. Seeger,et al. Using the Nyström Method to Speed Up Kernel Machines , 2000, NIPS.
[9] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[10] D. Broomhead,et al. Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks , 1988 .
[11] Geoffrey E. Hinton,et al. Deep Learning , 2015, Nature.
[12] Geoffrey E. Hinton. Reducing the Dimensionality of Data with Neural Networks , 2008 .
[13] Pascal Vincent,et al. Representation Learning: A Review and New Perspectives , 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[14] Yuesheng Xu,et al. Universal Kernels , 2006, J. Mach. Learn. Res..
[15] Thomas Hofmann,et al. Greedy Layer-Wise Training of Deep Networks , 2007 .
[16] Nello Cristianini,et al. Learning the Kernel Matrix with Semidefinite Programming , 2002, J. Mach. Learn. Res..
[17] Matt J. Kusner,et al. Deep Manifold Traversal: Changing Labels with Convolutional Features , 2015, ArXiv.
[18] Yoshua Bengio,et al. Difference Target Propagation , 2014, ECML/PKDD.
[19] Simone Scardapane,et al. Kafnets: kernel-based non-parametric activation functions for neural networks , 2017, Neural Networks.
[20] Tomaso Poggio,et al. Learning Functions: When Is Deep Better Than Shallow , 2016, 1603.00988.
[21] Alex Graves,et al. Decoupled Neural Interfaces using Synthetic Gradients , 2016, ICML.
[22] Peter L. Bartlett,et al. Rademacher and Gaussian Complexities: Risk Bounds and Structural Results , 2003, J. Mach. Learn. Res..
[23] Yee Whye Teh,et al. A Fast Learning Algorithm for Deep Belief Nets , 2006, Neural Computation.
[24] Benjamin Schrauwen,et al. Recurrent Kernel Machines: Computing with Infinite Echo State Networks , 2012, Neural Computation.
[25] Yann LeCun,et al. Large-scale Learning with SVM and Convolutional for Generic Object Categorization , 2006, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06).
[26] A. Atiya,et al. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond , 2005, IEEE Transactions on Neural Networks.
[27] Geoffrey E. Hinton,et al. Learning representations by back-propagating errors , 1986, Nature.
[28] Benjamin Recht,et al. Random Features for Large-Scale Kernel Machines , 2007, NIPS.
[29] Lawrence K. Saul,et al. Kernel Methods for Deep Learning , 2009, NIPS.
[30] Johan A. K. Suykens,et al. Deep Restricted Kernel Machines Using Conjugate Feature Duality , 2017, Neural Computation.
[31] Andrew Gordon Wilson,et al. Deep Kernel Learning , 2015, AISTATS.
[32] Joachim M. Buhmann,et al. Kickback Cuts Backprop's Red-Tape: Biologically Plausible Credit Assignment in Neural Networks , 2014, AAAI.
[33] Pascal Vincent,et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion , 2010, J. Mach. Learn. Res..
[34] Amparo Alonso-Betanzos,et al. Linear-least-squares initialization of multilayer perceptrons through backpropagation of the desired response , 2005, IEEE Transactions on Neural Networks.
[35] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[36] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[37] Jascha Sohl-Dickstein,et al. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability , 2017, NIPS.
[38] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[39] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[40] Yunmei Chen,et al. Learning Multiple Levels of Representations with Kernel Machines , 2018, ArXiv.
[41] Yoshua Bengio,et al. How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation , 2014, ArXiv.
[42] Nello Cristianini,et al. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods , 2000 .
[43] Ji Feng,et al. Deep Forest: Towards An Alternative to Deep Neural Networks , 2017, IJCAI.
[44] Roland Vollgraf,et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms , 2017, ArXiv.
[45] Johan A. K. Suykens,et al. Training multilayer perceptron classifiers based on a modified support vector method , 1999, IEEE Trans. Neural Networks.
[46] Tie-Yan Liu,et al. On the Depth of Deep Neural Networks: A Theoretical View , 2015, AAAI.
[47] Christian Lebiere,et al. The Cascade-Correlation Learning Architecture , 1989, NIPS.
[48] Yoshua Bengio,et al. An empirical evaluation of deep architectures on problems with many factors of variation , 2007, ICML '07.
[49] Vladimir N. Vapnik,et al. The Nature of Statistical Learning Theory , 2000, Statistics for Engineering and Information Science.
[50] N. Aronszajn. Theory of Reproducing Kernels , 1950 .
[51] Shang-Liang Chen,et al. Orthogonal least squares learning algorithm for radial basis function networks , 1991, IEEE Trans. Neural Networks.
[52] Alexander Zien,et al. lp-Norm Multiple Kernel Learning , 2011, J. Mach. Learn. Res..
[53] Ethem Alpaydin,et al. Multiple Kernel Learning Algorithms , 2011, J. Mach. Learn. Res..
[54] Yoshua Bengio,et al. Deep Sparse Rectifier Neural Networks , 2011, AISTATS.
[55] Geoffrey E. Hinton,et al. Reducing the Dimensionality of Data with Neural Networks , 2006, Science.
[56] Michael I. Jordan,et al. Multiple kernel learning, conic duality, and the SMO algorithm , 2004, ICML.
[57] G. Pisier. The volume of convex bodies and Banach space geometry , 1989 .
[58] Corinna Cortes,et al. Support-Vector Networks , 1995, Machine Learning.
[59] Miguel Á. Carreira-Perpiñán,et al. Distributed optimization of deeply nested systems , 2012, AISTATS.
[60] Yoshua Bengio,et al. Learning Deep Architectures for AI , 2007, Found. Trends Mach. Learn..
[61] Cordelia Schmid,et al. Convolutional Kernel Networks , 2014, NIPS.
[62] Sebastian Nowozin,et al. Infinite Kernel Learning , 2008, NIPS.
[63] M. Kloft,et al. lp-Norm Multiple Kernel Learning , 2011 .
[64] Mandar Kulkarni,et al. Layer-wise training of deep networks using kernel similarity , 2017, ArXiv.
[65] Geoffrey E. Hinton,et al. Distributed Representations , 1986, The Philosophy of Artificial Intelligence.
[66] Jianxin Li,et al. Stacked Kernel Network , 2017, ArXiv.
[67] Bernhard Schölkopf,et al. A Generalized Representer Theorem , 2001, COLT/EuroCOLT.
[68] W. Pitts,et al. A Logical Calculus of the Ideas Immanent in Nervous Activity (1943) , 2021, Ideas That Created the Future.