Ivan Oseledets | Valentin Khrulkov | Alexander Novikov
[1] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Richard A. Harshman, et al. Foundations of the PARAFAC procedure: Models and conditions for an "explanatory" multi-modal factor analysis, 1970.
[3] W. Hackbusch. Tensor Spaces and Numerical Tensor Calculus, 2012, Springer Series in Computational Mathematics.
[4] Razvan Pascanu, et al. On the Number of Linear Regions of Deep Neural Networks, 2014, NIPS.
[5] Jürgen Schmidhuber, et al. Learning to forget: continual prediction with LSTM, 1999.
[6] Lawrence D. Jackel, et al. Handwritten Digit Recognition with a Back-Propagation Network, 1989, NIPS.
[7] George Cybenko. Approximation by superpositions of a sigmoidal function, 1989, Math. Control. Signals Syst.
[8] Lukáš Burget, et al. Extensions of recurrent neural network language model, 2011, 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[9] Yoshua Bengio, et al. Shallow vs. Deep Sum-Product Networks, 2011, NIPS.
[10] Mateusz Michałek, et al. The Hackbusch conjecture on tensor formats, 2015.
[11] James Martens, et al. On the Expressive Efficiency of Sum Product Networks, 2014, ArXiv.
[12] Nadav Cohen, et al. On the Expressive Power of Deep Learning: A Tensor Analysis, 2015, COLT.
[13] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[14] Geoffrey E. Hinton, et al. On the importance of initialization and momentum in deep learning, 2013, ICML.
[15] Amnon Shashua, et al. Convolutional Rectifier Networks as Generalized Tensor Decompositions, 2016, ICML.
[16] Ying Zhang, et al. On Multiplicative Integration with Recurrent Neural Networks, 2016, NIPS.
[17] Yisong Yue, et al. Long-term Forecasting using Higher Order Tensor RNNs, 2017.
[18] Yoshua Bengio, et al. Convolutional networks for images, speech, and time series, 1998.
[19] Surya Ganguli, et al. On the Expressive Power of Deep Neural Networks, 2016, ICML.
[20] Lars Grasedyck. Hierarchical Singular Value Decomposition of Tensors, 2010, SIAM J. Matrix Anal. Appl.
[21] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[22] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009.
[23] Geoffrey E. Hinton, et al. Speech recognition with deep recurrent neural networks, 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[24] David J. Schwab, et al. Supervised Learning with Tensor Networks, 2016, NIPS.
[25] Wolfgang Hackbusch, et al. An Introduction to Hierarchical (H-) Rank and TT-Rank of Tensors with Examples, 2011, Comput. Methods Appl. Math.
[26] Naftali Tishby, et al. Opening the Black Box of Deep Neural Networks via Information, 2017, ArXiv.
[27] Ivan Oseledets. Tensor-Train Decomposition, 2011, SIAM J. Sci. Comput.
[28] J. Chang, et al. Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition, 1970.
[29] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Ohad Shamir, et al. The Power of Depth for Feedforward Neural Networks, 2015, COLT.