Nonparametric Regression on Low-Dimensional Manifolds using Deep ReLU Networks
[1] Stephen Smale,et al. Finding the Homology of Submanifolds with High Confidence from Random Samples , 2008, Discrete Comput. Geom..
[2] G. Petrova,et al. Nonlinear Approximation and (Deep) ReLU Networks , 2019, Constructive Approximation.
[3] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[4] Erik Cambria,et al. Recent Trends in Deep Learning Based Natural Language Processing , 2017, IEEE Comput. Intell. Mag..
[5] R. DeVore,et al. Optimal nonlinear approximation , 1989 .
[6] Samy Bengio,et al. Understanding deep learning requires rethinking generalization , 2016, ICLR.
[7] Alessandro Rinaldo,et al. Estimating the reach of a manifold , 2017, Electronic Journal of Statistics.
[8] Samory Kpotufe,et al. k-NN Regression Adapts to Local Intrinsic Dimension , 2011, NIPS.
[9] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models , 2013 .
[10] Quoc V. Le,et al. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks , 2019, ICML.
[11] Alexandre B. Tsybakov,et al. Introduction to Nonparametric Estimation , 2008, Springer series in statistics.
[12] J. H. Conway,et al. Sphere Packings, Lattices and Groups , 1990 .
[13] Allan Pinkus,et al. Multilayer Feedforward Networks with a Non-Polynomial Activation Function Can Approximate Any Function , 1991, Neural Networks.
[14] Jon A. Wellner,et al. Weak Convergence and Empirical Processes: With Applications to Statistics , 1996 .
[15] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[16] Trevor Darrell,et al. Fully Convolutional Networks for Semantic Segmentation , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[17] Jianqing Fan,et al. Local polynomial modelling and its applications , 1994 .
[18] Yongdai Kim,et al. Fast convergence rates of deep neural networks for classification , 2018, Neural Networks.
[19] H. N. Mhaskar,et al. Neural Networks for Optimal Approximation of Smooth and Analytic Functions , 1996, Neural Computation.
[20] Sergey Levine,et al. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates , 2016, 2017 IEEE International Conference on Robotics and Automation (ICRA).
[21] Charles K. Chui,et al. Deep Nets for Local Manifold Learning , 2016, Front. Appl. Math. Stat..
[22] Alexander Cloninger,et al. Provable approximation properties for deep neural networks , 2015, ArXiv.
[23] Geoffrey E. Hinton,et al. Speech recognition with deep recurrent neural networks , 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[24] John M. Lee. Riemannian Manifolds: An Introduction to Curvature , 1997 .
[25] Adam Krzyzak,et al. A Distribution-Free Theory of Nonparametric Regression , 2002, Springer series in statistics.
[26] Geoffrey E. Hinton,et al. Rectified Linear Units Improve Restricted Boltzmann Machines , 2010, ICML.
[27] P. Bickel,et al. Local polynomial regression on unknown manifolds , 2007, 0708.0983.
[28] Andrew R. Barron,et al. Complexity Regularization with Application to Artificial Neural Networks , 1991 .
[29] Chong Wang,et al. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin , 2015, ICML.
[30] C. Chui,et al. Approximation by ridge functions and neural networks with one hidden layer , 1992 .
[31] Jing Zhou,et al. Hate Speech Detection with Comment Embeddings , 2015, WWW.
[32] S T Roweis,et al. Nonlinear dimensionality reduction by locally linear embedding. , 2000, Science.
[33] Adam Krzyzak,et al. Adaptive regression estimation with multilayer feedforward neural networks , 2004, International Symposium on Information Theory, 2004. ISIT 2004. Proceedings..
[34] M. Kohler,et al. Nonasymptotic Bounds on the L2 Error of Neural Network Regression Estimates , 2006 .
[35] Adam Krzyżak,et al. Nonparametric Regression Based on Hierarchical Interaction Models , 2017, IEEE Transactions on Information Theory.
[36] Haipeng Shen,et al. Artificial intelligence in healthcare: past, present and future , 2017, Stroke and Vascular Neurology.
[37] J. Tenenbaum,et al. A global geometric framework for nonlinear dimensionality reduction. , 2000, Science.
[38] Stanley Osher,et al. Low Dimensional Manifold Model for Image Processing , 2017, SIAM J. Imaging Sci..
[39] Yoshua Bengio,et al. Deep Sparse Rectifier Neural Networks , 2011, AISTATS.
[40] Larry Wasserman. All of Nonparametric Statistics , 2007, Springer Texts in Statistics.
[41] R. Coifman,et al. Diffusion Wavelets , 2004 .
[42] Ken-ichi Funahashi,et al. On the approximate realization of continuous mappings by neural networks , 1989, Neural Networks.
[43] Boris Hanin,et al. Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations , 2017, Mathematics.
[44] N. Altman. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression , 1992 .
[45] Johannes Schmidt-Hieber,et al. Nonparametric regression using deep neural networks with ReLU activation function , 2017, The Annals of Statistics.
[46] Loring W. Tu,et al. An introduction to manifolds , 2007 .
[47] Sanjeev Khudanpur,et al. Librispeech: An ASR corpus based on public domain audio books , 2015, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[48] Michael Kohler,et al. Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors , 2011, Neural Networks.
[49] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[50] Taiji Suzuki,et al. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality , 2018, ICLR.
[51] Andrew R. Barron,et al. Universal approximation bounds for superpositions of a sigmoidal function , 1993, IEEE Trans. Inf. Theory.
[52] Liwei Wang,et al. The Expressive Power of Neural Networks: A View from the Width , 2017, NIPS.
[53] Geoffrey E. Hinton,et al. Reducing the Dimensionality of Data with Neural Networks , 2006, Science.
[54] B. Irie,et al. Capabilities of three-layered perceptrons , 1988, IEEE 1988 International Conference on Neural Networks.
[55] G. Wahba. Spline models for observational data , 1990 .
[56] Zuowei Shen,et al. Deep Network Approximation Characterized by Number of Neurons , 2019, Communications in Computational Physics.
[57] Ding-Xuan Zhou,et al. Universality of Deep Convolutional Neural Networks , 2018, Applied and Computational Harmonic Analysis.
[58] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[59] Tuo Zhao,et al. Efficient Approximation of Deep ReLU Networks for Functions on Low Dimensional Manifolds , 2019, NeurIPS.
[60] Vikas K. Garg,et al. Adaptivity to Local Smoothness and Dimension in Kernel Regression , 2013, NIPS.
[61] Yongdai Kim,et al. Smooth Function Approximation by Deep Neural Networks with General Activation Functions , 2019, Entropy.
[62] Yun Yang,et al. Minimax-optimal nonparametric regression in high dimensions , 2014, 1401.7278.
[63] Sanjeev Arora,et al. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization , 2018, ICML.
[64] Daniel F. McCaffrey,et al. Convergence rates for single hidden layer feedforward networks , 1994, Neural Networks.
[65] George Cybenko,et al. Approximation by superpositions of a sigmoidal function , 1989, Math. Control. Signals Syst..
[66] Enhua Wu,et al. Squeeze-and-Excitation Networks , 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[67] Guigang Zhang,et al. Deep Learning , 2016, Int. J. Semantic Comput..
[68] Fei Wang,et al. Deep learning for healthcare: review, opportunities and challenges , 2018, Briefings Bioinform..