[1] E. Lehmann. Testing Statistical Hypotheses, 1960.
[2] Zhenyu Liao, et al. Classification Asymptotics in the Random Matrix Regime, 2018, 26th European Signal Processing Conference (EUSIPCO).
[3] Andrea Montanari, et al. High dimensional robust M-estimation: asymptotic variance via approximate message passing, 2013, Probability Theory and Related Fields.
[4] Mohamed-Slim Alouini, et al. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers, 2017.
[5] Fabio Roli, et al. A Theoretical Analysis of Bagging as a Linear Combination of Classifiers, 2008, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] P. Bickel, et al. On robust regression with high-dimensional predictors, 2013, Proceedings of the National Academy of Sciences.
[7] Yuxin Chen, et al. The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled Chi-square, 2017, Probability Theory and Related Fields.
[8] Stefan Wager, et al. High-Dimensional Asymptotics of Prediction: Ridge Regression and Classification, 2015, arXiv:1507.03003.
[9] Christopher M. Bishop. Pattern Recognition and Machine Learning, 2006, Springer.
[10] Yoav Freund, et al. A Short Introduction to Boosting, 1999.
[11] M. Ledoux. The concentration of measure phenomenon, 2001.
[12] E. Candès, et al. The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression, 2018, The Annals of Statistics.
[13] Surya Ganguli, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[14] S. Portnoy. Asymptotic behavior of M-estimators of p regression parameters when p²/n is large; II. Normal approximation, 1985, The Annals of Statistics.
[15] E. Candès, et al. A modern maximum-likelihood theory for high-dimensional logistic regression, 2018, Proceedings of the National Academy of Sciences.
[16] Nuno Vasconcelos, et al. On the Design of Loss Functions for Classification: theory, robustness to outliers, and SavageBoost, 2008, NIPS.
[17] Zhenyu Liao, et al. A Large Dimensional Analysis of Least Squares Support Vector Machines, 2017, IEEE Transactions on Signal Processing.
[18] Lorenzo Rosasco, et al. Are Loss Functions All the Same?, 2004, Neural Computation.
[19] J. W. Silverstein, et al. Spectral Analysis of Large Dimensional Random Matrices, 2009.
[20] Shai Ben-David, et al. On the difficulty of approximately maximizing agreements, 2000, J. Comput. Syst. Sci.
[21] P. McCullagh and J. A. Nelder. Generalized Linear Models, Monographs on Statistics and Applied Probability, 1989.
[22] Vladimir Vapnik, et al. Principles of Risk Minimization for Learning Theory, 1991, NIPS.