Christos Thrampoulidis | Samet Oymak | Ganesh Ramachandra Kini | Orestis Paraskevas
[1] R. Prentice, et al. Commentary on Andersen and Gill's "Cox's Regression Model for Counting Processes: A Large Sample Study", 1982.
[2] Y. Gordon. Some inequalities for Gaussian processes and applications, 1985.
[3] C. Manski, et al. The Logit Model and Response-Based Samples, 1989.
[4] John Shawe-Taylor, et al. Optimizing Classifers for Imbalanced Training Sets, 1998, NIPS.
[5] Edward Y. Chang, et al. Class-Boundary Alignment for Imbalanced Dataset Learning, 2003.
[6] Ji Zhu, et al. Margin Maximizing Loss Functions, 2003, NIPS.
[7] D. Donoho, et al. Neighborliness of randomly projected simplices in high dimensions, 2005, Proceedings of the National Academy of Sciences of the United States of America.
[8] M. Rudelson, et al. Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements, 2006, 2006 40th Annual Conference on Information Sciences and Systems.
[9] Mihailo Stojnic, et al. Various thresholds for ℓ1-optimization in compressed sensing, 2009, ArXiv.
[10] Mihailo Stojnic, et al. Block-length dependent thresholds in block-sparse compressed sensing, 2009, ArXiv.
[11] Andrea Montanari, et al. Message-passing algorithms for compressed sensing, 2009, Proceedings of the National Academy of Sciences.
[12] Toon Calders, et al. Building Classifiers with Independency Constraints, 2009, 2009 IEEE International Conference on Data Mining Workshops.
[13] Nuno Vasconcelos, et al. Risk minimization, probability elicitation, and cost-sensitive SVMs, 2010, ICML.
[14] Andrea Montanari, et al. The dynamics of message passing on dense graphs, with applications to compressed sensing, 2010, 2010 IEEE International Symposium on Information Theory.
[15] Andrea Montanari, et al. The Noise-Sensitivity Phase Transition in Compressed Sensing, 2010, IEEE Transactions on Information Theory.
[16] Martin J. Wainwright, et al. Early stopping for non-parametric regression: An optimal data-dependent stopping rule, 2011, 2011 49th Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[17] Andrea Montanari, et al. The LASSO Risk for Gaussian Matrices, 2010, IEEE Transactions on Information Theory.
[18] Pablo A. Parrilo, et al. The Convex Geometry of Linear Inverse Problems, 2010, Foundations of Computational Mathematics.
[19] Christos Thrampoulidis, et al. The squared-error of generalized LASSO: A precise analysis, 2013, 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton).
[20] Mihailo Stojnic. Upper-bounding ℓ1-optimization weak thresholds, 2013, ArXiv.
[21] Mihailo Stojnic, et al. A framework to characterize performance of LASSO algorithms, 2013, ArXiv.
[22] Mihailo Stojnic, et al. A performance analysis framework for SOCP algorithms in noisy compressed sensing, 2013, ArXiv.
[23] P. Bickel, et al. Optimal M-estimation in high-dimensional regression, 2013, Proceedings of the National Academy of Sciences.
[24] Andrea Montanari, et al. High dimensional robust M-estimation: asymptotic variance via approximate message passing, 2013, Probability Theory and Related Fields.
[25] Andrea Montanari, et al. Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising, 2011, IEEE Transactions on Information Theory.
[26] Joel A. Tropp, et al. Living on the edge: phase transitions in convex programs with random data, 2013, 1303.6672.
[27] D. Donoho, et al. Variance Breakdown of Huber (M)-estimators: $n/p \rightarrow m \in (1,\infty)$, 2015, 1503.02106.
[28] Christos Thrampoulidis, et al. Regularized Linear Regression: A Precise Analysis of the Estimation Error, 2015, COLT.
[29] S. Frick, et al. Compressed Sensing, 2014, Computer Vision, A Reference Guide.
[30] Christos Thrampoulidis, et al. LASSO with Non-linear Measurements is Equivalent to One With Linear Measurements, 2015, NIPS.
[31] Joel A. Tropp, et al. Universality laws for randomized dimension reduction, with applications, 2015, ArXiv.
[32] Andrew D. Selbst, et al. Big Data's Disparate Impact, 2016.
[33] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Xiaogang Wang, et al. Factors in Finetuning Deep Model for Object Detection with Long-Tail Distribution, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[35] Nathan Srebro, et al. Equality of Opportunity in Supervised Learning, 2016, NIPS.
[36] Krishna P. Gummadi, et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, 2016, WWW.
[37] Kaiming He, et al. Focal Loss for Dense Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[38] Ameet Talwalkar, et al. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization, 2016, J. Mach. Learn. Res.
[39] Luca Antiga, et al. Automatic differentiation in PyTorch, 2017.
[40] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[41] Jon M. Kleinberg, et al. Inherent Trade-Offs in the Fair Determination of Risk Scores, 2016, ITCS.
[42] Mikhail Belkin, et al. To understand deep learning we need to understand kernel learning, 2018, ICML.
[43] Gang Niu, et al. Does Distributionally Robust Supervised Learning Give Robust Classifiers?, 2016, ICML.
[44] Shai Ben-David, et al. Empirical Risk Minimization under Fairness Constraints, 2018, NeurIPS.
[45] Nathan Srebro, et al. The Implicit Bias of Gradient Descent on Separable Data, 2017, J. Mach. Learn. Res.
[46] Matt Olfat, et al. Spectral Algorithms for Computing Fair Support Vector Machines, 2017, AISTATS.
[47] Matus Telgarsky, et al. Risk and parameter convergence of logistic regression, 2018, ArXiv.
[48] Nathan Srebro, et al. Implicit Bias of Gradient Descent on Linear Convolutional Networks, 2018, NeurIPS.
[49] Babak Hassibi, et al. A Precise Analysis of PhaseMax in Phase Retrieval, 2018, 2018 IEEE International Symposium on Information Theory (ISIT).
[50] Christos Thrampoulidis, et al. Symbol Error Rate Performance of Box-Relaxation Decoders in Massive MIMO, 2018, IEEE Transactions on Signal Processing.
[51] Atsuto Maki, et al. A systematic study of the class imbalance problem in convolutional neural networks, 2017, Neural Networks.
[52] Christos Thrampoulidis, et al. Precise Error Analysis of Regularized $M$-Estimators in High Dimensions, 2016, IEEE Transactions on Information Theory.
[53] Jian Cheng, et al. Additive Margin Softmax for Face Verification, 2018, IEEE Signal Processing Letters.
[54] E. Candès, et al. The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression, 2018, The Annals of Statistics.
[55] Noureddine El Karoui, et al. On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators, 2018.
[56] Ramji Venkataramanan, et al. Finite Sample Analysis of Approximate Message Passing Algorithms, 2016, IEEE Transactions on Information Theory.
[57] Mohammed Bennamoun, et al. Cost-Sensitive Learning of Deep Feature Representations From Imbalanced Data, 2015, IEEE Transactions on Neural Networks and Learning Systems.
[58] Andrea Montanari, et al. A mean field view of the landscape of two-layer neural networks, 2018, Proceedings of the National Academy of Sciences.
[59] Zhenyu Liao, et al. A Large Scale Analysis of Logistic Regression: Asymptotic Performance and New Insights, 2019, ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[60] A. Montanari, et al. The generalization error of max-margin linear classifiers: High-dimensional asymptotics in the overparametrized regime, 2019.
[61] Babak Hassibi, et al. The Impact of Regularization on High-dimensional Logistic Regression, 2019, NeurIPS.
[62] Babak Hassibi, et al. Universality in Learning from Linear Measurements, 2019, NeurIPS.
[63] Yang Song, et al. Class-Balanced Loss Based on Effective Number of Samples, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[64] Nathan Srebro, et al. Convergence of Gradient Descent on Separable Data, 2018, AISTATS.
[65] Christos Thrampoulidis, et al. A Model of Double Descent for High-dimensional Binary Linear Classification, 2019, Information and Inference: A Journal of the IMA.
[66] Stella X. Yu, et al. Large-Scale Long-Tailed Recognition in an Open World, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[67] Percy Liang, et al. Distributionally Robust Language Modeling, 2019, EMNLP.
[68] A. Maleki, et al. Does SLOPE outperform bridge regression?, 2019, ArXiv.
[69] E. Candès, et al. A modern maximum-likelihood theory for high-dimensional logistic regression, 2018, Proceedings of the National Academy of Sciences.
[70] Babak Hassibi, et al. Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization, 2018, ICLR.
[71] Shiori Sagawa, et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization, 2019, ArXiv.
[72] Zachary C. Lipton, et al. What is the Effect of Importance Weighting in Deep Learning?, 2018, ICML.
[73] Colin Wei, et al. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss, 2019, NeurIPS.
[74] Samet Oymak, et al. Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?, 2018, ICML.
[75] Ben Hutchinson, et al. 50 Years of Test (Un)fairness: Lessons for Machine Learning, 2018, FAT.
[76] R. C. Williamson, et al. Fairness risk measures, 2019, ICML.
[77] Nathan Srebro, et al. Stochastic Gradient Descent on Separable Data: Exact Convergence with a Fixed Learning Rate, 2018, AISTATS.
[78] J. Zico Kolter, et al. A Continuous-Time View of Early Stopping for Least Squares Regression, 2018, AISTATS.
[79] Mikhail Belkin, et al. Does data interpolation contradict statistical optimality?, 2018, AISTATS.
[80] Taghi M. Khoshgoftaar, et al. Survey on deep learning with class imbalance, 2019, J. Big Data.
[81] Tengyuan Liang, et al. A Precise High-Dimensional Asymptotic Theory for Boosting and Min-L1-Norm Interpolated Classifiers, 2020, SSRN Electronic Journal.
[82] Junmo Kim, et al. Adjusting Decision Boundary for Class Imbalanced Learning, 2019, IEEE Access.
[83] Stefan Steinerberger, et al. Neural Collapse with Cross-Entropy Loss, 2020, ArXiv.
[84] Pang Wei Koh, et al. An Investigation of Why Overparameterization Exacerbates Spurious Correlations, 2020, ICML.
[85] De-Chuan Zhan, et al. Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning, 2020, ArXiv.
[86] Yue M. Lu. A Precise Performance Analysis of Learning with Random Features, 2020.
[87] Yuan Zhao, et al. On the Role of Dataset Quality and Heterogeneity in Model Confidence, 2020, ArXiv.
[88] Francis Bach, et al. Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss, 2020, COLT.
[89] Yue M. Lu, et al. Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization, 2020, NeurIPS.
[90] Junjie Yan, et al. Equalization Loss for Long-Tailed Object Recognition, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[91] Florent Krzakala, et al. The role of regularization in classification of high-dimensional noisy Gaussian mixture, 2020, ICML.
[92] Ross B. Girshick, et al. Focal Loss for Dense Object Detection, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[93] Matus Telgarsky, et al. Gradient descent follows the regularization path for general losses, 2020, COLT.
[94] David L. Donoho, et al. Prevalence of neural collapse during the terminal phase of deep learning training, 2020, Proceedings of the National Academy of Sciences.
[95] Christos Thrampoulidis, et al. Analytic Study of Double Descent in Binary Classification: The Impact of Loss, 2020, 2020 IEEE International Symposium on Information Theory (ISIT).
[96] Xiu-Shen Wei, et al. BBN: Bilateral-Branch Network With Cumulative Learning for Long-Tailed Visual Recognition, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[97] David Duvenaud, et al. Optimizing Millions of Hyperparameters by Implicit Differentiation, 2019, AISTATS.
[98] Philip M. Long, et al. Benign overfitting in linear regression, 2019, Proceedings of the National Academy of Sciences.
[99] Mikhail Belkin, et al. Classification vs regression in overparameterized regimes: Does the loss function matter?, 2020, J. Mach. Learn. Res.
[100] Saining Xie, et al. Decoupling Representation and Classifier for Long-Tailed Recognition, 2019, ICLR.
[101] Christos Thrampoulidis, et al. Sharp Asymptotics and Optimal Performance for Inference in Binary Models, 2020, AISTATS.
[102] Sundeep Rangan, et al. Generalization Error of Generalized Linear Models in High Dimensions, 2020, ICML.
[103] Boaz Barak, et al. Deep double descent: where bigger models and more data hurt, 2019, ICLR.
[104] Dustin G. Mixon, et al. Neural collapse with unconstrained features, 2020, Sampling Theory, Signal Processing, and Data Analysis.
[105] Mikhail Belkin, et al. Two models of double descent for weak features, 2019, SIAM J. Math. Data Sci.
[106] Yue M. Lu, et al. A Precise Performance Analysis of Learning with Random Features, 2020, ArXiv.
[107] Mohamed-Slim Alouini, et al. On the Precise Error Analysis of Support Vector Machines, 2020, IEEE Open Journal of Signal Processing.
[108] Matus Telgarsky, et al. Characterizing the implicit bias via a primal-dual analysis, 2019, ALT.
[109] Andrea Montanari, et al. The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve, 2019, Communications on Pure and Applied Mathematics.
[110] Ankit Singh Rawat, et al. Long-tail learning via logit adjustment, 2020, ICLR.
[111] Aleksander Madry, et al. Noise or Signal: The Role of Image Backgrounds in Object Recognition, 2020, ICLR.
[112] Christos Thrampoulidis, et al. Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in High Dimensions, 2020, AISTATS.
[113] Christos Thrampoulidis, et al. Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks, 2020, AAAI.
[114] Andrea Montanari, et al. The distribution of the Lasso: Uniform control over sparse balls and adaptive parameter tuning, 2018, The Annals of Statistics.
[115] Suresh Venkatasubramanian, et al. On the (im)possibility of fairness, 2016, ArXiv.
[116] Christos Thrampoulidis, et al. Phase Transitions for One-Vs-One and One-Vs-All Linear Separability in Multiclass Gaussian Mixtures, 2021, ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[117] A. Montanari, et al. Fundamental barriers to high-dimensional regression with convex penalties, 2019, The Annals of Statistics.
[118] Andrea Montanari, et al. Surprises in High-Dimensional Ridgeless Least Squares Interpolation, 2019, Annals of Statistics.