Saurabh Garg | Yifan Wu | Alexander J. Smola | Sivaraman Balakrishnan | Zachary C. Lipton
[1] Gilles Blanchard, et al. Semi-Supervised Novelty Detection, 2010, J. Mach. Learn. Res.
[2] Clayton Scott, et al. A Rate of Convergence for Mixture Proportion Estimation, with Application to Learning from Noisy Labels, 2015, AISTATS.
[3] Philip S. Yu, et al. Partially Supervised Classification of Text Documents, 2002, ICML.
[4] Rémi Gilleron, et al. Positive and Unlabeled Examples Help Learning, 1999, ALT.
[5] Martha White, et al. Nonparametric semi-supervised learning of class proportions, 2016, ArXiv.
[6] Marco Saerens, et al. Adjusting the Outputs of a Classifier to New a Priori Probabilities: A Simple Procedure, 2002, Neural Computation.
[7] Avanti Shrikumar, et al. Adapting to Label Shift with Bias-Corrected Calibration, 2019.
[8] Alexander J. Smola, et al. Detecting and Correcting for Label Shift with Black Box Predictors, 2018, ICML.
[9] Gang Niu, et al. Convex Formulation for Learning from Positive and Unlabeled Data, 2015, ICML.
[10] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[11] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Masashi Sugiyama, et al. Class Prior Estimation from Positive and Unlabeled Data, 2014, IEICE Trans. Inf. Syst.
[13] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[14] Sivaraman Balakrishnan, et al. A Unified View of Label Shift Estimation, 2020, NeurIPS.
[15] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[16] J. Kiefer, et al. Asymptotic Minimax Character of the Sample Distribution Function and of the Classical Multinomial Estimator, 1956.
[17] Gang Niu, et al. Analysis of Learning from Positive and Unlabeled Data, 2014, NIPS.
[18] Neil D. Lawrence, et al. When Training and Test Sets Are Different: Characterizing Learning Transfer, 2009.
[19] Bing Liu, et al. Learning with Positive and Unlabeled Examples Using Weighted Logistic Regression, 2003, ICML.
[20] Jesse Davis, et al. Estimating the Class Prior in Positive and Unlabeled Data Through Decision Tree Induction, 2018, AAAI.
[21] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[22] Zachary C. Lipton, et al. What is the Effect of Importance Weighting in Deep Learning?, 2018, ICML.
[23] Dmitry Ivanov. DEDPUL: Difference-of-Estimated-Densities-based Positive-Unlabeled Learning, 2019, 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA).
[24] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[25] Philip S. Yu, et al. Building text classifiers using positive and unlabeled examples, 2003, Third IEEE International Conference on Data Mining.
[26] Rémi Gilleron, et al. Learning from positive and unlabeled examples, 2000, Theor. Comput. Sci.
[27] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[28] Ambuj Tewari, et al. Mixture Proportion Estimation via Kernel Embeddings of Distributions, 2016, ICML.
[29] Clayton Scott, et al. Class Proportion Estimation with Application to Multiclass Anomaly Rejection, 2013, AISTATS.
[30] François Denis, et al. PAC Learning from Positive Statistical Queries, 1998, ALT.
[31] Kamyar Azizzadenesheli, et al. Regularized Learning for Domain Adaptation under Label Shifts, 2019, ICLR.
[32] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[33] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[34] Masashi Sugiyama, et al. Semi-Supervised Learning of Class Balance under Class-Prior Change by Distribution Matching, 2012, ICML.
[35] Gang Niu, et al. Positive-Unlabeled Learning with Non-Negative Risk Estimator, 2017, NIPS.
[36] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[37] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[38] Stephan Günnemann, et al. Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift, 2018, NeurIPS.
[39] R., et al. Exploiting geometric structure in mixture proportion estimation with generalised Blanchard-Lee-Scott estimators, 2019.
[40] Charles Elkan, et al. Learning classifiers from only positive and unlabeled data, 2008, KDD.
[41] Jesse Davis, et al. Learning from positive and unlabeled data: a survey, 2018, Machine Learning.
[42] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.