Improved Training for Self-Training by Confidence Assessments
[1] Rayid Ghani, et al. Analyzing the effectiveness and applicability of co-training, 2000, CIKM '00.
[2] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[3] Harri Valpola, et al. From neural PCA to deep unsupervised learning, 2014, ArXiv.
[4] Lourdes Agapito, et al. Semi-supervised Learning Using an Unsupervised Atlas, 2014, ECML/PKDD.
[5] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, ArXiv.
[6] Peter Kontschieder, et al. Loss Max-Pooling for Semantic Image Segmentation, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Geoffrey E. Hinton, et al. Regularizing Neural Networks by Penalizing Confident Output Distributions, 2017, ICLR.
[8] Bharath Hariharan, et al. Low-shot visual object recognition, 2016, ArXiv.
[9] Robert P. W. Duin, et al. Limits on the majority vote accuracy in classifier fusion, 2003, Pattern Analysis & Applications.
[10] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[11] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[12] Massih-Reza Amini, et al. Semi Supervised Logistic Regression, 2002, ECAI.
[13] Yarin Gal, et al. Dropout as a Bayesian Approximation: Insights and Applications, 2015.
[14] Yoshua Bengio, et al. Semi-supervised Learning by Entropy Minimization, 2004, CAP.
[15] Bharath Hariharan, et al. Low-Shot Visual Recognition by Shrinking and Hallucinating Features, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[16] Sergei Vassilvitskii, et al. k-means++: the advantages of careful seeding, 2007, SODA '07.
[17] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.