Joint Optimization Framework for Learning with Noisy Labels
Daiki Tanaka | Daiki Ikami | Toshihiko Yamasaki | Kiyoharu Aizawa
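As the title indicates, the framework treats both the network parameters and the training labels as variables of a single optimization problem, alternating between updating the classifier on the current label estimates and updating the label estimates from the classifier's own predictions. The sketch below illustrates only that alternating pattern; the linear softmax model, synthetic data, warm-up schedule, and hard argmax relabeling are all simplifying assumptions, not the paper's exact algorithm (which trains a deep network and regularizes the label updates, e.g. toward a class prior, to keep all labels from collapsing onto one class).

```python
# Minimal illustrative sketch (assumptions noted above, not the paper's
# exact method): alternate between (a) a gradient step on the classifier
# given the current label estimates and (b) replacing the label estimates
# with the classifier's predictions after a warm-up period.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 600, 20, 3

# Synthetic data: k shifted Gaussian clusters, then 30% symmetric label noise.
true_y = rng.integers(0, k, size=n)
X = rng.normal(size=(n, d)) + 3.0 * np.eye(k, d)[true_y]
noisy_y = true_y.copy()
flip = rng.random(n) < 0.3
noisy_y[flip] = rng.integers(0, k, size=flip.sum())

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, k))      # linear softmax classifier
labels = noisy_y.copy()   # current label estimates, initialized to noisy labels
lr = 0.1

for epoch in range(60):
    # (a) full-batch cross-entropy gradient step w.r.t. the current labels
    p = softmax(X @ W)
    onehot = np.eye(k)[labels]
    W -= lr * X.T @ (p - onehot) / n
    # (b) after warm-up, relabel samples with the model's hard predictions
    if epoch >= 20:
        labels = p.argmax(axis=1)

print("accuracy vs. clean labels:", (softmax(X @ W).argmax(1) == true_y).mean())
```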
[1] Aritra Ghosh, et al. Robust Loss Functions under Label Noise for Deep Neural Networks, 2017, AAAI.
[2] Sergei Vassilvitskii, et al. k-means++: The Advantages of Careful Seeding, 2007, SODA.
[3] Arash Vahdat, et al. Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks, 2017, NIPS.
[4] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[5] Joan Bruna, et al. Training Convolutional Networks with Noisy Labels, 2014, ICLR.
[6] Gholamreza Haffari, et al. Analysis of Semi-Supervised Learning with the Yarowsky Algorithm, 2007, UAI.
[7] Kenta Oono, et al. Chainer: A Next-Generation Open Source Framework for Deep Learning, 2015.
[8] Xiaogang Wang, et al. Learning from Massive Noisy Labeled Data for Image Classification, 2015, CVPR.
[9] Samy Bengio, et al. Understanding Deep Learning Requires Rethinking Generalization, 2016, ICLR.
[10] J. Paul Brooks, et al. Support Vector Machines with the Ramp Loss and the Hard Margin Loss, 2011, Operations Research.
[11] Richard Nock, et al. Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach, 2016, CVPR.
[12] Yoshua Bengio, et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[13] Aritra Ghosh, et al. Making Risk Minimization Tolerant to Label Noise, 2014, Neurocomputing.
[14] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[15] Aditya Krishna Menon, et al. Learning with Symmetric Label Noise: The Importance of Being Unhinged, 2015, NIPS.
[16] Masashi Sugiyama, et al. Learning Discrete Representations via Information Maximizing Self-Augmented Training, 2017, ICML.
[17] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[18] David A. Shamma, et al. The New Data and New Challenges in Multimedia Research, 2015, arXiv.
[19] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[20] Matthew S. Nokleby, et al. Learning Deep Networks from Noisy Labels with Dropout Regularization, 2016, ICDM.
[21] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[22] Xiaojin Zhu, et al. Semi-Supervised Learning Literature Survey, 2006.
[23] Jian Sun, et al. Identity Mappings in Deep Residual Networks, 2016, ECCV.
[24] Dumitru Erhan, et al. Training Deep Neural Networks on Noisy Labels with Bootstrapping, 2014, ICLR.