Neural Network Reliability Enhancement Approach Using Dropout Underutilization in GPU
[1] Dong Yu, et al. Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition, 2012, IEEE Transactions on Audio, Speech, and Language Processing.
[2] Walmir M. Caminhas, et al. A review of machine learning approaches to Spam filtering, 2009, Expert Syst. Appl.
[3] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[4] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, ArXiv.
[5] Geoffrey E. Hinton, et al. Replicated Softmax: an Undirected Topic Model, 2009, NIPS.
[6] Jürgen Schmidhuber, et al. Multi-column deep neural networks for image classification, 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition.
[7] Jürgen Schmidhuber, et al. Deep learning in neural networks: An overview, 2014, Neural Networks.
[8] Douglas M. Hawkins, et al. The Problem of Overfitting, 2004, J. Chem. Inf. Model.
[9] Tianyi David Han, et al. Reducing branch divergence in GPU programs, 2011, GPGPU-4.
[10] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.