Gene expression prediction using deep neural networks
[1] Alex E. Lash, et al. Gene Expression Omnibus: NCBI gene expression and hybridization array data repository, 2002, Nucleic Acids Res.
[2] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[3] Pascal Vincent, et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[4] Yoshua Bengio, et al. Deep Sparse Rectifier Neural Networks, 2011, AISTATS.
[5] Z. Bar-Joseph, et al. Using neural networks for reducing the dimensions of single-cell RNA-Seq data, 2017, Nucleic Acids Research.
[6] Rich Caruana, et al. Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping, 2000, NIPS.
[7] Derek C. Rose, et al. Deep Machine Learning - A New Frontier in Artificial Intelligence Research [Research Frontier], 2010, IEEE Computational Intelligence Magazine.
[8] Georg Heigold, et al. An empirical study of learning rates in deep neural networks for speech recognition, 2013, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.
[9] Pierre Baldi, et al. Understanding Dropout, 2013, NIPS.
[10] Saleem A. Kassam, et al. Quantization Based on the Mean-Absolute-Error Criterion, 1978, IEEE Trans. Commun.
[11] Yoshua Bengio, et al. Learning Deep Architectures for AI, 2007, Found. Trends Mach. Learn.
[12] E. Forgy, et al. Cluster analysis of multivariate data: efficiency versus interpretability of classifications, 1965.
[13] E. Pierson, et al. ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis, 2015, Genome Biology.
[14] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964.
[15] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[16] Sepp Hochreiter, et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015, ICLR.
[17] Long Chen, et al. Dynamic load balancing on single- and multi-GPU systems, 2010, 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS).
[18] Yi Li, et al. Gene expression inference with deep learning, 2015, bioRxiv.
[19] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.
[20] Celso André R. de Sousa, et al. An overview on weight initialization methods for feedforward neural networks, 2016, 2016 International Joint Conference on Neural Networks (IJCNN).
[21] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[22] Pierre Baldi, et al. Autoencoders, Unsupervised Learning, and Deep Architectures, 2011, ICML Unsupervised and Transfer Learning.
[23] Pascal Vincent, et al. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, 2010, J. Mach. Learn. Res.
[24] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[25] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[26] D. di Bernardo, et al. How to infer gene networks from expression profiles, 2007, Molecular Systems Biology.
[27] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[28] Paul A. Clemons, et al. The Connectivity Map: Using Gene-Expression Signatures to Connect Small Molecules, Genes, and Disease, 2006, Science.
[29] Guang-Bin Huang, et al. Learning capability and storage capacity of two-hidden-layer feedforward networks, 2003, IEEE Trans. Neural Networks.
[30] Aurélien Géron, et al. Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 2017.
[31] Marc'Aurelio Ranzato, et al. Building high-level features using large scale unsupervised learning, 2011, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing.