Assessing Shape Bias Property of Convolutional Neural Networks
Hossein Hosseini | Baicen Xiao | Mayoore S. Jaiswal | Radha Poovendran