Bounding and Counting Linear Regions of Deep Neural Networks
Thiago Serra | Christian Tjandraatmadja | Srikumar Ramalingam
[1] Robert G. Jeroslow, et al. Representability in mixed integer programming, I: Characterization results, 1987, Discret. Appl. Math.
[2] Razvan Pascanu, et al. On the number of response regions of deep feed forward networks with piece-wise linear activations, 2013, arXiv:1312.6098.
[3] Razvan Pascanu, et al. On the Number of Linear Regions of Deep Neural Networks, 2014, NIPS.
[4] Raman Arora, et al. Understanding Deep Neural Networks with Rectified Linear Units, 2016, Electron. Colloquium Comput. Complex.
[5] T. Zaslavsky. Facing Up to Arrangements: Face-Count Formulas for Partitions of Space by Hyperplanes, 1975.
[6] George Cybenko, et al. Approximation by superpositions of a sigmoidal function, 1989, Math. Control. Signals Syst.
[7] Jeffrey D. Camm, et al. Cutting Big M Down to Size, 1990.
[8] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[9] Franco Scarselli, et al. On the Complexity of Neural Network Classifiers: A Comparison Between Shallow and Deep Architectures, 2014, IEEE Transactions on Neural Networks and Learning Systems.
[10] Zonghao Gu, et al. Generating Multiple Solutions for Mixed Integer Programming Problems, 2007, IPCO.
[11] Egon Balas, et al. A lift-and-project cutting plane algorithm for mixed 0–1 programs, 1993, Math. Program.
[12] Tomaso A. Poggio, et al. Learning Real and Boolean Functions: When Is Deep Better Than Shallow, 2016, arXiv.
[13] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Chih-Hong Cheng, et al. Maximum Resilience of Artificial Neural Networks, 2017, ATVA.
[15] Peter L. Bartlett, et al. Neural Network Learning: Theoretical Foundations, 1999.
[16] Yoshua Bengio, et al. Shallow vs. Deep Sum-Product Networks, 2011, NIPS.
[17] Vivek Srikumar, et al. Expressiveness of Rectifier Networks, 2015, ICML.
[18] Ohad Shamir, et al. The Power of Depth for Feedforward Neural Networks, 2015, COLT.
[19] Yoshua Bengio, et al. Maxout Networks, 2013, ICML.
[20] Eduardo Sontag, et al. A Comparison of the Computational Power of Sigmoid and Boolean Threshold Circuits, 1994.
[21] Jürgen Schmidhuber, et al. Multi-column deep neural network for traffic sign classification, 2012, Neural Networks.
[22] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[23] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[24] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition, 2012.
[25] Hanif D. Sherali, et al. Disjunctive Programming, 2009, Encyclopedia of Optimization.
[26] Surya Ganguli, et al. On the Expressive Power of Deep Neural Networks, 2016, ICML.
[27] Razvan Pascanu, et al. On the number of inference regions of deep feed forward networks with piece-wise linear activations, 2013, ICLR.
[28] Matus Telgarsky, et al. Representation Benefits of Deep Feedforward Networks, 2015, arXiv.
[29] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).