暂无分享,去创建一个
[1] David Tse,et al. Porcupine Neural Networks: (Almost) All Local Optima are Global , 2017, ArXiv.
[2] Thomas Laurent,et al. Deep linear neural networks with arbitrary loss: All local minima are global , 2017, ArXiv.
[3] Ohad Shamir,et al. Spurious Local Minima are Common in Two-Layer ReLU Neural Networks , 2017, ICML.
[4] Joan Bruna,et al. Topology and Geometry of Half-Rectified Network Optimization , 2016, ICLR.
[5] X H Yu,et al. On the local minima free condition of backpropagation learning , 1995, IEEE Trans. Neural Networks.
[6] Suvrit Sra,et al. Global optimality conditions for deep neural networks , 2017, ICLR.
[7] Yuandong Tian,et al. Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima , 2017, ICML.
[8] Haihao Lu,et al. Depth Creates No Bad Local Minima , 2017, ArXiv.
[9] Harold R. Parks,et al. A Primer of Real Analytic Functions , 1992 .
[10] Yi Zhou,et al. Critical Points of Neural Networks: Analytical Forms and Landscape Properties , 2017, ArXiv.
[11] René Vidal,et al. Global Optimality in Neural Network Training , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Le Song,et al. Diverse Neural Network Learns True Target Functions , 2016, AISTATS.
[13] Yann LeCun,et al. The Loss Surfaces of Multilayer Networks , 2014, AISTATS.
[14] Sepp Hochreiter,et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) , 2015, ICLR.
[15] Kenji Kawaguchi,et al. Deep Learning without Poor Local Minima , 2016, NIPS.
[16] Matthias Hein,et al. The Loss Surface of Deep and Wide Neural Networks , 2017, ICML.
[17] Thomas Laurent,et al. The Multilinear Structure of ReLU Networks , 2017, ICML.
[18] Kurt Hornik,et al. Neural networks and principal component analysis: Learning from examples without local minima , 1989, Neural Networks.
[19] Daniel Soudry,et al. No bad local minima: Data independent training error guarantees for multilayer neural networks , 2016, ArXiv.