The work "Loss Landscape Sightseeing with Multi-Point Optimization" (Skorokhodov and Burtsev, 2019) demonstrated that one can empirically find arbitrary 2D binary patterns inside the loss surfaces of popular neural networks. In this paper we prove that (i) this is a general property of deep universal approximators, and (ii) this property holds for arbitrary smooth patterns, for other dimensionalities, for every dataset, and for any neural network that is sufficiently deep and wide. Our analysis predicts not only the existence of all such low-dimensional patterns, but also two other properties that were observed empirically: that it is easy to find these patterns, and that they transfer to other datasets (e.g. a test set).
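The patterns in question live on low-dimensional slices of the loss surface: one fixes a base point in parameter space and two directions, then evaluates the loss on the plane they span. The following is a minimal sketch of this construction with a toy network; all names (the tiny tanh network, the grid size, the random directions) are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression dataset and a one-hidden-layer tanh network whose
# parameters are flattened into a single vector theta.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)
n_hidden = 8
n_params = 4 * n_hidden + n_hidden  # first-layer weights + output weights

def loss(theta):
    """Mean squared error of the toy network at parameter vector theta."""
    W1 = theta[: 4 * n_hidden].reshape(4, n_hidden)
    w2 = theta[4 * n_hidden :]
    preds = np.tanh(X @ W1) @ w2
    return float(np.mean((preds - y) ** 2))

# Base point and two random directions spanning a 2D slice of
# parameter space, as in standard loss-landscape visualizations.
theta0 = rng.normal(size=n_params) * 0.1
d1 = rng.normal(size=n_params)
d2 = rng.normal(size=n_params)

# Loss surface on the plane: L(a, b) = loss(theta0 + a*d1 + b*d2).
alphas = np.linspace(-1.0, 1.0, 21)
betas = np.linspace(-1.0, 1.0, 21)
surface = np.array(
    [[loss(theta0 + a * d1 + b * d2) for b in betas] for a in alphas]
)

print(surface.shape)  # (21, 21)
```

Multi-point optimization, as described in the cited work, then optimizes theta0, d1, and d2 jointly so that this grid of loss values matches a target pattern; the sketch above only shows the forward evaluation of the slice.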
[1] M. Burtsev, et al. Loss Landscape Sightseeing with Multi-Point Optimization, 2019.
[2] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2017, NeurIPS.
[3] Stanislav Fort, et al. Large Scale Structure of Neural Network Loss Landscapes, 2019, NeurIPS.
[4] Yann LeCun, et al. Open Problem: The landscape of the loss surfaces of multilayer networks, 2015, COLT.
[5] Kurt Hornik, et al. Approximation capabilities of multilayer feedforward networks, 1991, Neural Networks.