Extensive deep neural networks

We present a procedure for training and evaluating a deep neural network which can efficiently infer extensive parameters of arbitrarily large systems, doing so with O(N) complexity. We use a form of domain decomposition for training and inference, where each sub-domain (tile) consists of a non-overlapping focus region surrounded by an overlapping context region. The relative sizes of focus and context are physically motivated and depend on the locality length scale of the problem. Extensive deep neural networks (EDNN) are a formulation of convolutional neural networks which provide a flexible and general approach, based on physical constraints, to describe multi-scale interactions. They are well suited to massively parallel inference, as no inter-thread communication is necessary during evaluation. Example uses, including learning simple spin models, the Laplacian (derivative) operator, and the approximation of many-body quantum mechanical operators (within the density functional theory approach), are demonstrated.
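As a rough illustration of the focus/context tiling described above, the following minimal sketch (not the authors' reference implementation) decomposes a periodic 2D grid into non-overlapping focus regions, each padded by an overlapping context border, and sums independent per-tile predictions into an extensive quantity. The names `extensive_inference`, `tile_model`, `focus`, and `context` are illustrative assumptions, and the per-tile model is a stand-in for a trained convolutional network.

```python
import numpy as np

def extensive_inference(grid, tile_model, focus, context):
    """Sum per-tile predictions over non-overlapping focus regions.

    grid       : 2D array; side length must be divisible by `focus`
    tile_model : callable mapping a (focus + 2*context)^2 tile to a scalar
    focus      : side length of the non-overlapping focus region
    context    : width of the overlapping context border (locality scale)
    """
    n = grid.shape[0]
    assert n % focus == 0, "grid must tile evenly into focus regions"
    # Pad periodically so every focus region sees its full context.
    padded = np.pad(grid, context, mode="wrap")
    total = 0.0
    for i in range(0, n, focus):
        for j in range(0, n, focus):
            tile = padded[i:i + focus + 2 * context,
                          j:j + focus + 2 * context]
            # Tiles are independent, so this loop is trivially parallel.
            total += tile_model(tile)
    return total

# Toy usage: a "model" that simply sums its focus region, so the tiled
# result matches the direct sum over the whole grid exactly.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spins = rng.choice([-1.0, 1.0], size=(16, 16))
    model = lambda t: t[2:-2, 2:-2].sum()  # context = 2 assumed here
    print(extensive_inference(spins, model, focus=4, context=2))
    print(spins.sum())  # same value
```

Because each tile is evaluated without reference to any other, inference scales linearly with system size and distributes over threads or devices without communication, which is the property the abstract highlights.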