Variational Physics-Informed Neural Networks for Solving Partial Differential Equations

Physics-informed neural networks (PINNs) [19] use automatic differentiation to solve partial differential equations (PDEs) by penalizing the PDE residual in the loss function at a random set of points in the domain of interest. Here, we develop a Petrov-Galerkin version of PINNs based on the nonlinear approximation of deep neural networks (DNNs) by selecting the \textit{trial space} to be the space of neural networks and the \textit{test space} to be the space of Legendre polynomials. We formulate the \textit{variational residual} of the PDE using the DNN approximation by incorporating the variational form of the problem into the loss function of the network, and we construct a \textit{variational physics-informed neural network} (VPINN). By integrating the variational form by parts, we lower the order of the differential operators represented by the neural networks, effectively reducing the training cost of VPINNs while increasing their accuracy relative to PINNs, which essentially employ delta test functions. For shallow networks with one hidden layer, we analytically obtain explicit forms of the \textit{variational residual}. We demonstrate the performance of the new formulation on several examples that show clear advantages of VPINNs over PINNs in terms of both accuracy and speed.
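
To make the construction concrete, the following is a minimal sketch of a VPINN loss for the one-dimensional Poisson model problem $-u''(x) = f(x)$ on $(-1,1)$ with homogeneous Dirichlet boundary conditions, whose weak form after one integration by parts reads $\int_{-1}^{1} u_\theta'\, v_k'\,dx = \int_{-1}^{1} f\, v_k\,dx$ for each test function $v_k$. The use of PyTorch, the network width, the number of test functions $K$, the test functions $v_k = P_{k+1} - P_{k-1}$ (Legendre combinations that vanish at $x = \pm 1$, so the boundary term from integration by parts drops out), the quadrature order, and the boundary-penalty weighting are all illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import torch
from numpy.polynomial.legendre import Legendre, leggauss

# Trial space: a shallow tanh network u_theta(x) with one hidden layer.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1)
)

# Model problem: -u'' = f on (-1, 1) with u(-1) = u(1) = 0,
# manufactured so that u(x) = sin(pi x) is the exact solution.
f = lambda x: np.pi ** 2 * torch.sin(np.pi * x)

# Gauss-Legendre quadrature nodes/weights for the variational integrals.
xq_np, wq_np = leggauss(50)
xq = torch.tensor(xq_np, dtype=torch.float32).view(-1, 1).requires_grad_(True)
wq = torch.tensor(wq_np, dtype=torch.float32).view(-1, 1)

# Test space (illustrative choice): v_k = P_{k+1} - P_{k-1}, Legendre
# combinations vanishing at x = +-1, so the boundary term from
# integration by parts drops out of the weak residual.
K = 5
vk, dvk = [], []
for k in range(1, K + 1):
    v = Legendre.basis(k + 1) - Legendre.basis(k - 1)
    vk.append(torch.tensor(v(xq_np), dtype=torch.float32).view(-1, 1))
    dvk.append(torch.tensor(v.deriv()(xq_np), dtype=torch.float32).view(-1, 1))

xb = torch.tensor([[-1.0], [1.0]])  # boundary points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    opt.zero_grad()
    u = net(xq)
    # du/dx via automatic differentiation: after one integration by
    # parts, only first derivatives of u_theta are needed.
    du = torch.autograd.grad(u, xq, torch.ones_like(u), create_graph=True)[0]
    # Weak residual R_k = int u' v_k' dx - int f v_k dx per test function,
    # with both integrals evaluated by Gauss-Legendre quadrature.
    loss_var = sum(
        ((wq * du * dvk[k]).sum() - (wq * f(xq) * vk[k]).sum()) ** 2
        for k in range(K)
    )
    loss_bc = (net(xb) ** 2).sum()  # penalize the Dirichlet conditions
    loss = loss_var + loss_bc
    loss.backward()
    opt.step()
```

Because the variational form is integrated by parts once, the loss requires only first derivatives of $u_\theta$, in contrast to the second derivatives a pointwise PINN residual would need for the same operator.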

References

[1] Allen Y. Yang et al., Nonlinear basis pursuit, Asilomar Conference on Signals, Systems and Computers, 2013.

[2] Lei Wu et al., Barron Spaces and the Compositional Function Spaces for Neural Network Models, arXiv, 2019.

[3] Richa Singh et al., Greedy Deep Dictionary Learning, arXiv, 2016.

[4] Jan S. Hesthaven et al., Spectral penalty methods, 2000.

[5] F. Cao et al., The rate of approximation of Gaussian radial basis neural networks in continuous function space, 2013.

[6] H. N. Mhaskar et al., Function approximation by deep networks, arXiv, 2019.

[7] R. DeVore et al., Nonlinear approximation, Acta Numerica, 1998.

[8] Amos Ron et al., Approximation using scattered shifts of a multivariate function, arXiv:0802.2517, 2008.

[9] George Em Karniadakis et al., fPINNs: Fractional Physics-Informed Neural Networks, SIAM J. Sci. Comput., 2018.

[10] Yoshua Bengio et al., Understanding the difficulty of training deep feedforward neural networks, AISTATS, 2010.

[11] Amos Ron et al., Nonlinear approximation using Gaussian kernels, arXiv:0911.2803, 2009.

[12] George Em Karniadakis et al., Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness, Neural Networks, 2019.

[13] T. Poggio et al., Deep vs. shallow networks: An approximation theory perspective, arXiv, 2016.

[14] Taiji Suzuki et al., Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality, ICLR, 2018.

[15] Xia Liu et al., Almost optimal estimates for approximation and learning by radial basis function networks, Machine Learning, 2013.

[16] George E. Karniadakis et al., Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data, arXiv, 2018.

[17] Paris Perdikaris et al., Machine learning of linear differential equations using Gaussian processes, J. Comput. Phys., 2017.

[18] George Em Karniadakis et al., Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks, Proceedings of the Royal Society A, 2020.

[19] Paris Perdikaris et al., Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys., 2019.

[20] Michael S. Triantafyllou et al., Deep learning of vortex-induced vibrations, Journal of Fluid Mechanics, 2018.

[21] Zhiping Mao et al., A Spectral Penalty Method for Two-Sided Fractional Differential Equations with General Boundary Conditions, SIAM J. Sci. Comput., 2018.

[22] G. Karniadakis et al., Spectral/hp Element Methods for Computational Fluid Dynamics, 2005.

[23] J. Burgers, A mathematical model illustrating the theory of turbulence, 1948.

[24] I. Daubechies, Ten Lectures on Wavelets, 1992.

[25] L. Chambers, Linear and Nonlinear Waves, The Mathematical Gazette, 2000.

[26] Raman Arora et al., Understanding Deep Neural Networks with Rectified Linear Units, Electron. Colloquium Comput. Complex., 2016.

[27] George E. Karniadakis et al., Hidden physics models: Machine learning of nonlinear partial differential equations, J. Comput. Phys., 2017.

[28] David Gottlieb et al., The Chebyshev-Legendre method: implementing Legendre methods on Chebyshev points, 1994.

[29] Zhifeng Zhang et al., Adaptive Nonlinear Approximations, 1994.

[30] G. Lewicki et al., Approximation by Superpositions of a Sigmoidal Function, 2003.

[31] G. Petrova et al., Nonlinear Approximation and (Deep) ReLU Networks, Constructive Approximation, 2019.

[32] H. Bateman, Some recent researches on the motion of fluids, 1915.

[33] Michael B. Wakin et al., An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition], 2008.

[34] Shijun Zhang et al., Nonlinear Approximation via Compositions, Neural Networks, 2019.

[35] George Em Karniadakis et al., Adaptive activation functions accelerate convergence in deep and physics-informed neural networks, J. Comput. Phys., 2019.

[36] C. Micchelli et al., Approximation by superposition of sigmoidal and radial basis functions, 1992.