CMX Lunch Seminar
Deep Galerkin Methods (DGM) and physics-informed neural networks (PINNs) solve partial differential equations (PDEs) directly with neural networks. For linear elliptic PDEs, we prove that, despite the non-convexity of the neural network optimization, DGM/PINN models trained with gradient descent converge globally to the PDE solution as the number of training steps and the number of hidden units tend to infinity. A key technical challenge is the lack of a spectral gap in the training dynamics of the neural network. A related application in applied mathematics and engineering is using deep learning to model unknown terms within a PDE, such as closure models in large-eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) simulations. The neural network terms in the PDE are optimized using adjoint PDEs, which again requires minimizing a highly non-convex objective function. As in the DGM/PINN result, we prove, for semilinear parabolic equations, that the trained neural network-PDE system converges to a global minimizer. Numerical results for LES and RANS with adjoint-optimized neural network closure models will be presented for several canonical examples in fluid dynamics.
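To make the DGM/PINN setup concrete, here is a minimal sketch, not the speaker's implementation: the network size, optimizer, sampling scheme, boundary penalty, and the toy 1D Poisson problem are all illustrative assumptions. Gradient descent is run on the mean squared PDE residual at randomly sampled collocation points:

```python
# Minimal PINN/DGM-style sketch (illustrative only): fit u_theta to the
# 1D Poisson problem -u'' = f on (0, 1) with u(0) = u(1) = 0, where
# f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
import math
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):
    return math.pi ** 2 * torch.sin(math.pi * x)

for step in range(5000):
    # Interior collocation points, resampled each step (Monte Carlo sampling,
    # as in DGM-style training).
    x = torch.rand(256, 1, requires_grad=True)
    u = net(x)
    # First and second derivatives of u_theta via automatic differentiation.
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    residual = -u_xx - f(x)                    # PDE residual: -u'' - f
    xb = torch.tensor([[0.0], [1.0]])          # boundary points
    loss = residual.pow(2).mean() + net(xb).pow(2).mean()  # residual + BC penalty
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The non-convexity referenced in the abstract lives in exactly this objective: the loss is quadratic in the PDE residual but highly non-convex in the network parameters.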
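For the closure-model half of the talk, the gradient of the objective with respect to the network parameters is obtained from an adjoint PDE. The following is a sketch of the standard continuous-adjoint calculation in generic notation; the semilinear model form, the running cost ℓ, and the neural network source term h_θ are assumptions, not the talk's exact formulation:

```latex
% Continuous-adjoint gradient for a semilinear parabolic model problem
% with a neural network source term h_\theta (generic notation).
\begin{align*}
  &\text{Forward PDE:} &&
    \partial_t u = \Delta u + h_\theta(u),
    \qquad u(0,\cdot) = u_0, \\
  &\text{Objective:} &&
    J(\theta) = \int_0^T \!\!\int_\Omega \ell(u)\, dx\, dt, \\
  &\text{Adjoint PDE (backward in time):} &&
    -\partial_t \lambda = \Delta \lambda
      + \partial_u h_\theta(u)\,\lambda + \partial_u \ell(u),
    \qquad \lambda(T,\cdot) = 0, \\
  &\text{Parameter gradient:} &&
    \nabla_\theta J = \int_0^T \!\!\int_\Omega
      \lambda\, \nabla_\theta h_\theta(u)\, dx\, dt.
\end{align*}
```

Because the adjoint λ is solved backward in time from a single terminal condition, each optimization step costs one forward and one backward PDE solve, independent of the number of network parameters θ.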
