# Physics-informed Deep Neural Networks

Physics-informed neural networks, or PINNs, combine physics and data. The idea is to use the physics, typically an ordinary or partial differential equation (ODE/PDE), as a regularizer in the loss function. The most common way to construct such a regularizer is via the so-called integrated squared residual of the ODE/PDE: move all terms of the equation to the left-hand side to form the residual, square the residual, and integrate it over the domain. We will see examples in the hands-on activities. The loss becomes:

\[ \text{Loss} = \text{Data loss} + \text{Physics regularizer}. \]
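To make this concrete, here is a minimal sketch in JAX of such a loss for the toy ODE \(u'(x) = -u(x)\) with \(u(0) = 1\). The ODE, the network architecture, the function names (`u`, `residual`, `loss`), and the use of collocation points to approximate the integral are illustrative assumptions, not prescriptions from the text.

```python
# Minimal PINN loss sketch in JAX for the illustrative ODE u'(x) = -u(x), u(0) = 1.
import jax
import jax.numpy as jnp

def init_params(key, layers=(1, 32, 32, 1)):
    """Initialize a small fully connected network (illustrative sizes)."""
    params = []
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((W, jnp.zeros(n_out)))
    return params

def u(params, x):
    """Network approximation of the ODE solution at a scalar input x."""
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(params, x):
    """ODE residual r(x) = u'(x) + u(x): everything moved to the left-hand side."""
    du_dx = jax.grad(u, argnums=1)(params, x)
    return du_dx + u(params, x)

def loss(params, x_data, y_data, x_colloc, weight=1.0):
    """Data loss plus the integrated squared residual,
    approximated by a mean over collocation points."""
    data_loss = jnp.mean((jax.vmap(u, (None, 0))(params, x_data) - y_data) ** 2)
    physics_reg = jnp.mean(jax.vmap(residual, (None, 0))(params, x_colloc) ** 2)
    return data_loss + weight * physics_reg

key = jax.random.PRNGKey(0)
params = init_params(key)
x_data = jnp.array([0.0])               # one observed point (the initial condition)
y_data = jnp.array([1.0])
x_colloc = jnp.linspace(0.0, 1.0, 50)   # points where the ODE is enforced
print(loss(params, x_data, y_data, x_colloc))
```

Minimizing this loss with any gradient-based optimizer fits the network to the data while penalizing violations of the ODE at the collocation points; the `weight` factor balancing the two terms is a common tuning knob, introduced here as an assumption.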

Such approaches have been shown to be very effective in many applications, including fluid dynamics, solid mechanics, and quantum mechanics. The main advantage is that you can train a neural network with a small amount of data and then use it to make predictions in regions where you have no data. This is particularly useful in scientific applications, where data is often scarce.

One of the first papers to introduce physics-informed neural networks, albeit just for solving ODEs/PDEs, was [Lagaris et al., 1998]. The approach was revitalized by [] and became what we now call PINNs. [] applied PINNs to parametric PDEs. [Yang et al., 2021] introduced Bayesian PINNs.