```python
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks");
```

# Proper Orthogonal Decomposition#

Proper orthogonal decomposition (POD) is essentially PCA in Hilbert spaces. Let \(H\) be such a Hilbert space with inner product \(\langle \cdot, \cdot \rangle\) and norm \(\|\cdot\|\).

Let \(\{u_i\}_{i=1}^n\) be a set of \(n\) functions in \(H\). These are our observations. For example, \(u_i\) could be flow fields (velocity and pressure) at different times, or the temperature field in a heat transfer problem.

POD finds an optimal set of orthonormal basis functions \(\{\phi_i\}_{i=1}^m\). The set is optimal in the sense that the error in approximating the observations by a linear combination of the basis functions is minimized.

We will develop the idea using a single basis function \(\phi\) with \(\|\phi\| = 1\). The projection of a function \(u\) onto \(\phi\) is given by:

$$
P_\phi u = \langle u, \phi \rangle \phi.
$$
The error in the projection is given by:

$$
\|u - \langle u, \phi \rangle \phi\|^2.
$$
Now, define the empirical expectation operator, which averages any quantity \(g(u)\) over the observations:

$$
\left\langle g(u) \right\rangle = \frac{1}{n} \sum_{i=1}^n g(u_i).
$$
Take care not to confuse the empirical expectation operator \(\langle \cdot \rangle\) (one argument) with the inner product \(\langle \cdot, \cdot \rangle\) (two arguments). The empirical expectation operator satisfies the same properties (e.g., linearity) as the expectation operator in probability theory.
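As a minimal sketch of this linearity property, using plain NumPy arrays as discretized observations (the names `emp_mean` and `u` are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal((5, 10))   # five observations on a 10-point grid


def emp_mean(g, obs):
    """Empirical expectation: average a quantity g over the observations."""
    return sum(g(ui) for ui in obs) / len(obs)


# Linearity, just like the probabilistic expectation: <a g(u) + b> = a <g(u)> + b
lhs = emp_mean(lambda v: 2.0 * v + 1.0, u)
rhs = 2.0 * emp_mean(lambda v: v, u) + 1.0
print(np.allclose(lhs, rhs))
```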

Now, we can write down the expected reconstruction error:

$$
J(\phi) = \left\langle \|u - \langle u, \phi \rangle \phi\|^2 \right\rangle = \frac{1}{n} \sum_{i=1}^n \|u_i - \langle u_i, \phi \rangle \phi\|^2.
$$
It is straightforward to show (expand the squared norm and use \(\|\phi\| = 1\)) that minimizing \(J(\phi)\) is equivalent to maximizing the mean squared magnitude of the projections:

$$
\max_{\|\phi\| = 1} \left\langle \langle u, \phi \rangle^2 \right\rangle = \max_{\|\phi\| = 1} \frac{1}{n} \sum_{i=1}^n \langle u_i, \phi \rangle^2.
$$
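A quick numerical sketch of this equivalence, using Euclidean vectors as a stand-in for discretized functions (the names `U` and `phi` are illustrative): expanding the square gives \(J(\phi) = \langle \|u\|^2 \rangle - \langle \langle u, \phi \rangle^2 \rangle\), so the two objectives differ by a constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 20, 50                      # n observations, N grid points
U = rng.standard_normal((n, N))    # rows are discretized observations u_i
phi = rng.standard_normal(N)
phi /= np.linalg.norm(phi)         # unit-norm candidate basis function

proj = U @ phi                     # projections <u_i, phi>

# Expected reconstruction error J(phi)
J = np.mean(np.linalg.norm(U - np.outer(proj, phi), axis=1) ** 2)

# J(phi) = <||u||^2> - <<u, phi>^2>: minimizing J maximizes the projections
identity = np.mean(np.linalg.norm(U, axis=1) ** 2) - np.mean(proj ** 2)
print(np.isclose(J, identity))
```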
Using the method of Lagrange multipliers to enforce the constraint \(\|\phi\|^2 = 1\), we write down:

$$
L(\phi) = \left\langle \langle u, \phi \rangle^2 \right\rangle - \lambda \left( \|\phi\|^2 - 1 \right).
$$
The first variation of \(L(\phi)\) in an arbitrary direction \(\eta\) must be zero:

$$
\delta L(\phi; \eta) = \left. \frac{d}{d\epsilon} L(\phi + \epsilon \eta) \right|_{\epsilon = 0} = 2 \left\langle \langle u, \phi \rangle \langle u, \eta \rangle \right\rangle - 2 \lambda \langle \phi, \eta \rangle = 0.
$$
So:

$$
\left\langle \langle u, \phi \rangle \langle u, \eta \rangle \right\rangle = \lambda \langle \phi, \eta \rangle, \quad \text{for all } \eta \in H,
$$
or, since \(\eta\) is arbitrary,

$$
R \phi = \lambda \phi.
$$
We see that we get an operator eigenvalue problem. The operator is \(R: H \to H\), defined by:

$$
R v = \left\langle \langle u, v \rangle u \right\rangle = \frac{1}{n} \sum_{i=1}^n \langle u_i, v \rangle u_i.
$$
This operator is linear, self-adjoint, and compact. So, if we are in a separable Hilbert space, the spectral theorem gives us an orthonormal basis of eigenfunctions \(\{\phi_i\}_{i=1}^\infty\) with eigenvalues \(\{\lambda_i\}_{i=1}^\infty\). The eigenvalues are non-negative, and we order the eigenfunctions by decreasing eigenvalue: the largest eigenvalue corresponds to the best basis function, the second largest to the second best, and so on.
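These properties can be checked numerically on a discretized version of \(R\). A minimal sketch, assuming Euclidean vectors as discretized observations (variable names are illustrative); here \(R v = \frac{1}{n} \sum_i \langle u_i, v \rangle u_i\) becomes the symmetric matrix \(\mathbf{U}^T \mathbf{U} / n\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 10, 40
U = rng.standard_normal((n, N))    # rows are discretized observations u_i

# Discretized R operator: R v = (1/n) sum_i <u_i, v> u_i  =>  R = U^T U / n
R = U.T @ U / n

# Self-adjoint (symmetric) with a non-negative spectrum
eigvals, eigvecs = np.linalg.eigh(R)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort by decreasing eigenvalue

print(np.allclose(R, R.T))         # self-adjoint
print(eigvals.min() >= -1e-10)     # non-negative eigenvalues
```

Note that at most \(n\) eigenvalues can be nonzero, since \(R\) is built from \(n\) observations.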

## Spectral decomposition of the \(R\) operator#

The \(R\) operator can be decomposed into a sum over its eigenfunctions and eigenvalues:

$$
R v = \sum_{i=1}^\infty \lambda_i \langle v, \phi_i \rangle \phi_i.
$$
This is called the *spectral decomposition* of the operator.

## The kernel of the \(R\) operator#

Suppose we work in \(H = L^2(\Omega)\), where \(\Omega\) is a domain in \(\mathbb{R}^d\). Then, the inner product is:

$$
\langle u, v \rangle = \int_\Omega u(x) v(x) \, dx.
$$
The operator \(R\) can be expressed in terms of a kernel \(R: \Omega \times \Omega \to \mathbb{R}\):

$$
(R v)(x) = \int_\Omega R(x, y) v(y) \, dy, \quad \text{where} \quad R(x, y) = \left\langle u(x) u(y) \right\rangle = \frac{1}{n} \sum_{i=1}^n u_i(x) u_i(y).
$$
Notice that the kernel involves an empirical average: the value \(R(x, y)\) is the empirical correlation between \(u(x)\) and \(u(y)\). To see where the kernel comes from, observe that:

$$
(R v)(x) = \frac{1}{n} \sum_{i=1}^n \langle u_i, v \rangle u_i(x) = \frac{1}{n} \sum_{i=1}^n \left( \int_\Omega u_i(y) v(y) \, dy \right) u_i(x) = \int_\Omega \left( \frac{1}{n} \sum_{i=1}^n u_i(x) u_i(y) \right) v(y) \, dy.
$$
The kernel \(R(x,y)\) is symmetric and positive semi-definite.
It also has a *spectral decomposition*:

$$
R(x, y) = \sum_{i=1}^\infty \lambda_i \phi_i(x) \phi_i(y).
$$
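The discretized analogue of this decomposition is easy to verify: the eigenpairs of the (symmetric) kernel matrix rebuild it exactly. A sketch with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 8, 30
U = rng.standard_normal((n, N))    # rows are discretized observations u_i

# Discretized kernel: R[j, k] = (1/n) sum_i u_i(x_j) u_i(x_k)
R = U.T @ U / n

lam, Phi = np.linalg.eigh(R)       # eigenpairs of the symmetric kernel matrix

# Spectral decomposition: R(x, y) = sum_i lam_i phi_i(x) phi_i(y)
R_rebuilt = (Phi * lam) @ Phi.T
print(np.allclose(R, R_rebuilt))
```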
## Computing the POD using SVD#

To compute the POD, we typically work with a discretized version of the physical fields. For example, if \(H = L^2(\Omega)\), we can discretize the domain \(\Omega\) into a grid of \(N\) points:

$$
x_1, \dots, x_N \in \Omega.
$$
Then, we approximate the inner product with a sum, e.g., for a uniform grid with cell volume \(\Delta x\):

$$
\langle u, v \rangle \approx \Delta x \sum_{j=1}^N u(x_j) v(x_j).
$$

(The constant factor \(\Delta x\) only rescales the eigenvalues, so we drop it in what follows.)
The kernel \(R\) now becomes an \(N \times N\) matrix \(\mathbf{R}\) with elements:

$$
R_{jk} = R(x_j, x_k) = \frac{1}{n} \sum_{i=1}^n u_i(x_j) u_i(x_k).
$$
If we make the \(n \times N\) data matrix whose rows are the discretized observations,

$$
\mathbf{X} = \begin{pmatrix} u_1(x_1) & \cdots & u_1(x_N) \\ \vdots & \ddots & \vdots \\ u_n(x_1) & \cdots & u_n(x_N) \end{pmatrix}, \quad X_{ij} = u_i(x_j),
$$
we observe that the kernel is:

$$
\mathbf{R} = \frac{1}{n} \mathbf{X}^T \mathbf{X}.
$$
We can do SVD on \(\mathbf{X}\):

$$
\mathbf{X} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T,
$$
and substitute in \(\mathbf{R}\) to find:

$$
\mathbf{R} = \frac{1}{n} \mathbf{V} \boldsymbol{\Sigma}^T \mathbf{U}^T \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T = \frac{1}{n} \mathbf{V} \boldsymbol{\Sigma}^T \boldsymbol{\Sigma} \mathbf{V}^T.
$$
From this, we see that the eigenvectors of \(\mathbf{R}\) are the right singular vectors of \(\mathbf{X}\):

$$
\boldsymbol{\phi}_i = \mathbf{v}_i,
$$

and the eigenvalues are the squares of the singular values of \(\mathbf{X}\) divided by \(n\):

$$
\lambda_i = \frac{\sigma_i^2}{n}.
$$
The projections (known as POD modal coefficients) are given by:

$$
a_{ij} = \langle u_i, \phi_j \rangle \approx \left( \mathbf{X} \mathbf{V} \right)_{ij} = \left( \mathbf{U} \boldsymbol{\Sigma} \right)_{ij} = \sigma_j U_{ij}.
$$
Note that for other Hilbert spaces (for example, \(L^2\) with a weighted inner product), the procedure is similar, but not identical.
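The whole procedure above can be sketched in a few lines of NumPy. This is a minimal illustration on synthetic snapshots built from two known spatial structures (all names are illustrative), not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 25, 100
x = np.linspace(0.0, 1.0, N)

# Synthetic snapshots: random combinations of two spatial structures,
# so the data matrix has rank 2 by construction
X = (rng.standard_normal((n, 1)) * np.sin(np.pi * x)
     + 0.1 * rng.standard_normal((n, 1)) * np.sin(3 * np.pi * x))

# POD via SVD: rows of Vt are the (discretized) POD basis functions phi_j
Usvd, s, Vt = np.linalg.svd(X, full_matrices=False)
lam = s ** 2 / n                   # eigenvalues of R = X^T X / n
A = X @ Vt.T                       # modal coefficients a_ij = <u_i, phi_j>

# A rank-2 reconstruction recovers the data exactly here
X2 = A[:, :2] @ Vt[:2]
print(np.allclose(X, X2))
```

In practice one keeps only the leading \(m\) modes, chosen so that \(\sum_{i \le m} \lambda_i\) captures a desired fraction of \(\sum_i \lambda_i\).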