```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('png')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks")
```

# Homework 4#

## References#

Lectures 15 through 19 (inclusive).

## Instructions#

Type your name and email in the “Student details” section below.

Develop the code and generate the figures you need to solve the problems using this notebook.

For answers that require a mathematical proof or derivation, you should type them using LaTeX. If you have never written LaTeX before and you find it exceedingly difficult, we will likely accept handwritten solutions.

The homework is worth 100 points in total. Please note that the problems are not weighted equally.

## Student details#

**First Name:**

**Last Name:**

**Email:**

**Used generative AI to complete this assignment (Yes/No):**

**Which generative AI tool did you use (if applicable)?:**

# Problem 1 - Theory of Proper Orthogonal Decomposition (POD)#

Let \(\Omega\) be a subset of \(\mathbb{R}^d\) and let \(L^2(\Omega)\) be the space of square-integrable functions on \(\Omega\). The inner product in \(L^2(\Omega)\) is defined by:

\[
\langle u, v \rangle = \int_\Omega u(x)\, v(x)\, dx.
\]

Suppose we have a dataset of \(N\) functions \(\{u_i\}_{i=1}^N \subset L^2(\Omega)\). Define the empirical mean of any operator or functional \(F[u]\) by:

\[
\langle F[u] \rangle = \frac{1}{N} \sum_{i=1}^N F[u_i].
\]

Define the operator \(R: L^2(\Omega) \to L^2(\Omega)\) by:

\[
R\phi = \langle u\, (u^\dagger \phi) \rangle = \langle u \langle u, \phi \rangle \rangle,
\]

that is, \(R = \langle u u^\dagger \rangle\).

Recall that the POD modes are the eigenfunctions of the operator \(R\):

\[
R\phi_k = \lambda_k \phi_k, \quad k = 1, 2, \ldots
\]

In this problem, I am asking you to prove some properties of \(R\) and the POD modes using just the following properties:

The inner product \(u^\dagger v = \langle u, v \rangle\) satisfies the properties of a standard inner product.

The empirical mean \(\langle F[u] \rangle\) satisfies the basic properties of the expectation value of a random variable.

You can interchange the order of the empirical mean and any linear operation, i.e., \(\langle F[u] \rangle = F[\langle u \rangle]\) for any linear operator \(F\).
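Before diving into the proofs, it may help to see the finite-dimensional analog of \(R\). The sketch below uses synthetic snapshot vectors (standing in for the functions \(u_i\)) and builds the discrete version of \(R\), the empirical correlation matrix \(\frac{1}{N}\sum_i u_i u_i^T\). Numerically it is symmetric and positive semi-definite, mirroring the properties you will prove in Parts E and F:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 6
u = rng.standard_normal((N, d))   # N synthetic snapshot vectors in R^d

# Discrete analog of R: the empirical mean of the rank-one operators u u^T.
R = (u.T @ u) / N                  # equals (1/N) * sum_i u_i u_i^T

print(np.allclose(R, R.T))                       # symmetric (self-adjoint)
print(np.all(np.linalg.eigvalsh(R) >= -1e-12))   # eigenvalues nonnegative (PSD)
```

This is only a numerical illustration with random data; the problem asks you to prove the corresponding statements for the operator \(R\) on \(L^2(\Omega)\).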

## Part A#

Show that the operator \(R\) has the kernel representation:

\[
(R\phi)(x) = \int_\Omega R(x, x')\, \phi(x')\, dx',
\]

where the kernel \(R(x,x')\) is given by:

\[
R(x, x') = \langle u(x)\, u(x') \rangle = \frac{1}{N} \sum_{i=1}^N u_i(x)\, u_i(x').
\]

*Hint: Use the definition of \(R\) and the properties of the inner product and the empirical mean.*

**Answer:**

## Part B#

Show that the operator \(R\) is linear, i.e.,

\[
R(\alpha \phi + \beta \psi) = \alpha\, R\phi + \beta\, R\psi
\]

for all scalars \(\alpha, \beta\) and all \(\phi, \psi \in L^2(\Omega)\).

**Answer:**

## Part C#

Show that the operator \(R\) is bounded, in the sense that

\[
\| R\phi \| \le C\, \|\phi\|
\]

for some constant \(C\) that does not depend on \(\phi\).

*Hint: Use the triangle inequality for norms and the Cauchy-Schwarz inequality.*

**Answer:**

## Part D#

Show that \(R\) is continuous.

*Hint: It suffices to show that if \(\|\phi_n - \phi\| \to 0\), then \(\|R\phi_n - R\phi\|\to 0\). Use the fact that \(R\) is bounded.*

**Answer:**

## Part E#

The adjoint of an operator \(A\) is the operator \(A^\dagger\) such that

\[
\langle A\phi, \psi \rangle = \langle \phi, A^\dagger \psi \rangle
\]

for all \(\phi, \psi\). It is easier to understand the adjoint using the dagger notation:

\[
(A\phi)^\dagger \psi = \phi^\dagger (A^\dagger \psi).
\]

That is, you can think of the adjoint as the analog of the transpose of a real matrix (or the conjugate transpose of a complex matrix). Show that \(R\) is self-adjoint, i.e., \(R^\dagger = R\), which means:

\[
\langle R\phi, \psi \rangle = \langle \phi, R\psi \rangle,
\]

or equivalently in terms of the dagger notation:

\[
(R\phi)^\dagger \psi = \phi^\dagger (R\psi).
\]

So, you can think of the self-adjoint property as the analog of a real matrix being symmetric or a complex matrix being Hermitian.

*Hint: Just start with the left hand side, use the definition of \(R\), the properties of the inner product and the empirical mean, and the fact that the inner product is a scalar.*

**Answer:**

## Part F#

Show that \(R\) is positive semi-definite, i.e.,

\[
\langle R\phi, \phi \rangle \ge 0
\]

for all \(\phi \in L^2(\Omega)\), or equivalently in terms of the dagger notation:

\[
\phi^\dagger R \phi \ge 0.
\]

**Answer:**

## Part G#

\(R\) is also compact, but it is harder to show this property. In any case, because \(R\) is linear, bounded, self-adjoint, compact, and positive semi-definite, it has a complete set of orthonormal eigenfunctions \(\{\phi_k\}_{k=1}^\infty\) and corresponding non-negative eigenvalues \(\{\lambda_k\}_{k=1}^\infty\).

Show that we can represent \(R\) using the spectral decomposition:

\[
R = \sum_{k=1}^\infty \lambda_k\, \phi_k \phi_k^\dagger.
\]

You can think of \(\phi_k \phi_k^\dagger\) as the analog of a rank-one matrix. It is defined in terms of its application to a function \(\psi\) as:

\[
(\phi_k \phi_k^\dagger)\, \psi = \phi_k\, (\phi_k^\dagger \psi),
\]

since \(\phi_k^\dagger\psi\) is a scalar.
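The finite-dimensional analog of this decomposition is easy to check numerically. The sketch below uses a synthetic symmetric positive semi-definite matrix standing in for \(R\): its eigendecomposition reassembles the matrix as a sum of rank-one terms \(\lambda_k \phi_k \phi_k^T\), and applying a rank-one term to a vector \(\psi\) is just \(\phi_k\) scaled by the scalar \(\phi_k^T \psi\):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
R = A @ A.T / 5                    # symmetric PSD stand-in for the operator R

lam, Phi = np.linalg.eigh(R)       # eigenvalues and orthonormal eigenvectors

# Rebuild R as a sum of rank-one terms lambda_k * phi_k phi_k^T.
R_rebuilt = sum(lam[k] * np.outer(Phi[:, k], Phi[:, k]) for k in range(5))
print(np.allclose(R, R_rebuilt))   # True

# (phi_k phi_k^T) psi is phi_k times the scalar phi_k^T psi.
psi = rng.standard_normal(5)
lhs = np.outer(Phi[:, 0], Phi[:, 0]) @ psi
rhs = Phi[:, 0] * (Phi[:, 0] @ psi)
print(np.allclose(lhs, rhs))       # True
```

Again, this is only a sanity check with random matrices; the proof you are asked for works with the expansion of an arbitrary \(\phi\) in the eigenfunction basis.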

*Hint: Take an arbitrary function \(\phi\) and expand it in the basis of eigenfunctions \(\{\phi_k\}_{k=1}^\infty\). Then show that applying \(R\) to it gives the same result as applying the right-hand-side of the above expression.*

**Answer:**

## Part H#

Show that the spectral decomposition of the kernel \(R(x,x')\) is:

\[
R(x, x') = \sum_{k=1}^\infty \lambda_k\, \phi_k(x)\, \phi_k(x').
\]

**Answer:**

## Part I#

Suppose \(u\) is one component of the velocity field of a fluid flow of constant density \(\rho\). The kinetic energy stored in this component of the fluid velocity is:

\[
E[u] = \frac{\rho}{2} \int_\Omega u(x)^2\, dx = \frac{\rho}{2} \|u\|^2.
\]

Show that the empirical mean of the kinetic energy is:

\[
\langle E[u] \rangle = \frac{\rho}{2} \sum_{k=1}^\infty \lambda_k.
\]

**Answer:**

# Problem 2 - POD of fluid flow#

In this problem, we will apply POD to a fluid flow. We will use the 2D Navier-Stokes solution dataset from Wang & Perdikaris (https://arxiv.org/abs/2203.07404). The data contain the magnitude of a 2D fluid flow velocity field at different times.

You can download the data using the following command:

```
!curl -O https://raw.githubusercontent.com/PredictiveIntelligenceLab/CausalPINNs/main/data/NS.npy
```


You can load the data like this:

```
import numpy as np

data = np.load('NS.npy', allow_pickle=True)
```

And you should work with the velocity magnitude:

```
vel_mag = data.item()['sol']
vel_mag.shape
```

```
(200, 128, 128)
```

You will also need the times at which the data were collected:

```
times = data.item()['t']
```

And the spatial grid:

```
x = data.item()['x']
y = data.item()['y']
X1, X2 = np.meshgrid(x, y)
```

### Part A - Singular value decomposition#

Arrange all your data in a big matrix \(\mathbf{X}\) such that each row is a snapshot of the velocity field. That is, each row should be a flattened version of the velocity magnitude at a given time. Then, compute the economy version of the singular value decomposition (SVD) of \(\mathbf{X}\):

\[
\mathbf{X} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^T.
\]

*Hint: Use the function `scipy.linalg.svd` with `full_matrices=False` to get the economy SVD.*
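As a sketch of the mechanics (not the graded answer), here is the flatten-then-SVD pattern on synthetic snapshots with made-up dimensions (20 "times" on an 8×8 grid, standing in for the real 200×128×128 data):

```python
import numpy as np
from scipy.linalg import svd

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((20, 8, 8))   # synthetic stand-in for vel_mag

# Each row of X is one flattened snapshot.
X = snapshots.reshape(20, -1)                 # shape (20, 64)

# Economy SVD: U is (20, 20), s has 20 entries, Vt is (20, 64).
U, s, Vt = svd(X, full_matrices=False)

print(U.shape, s.shape, Vt.shape)
print(np.allclose(X, U @ np.diag(s) @ Vt))    # True: exact reconstruction
```

Note that `scipy.linalg.svd` returns \(\mathbf{V}^T\) (here `Vt`), not \(\mathbf{V}\), and the singular values `s` as a 1D array rather than a diagonal matrix.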

**Answer:**

```
# your code
```

### Part B - Plot the percentage of energy captured by the POD modes#

Plot the percentage of energy captured by the first \(K\) POD modes, i.e., the percentage of energy captured by the first \(K\) singular values, as a function of \(K\). That is, plot:

\[
\text{energy}(K) = \frac{\sum_{k=1}^K \sigma_k^2}{\sum_{k=1}^{r} \sigma_k^2} \times 100\%,
\]

where the \(\sigma_k\) are the singular values and \(r\) is their total number.

Use your plot to find the number of POD modes that capture 90% of the energy.
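The cumulative-energy computation itself is a one-liner with `np.cumsum`. A small sketch on made-up, fast-decaying singular values (not the real ones from the data):

```python
import numpy as np

# Synthetic singular values with geometric decay (stand-in for the real ones).
s = 2.0 ** -np.arange(10)

# Percent energy captured by the first K modes, for K = 1, ..., 10.
energy = np.cumsum(s**2) / np.sum(s**2) * 100.0

# Smallest K whose cumulative energy reaches 90%.
K = int(np.argmax(energy >= 90.0)) + 1
print(K, energy[K - 1])
```

`np.argmax` on a boolean array returns the index of the first `True`, so `K` is the first mode count at or above the 90% threshold.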

**Answer:**

```
# Your code here
```

### Part C - Visualize the first two POD modes#

Visualize the first two POD modes. Recall that the POD modes are the columns of the matrix \(\mathbf{V}\) (i.e., the rows of the `Vt` array returned by `scipy.linalg.svd`). So, you just need to take each of the first two columns of \(\mathbf{V}\), reshape them to the original spatial grid, and make a contour plot.

**Answer:**

```
# Your code here
```

### Part D - Principal components or modal coefficients#

The principal components or modal coefficients are the projections of the data onto the POD modes. Using the SVD, we have that the projections are:

\[
\mathbf{Z} = \mathbf{X} \mathbf{V} = \mathbf{U} \boldsymbol{\Sigma}.
\]

Find that matrix and keep only the columns that correspond to 90% of the energy.
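The identity \(\mathbf{X}\mathbf{V} = \mathbf{U}\boldsymbol{\Sigma}\) follows from \(\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T\) and \(\mathbf{V}^T\mathbf{V} = \mathbf{I}\), and is easy to verify numerically. A sketch on a synthetic matrix (sizes are made up):

```python
import numpy as np
from scipy.linalg import svd

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 64))

U, s, Vt = svd(X, full_matrices=False)

# Projecting the snapshots onto the POD modes (columns of V = rows of Vt)
# gives the same matrix as scaling the columns of U by the singular values.
Z = X @ Vt.T
print(np.allclose(Z, U * s))   # True: X V = U Sigma
```

`U * s` uses broadcasting to multiply column \(k\) of `U` by \(\sigma_k\), which equals `U @ np.diag(s)` without forming the diagonal matrix.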

**Answer:**

```
# your code here
```

### Part F - Plot the time evolution of the first two modal coefficients#

Plot the time evolution of the first two modal coefficients.

**Answer:**

```
# Your code here
```

### Part G - Reconstruction of the velocity magnitude#

Reconstruct the velocity magnitude at times 0, 1, and 2, using the first \(K\) POD modes (90% of the energy). Compare your reconstructions with the original data.

**Answer:**

```
# Your code here
```