import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks")

# Uncomment the next two lines if running for the book
# import warnings
# warnings.filterwarnings("ignore")

import jax
import jax.numpy as jnp
import jax.random as jrandom
import gpjax as gpx

jax.config.update("jax_enable_x64", True)
key = jrandom.PRNGKey(0)

The Karhunen-Loève Expansion of a Gaussian Process#

If you want to know more about the Karhunen-Loève expansion, you can check the following reference:

  • Betz, W., Papaioannou, I., & Straub, D. (2014). Numerical methods for the discretization of random fields by means of the Karhunen-Loeve expansion. Computer Methods in Applied Mechanics and Engineering, 271, 109-129. doi:10.1016/j.cma.2013.12.010

Karhunen-Loève Expansion#

Consider a Gaussian process:

\[ f \sim \operatorname{GP}\left(m, k \right), \]

where \(m\) is the mean function and \(k\) is the covariance function. The Karhunen-Loève expansion (KLE) of \(f\) allows us to write it as:

\[ f(\mathbf{x}) = m(\mathbf{x}) + \sum_{i=1}^{\infty}\xi_i \sqrt{\lambda_i} \phi_i(\mathbf{x}), \]

where the random variables

\[ \xi_i \sim \mathcal{N}(0, 1) \]

are independent, and \(\lambda_i\) and \(\phi_i(\mathbf{x})\) are the eigenvalues and eigenfunctions, respectively, of the covariance function, i.e.,

\[ \int k(\mathbf{x}, \mathbf{x}') \phi_i(\mathbf{x}')\,d\mathbf{x}' = \lambda_i \phi_i(\mathbf{x}). \]

Since \(k(\cdot, \cdot)\) is positive definite, the eigenvalues are all positive, and the eigenfunctions can be taken to be orthonormal:

\[ \int \phi_i(\mathbf{x})\phi_j(\mathbf{x})\,d\mathbf{x} = \delta_{ij}. \]
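Numerically, this integral eigenvalue problem can be discretized with a quadrature rule and solved as a matrix eigenvalue problem (the Nyström method), which is what the class further below does. Here is a minimal, self-contained sketch of the idea; the hand-written rbf kernel and all variable names are purely illustrative.

import jax.numpy as jnp

def rbf(x1, x2, ell=0.1):
    """Squared exponential covariance on scalar inputs (illustrative)."""
    return jnp.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)

nq = 100
Xq = jnp.linspace(0, 1, nq)          # quadrature points on [0, 1]
wq = jnp.ones(nq) / nq               # uniform quadrature weights
Kq = rbf(Xq[:, None], Xq[None, :])   # covariance matrix at the quadrature points

# Symmetrized eigenvalue problem: sqrt(W) K sqrt(W) u = lam u
B = jnp.sqrt(wq)[:, None] * Kq * jnp.sqrt(wq)[None, :]
lam, u = jnp.linalg.eigh(B)
lam, u = lam[::-1], u[:, ::-1]       # sort in decreasing order

# Nystrom extension of the i-th eigenfunction to arbitrary points x
def eval_phi_i(x, i):
    Kx = rbf(x[:, None], Xq[None, :])            # cross-covariance between x and Xq
    return (Kx * jnp.sqrt(wq)) @ u[:, i] / lam[i]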

Truncated KLE#

Usually, we truncate the KLE to a finite order \(M\), i.e., we write

\[ \begin{equation} f(\mathbf{x}) \approx f_M(\mathbf{x}) = m(\mathbf{x}) + \sum_{i=1}^M \xi_i \sqrt{\lambda_i}\phi_i(\mathbf{x}). \end{equation} \]

But how do we pick \(M\) in practice?

To answer this question, notice that, since the \(\xi_i\) are independent with unit variance, the variance of the field at the point \(\mathbf{x}\) is given by:

\[ \mathbb{V}[f(\mathbf{x})] = \sum_{i=1}^{\infty}\lambda_i\phi_i^2(\mathbf{x}). \]

The energy of the field, \(\mathcal{E}[f]\), is defined to be:

\[ \mathcal{E}[f] := \int\mathbb{V}[f(\mathbf{x})]d\mathbf{x} = \sum_{i=1}^\infty \lambda_i, \]

where we have used the orthonormality of the \(\phi_i\)’s. The energy is a measure of the total variance of the field. The idea is to select \(M\) so that the truncated field \(f_M\) captures a fraction \(\alpha\) of the energy of the original field. That is, we pick the smallest \(M\) such that

\[ \mathcal{E}[f_M] \ge \alpha\,\mathcal{E}[f], \]

or

\[ \sum_{i=1}^M\lambda_i \ge \alpha \sum_{i=1}^\infty \lambda_i. \]

Typically, \(\alpha = 0.95\).
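As a concrete sketch, given a hypothetical array of eigenvalues sorted in decreasing order, the truncation order can be read off the cumulative energy (this mirrors what the class below does internally):

import jax.numpy as jnp

lam = 0.5 ** jnp.arange(20)               # illustrative, geometrically decaying spectrum
alpha = 0.95
energy = jnp.cumsum(lam) / jnp.sum(lam)   # fraction of the energy captured by the first i terms
M = int(jnp.argmax(energy >= alpha)) + 1  # smallest M whose cumulative energy reaches alpha
print(M, energy[M - 1])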

Why is this useful?#

The KLE allows us to reduce the dimensionality of random fields. This is extremely useful in uncertainty propagation and model calibration tasks. For example, in uncertainty propagation, the KLE lets us work with a finite set of Gaussian random variables \(\xi_i\) instead of an infinite-dimensional Gaussian random field.

class KarhunenLoeveExpansion(object):
    
    """
    A class representing the Karhunen Loeve Expansion of a Gaussian random field.
    It uses the Nystrom approximation to do it.
    
    Arguments:
        k      -     The covariance function.
        Xq     -     Quadrature points for the Nystrom approximation.
        wq     -     Quadrature weights for the Nystrom approximation.
        alpha  -     The percentage of the energy of the field that you want to keep.
        X      -     Observed inputs (optional).
        Y      -     Observed field values (optional).
    """
    
    def __init__(self, k, Xq=None, wq=None, nq=100, alpha=0.9, X=None, y=None, *, input_dim=None):

        self.k = k
        self.input_dim = 1 if input_dim is None else input_dim

        # Generate quadrature points if none are supplied
        if Xq is None:
            if self.input_dim == 1:
                Xq = jnp.linspace(0, 1, nq)[:, None]
                wq = jnp.ones((nq,)) / nq
            elif self.input_dim == 2:
                nq = int(jnp.sqrt(nq))
                x = jnp.linspace(0, 1, nq)
                X1, X2 = jnp.meshgrid(x, x)
                Xq = jnp.hstack([X1.flatten()[:, None], X2.flatten()[:, None]])
                wq = jnp.ones((nq ** 2,)) / nq ** 2
            else:
                raise NotImplementedError('For more than 2D, please supply quadrature points and weights.')
        else:
            self.input_dim = Xq.shape[1]
        self.Xq = Xq
        self.wq = wq
        self.alpha = alpha

        # Evaluate the covariance function at the quadrature points
        # If we have some observed data, we need to use the posterior covariance
        if X is not None:
            self.D = gpx.Dataset(X, y[:, None])
            prior = gpx.gps.Prior(mean_function=gpx.mean_functions.Zero(), kernel=k)
            likelihood = gpx.likelihoods.Gaussian(num_datapoints=X.shape[0])
            posterior = prior * likelihood
            posterior.likelihood.obs_stddev = gpx.parameters.Static(1e-6)
            self.posterior = posterior
            Kq = posterior.predict(Xq, train_data=self.D).covariance()
        else:
            self.D = None
            self.prior = gpx.gps.Prior(mean_function=gpx.mean_functions.Zero(), kernel=k)
            Kq = self.prior.predict(Xq).covariance()
        
        # Get the eigenvalues/eigenvectors of the discretized covariance function.
        # We work with the symmetrized Nystrom matrix sqrt(W) Kq sqrt(W), which has
        # the same eigenvalues as Kq W but is symmetric, as eigh requires.
        B = jnp.einsum('i,ij,j->ij', jnp.sqrt(wq), Kq, jnp.sqrt(wq))
        lam, v = jax.scipy.linalg.eigh(B)
        # eigh returns the eigenvalues in increasing order; flip to decreasing order
        # and clip tiny negative values caused by round-off.
        lam = lam[::-1]
        v = v[:, ::-1]
        lam = jnp.where(lam > 0.0, lam, 0.0)

        # Keep only the leading eigenvalues that explain alpha% of the energy
        energy = jnp.cumsum(lam) / jnp.sum(lam)
        i_end = int(jnp.argmax(energy > alpha)) + 1
        lam = lam[:i_end]
        v = v[:, :i_end]

        self.lam = lam
        self.sqrt_lam = jnp.sqrt(lam)
        self.v = v
        self.energy = energy
        self.num_xi = i_end
        
    def eval_phi(self, x):
        """
        Evaluate the eigenfunctions at x.
        """
        if self.D is not None:
            nq = self.Xq.shape[0]
            Xf = jnp.vstack([self.Xq, x])
            latent_dist = self.posterior.predict(Xf, train_data=self.D)
            m = latent_dist.mean()
            C = latent_dist.covariance()
            Kc = C[:nq, nq:].T
            self.tmp_mu = m[nq:]
        else:
            Kc = self.prior.kernel.cross_covariance(x, self.Xq)
            self.tmp_mu = 0.
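        # Nystrom extension: phi_i(x) = (1 / lam_i) * sum_j v_{ji} * sqrt(w_j) * k(x, x_j)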
        phi = jnp.einsum("i,ji,j,rj->ri", 1. / self.lam, self.v, self.wq**0.5, Kc)
        return phi
    
    def __call__(self, x, xi):
        """
        Evaluate the expansion at x and xi.
        """
        phi = self.eval_phi(x)
        return self.tmp_mu + jnp.dot(phi, xi * self.sqrt_lam)

Let’s plot the eigenfunctions and eigenvalues of the squared exponential (SE) covariance function:

k = gpx.kernels.RBF(lengthscale=0.1, n_dims=1)
kle = KarhunenLoeveExpansion(k, nq=5, alpha=.9, input_dim=1)
x = jnp.linspace(0, 1, 100)[:, None]

fig, ax = plt.subplots()
ax.plot(x, kle.eval_phi(x))
ax.set_xlabel('$x$')
ax.set_ylabel(r'$\phi_i(x)$')
sns.despine(trim=True);

fig, ax = plt.subplots()
ax.plot(kle.lam)
ax.set_xlabel('$i$')
ax.set_ylabel(r'$\lambda_i$')
sns.despine(trim=True);

Questions#

  1. The estimated eigenfunctions and eigenvalues do not look very accurate. Perhaps you need to increase the number of quadrature points used in the Nyström approximation. Try nq=20. How do they look now?

  2. How are the eigenvalues of the covariance function affected if you decrease the lengthscale?

  3. The default variance of the squared exponential is one. Try changing it to 2. What changed, if anything?

  4. Experiment with different covariance functions, e.g., the PoweredExponential or the Matern32.

Varying the lengthscale#

Let’s vary the lengthscale of the SE and see what happens to the eigenvalues.

x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
for ell in [0.01, 0.05, 0.1, 0.2, 0.5]:
    k = gpx.kernels.RBF(lengthscale=ell)
    kle = KarhunenLoeveExpansion(k, nq=100, alpha=.9, input_dim=1)
    ax.plot(kle.lam[:5], '-x', markersize=5, markeredgewidth=2, label=r'$\ell={0:1.2f}$'.format(ell))
ax.legend(loc='best')
ax.set_xlabel('$i$')
ax.set_ylabel(r'$\lambda_i$')
sns.despine(trim=True);

Sampling from the random field using \(\xi\)#

k = gpx.kernels.PoweredExponential(lengthscale=0.1)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=0.8, input_dim=1)
x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
for i in range(3):
    key, key_xi = jrandom.split(key)
    xi = jrandom.normal(key_xi, (kle.num_xi,))
    f = kle(x, xi)
    ax.plot(x, f, color=sns.color_palette()[0])
sns.despine(trim=True);

Questions#

  1. Above we show the samples that we get from the KLE using an exponential covariance function. They look too smooth. The samples are supposed to be nowhere differentiable. What is the problem?

  2. How many terms did you need to get samples that really look like samples from an exponential GP?

KLE for GP with Observed Data#

Here we take a look at the KLE of a GP conditioned on some input/output observations. The construction is the same, except that the eigenvalue problem is now solved for the posterior covariance function and the posterior mean takes the place of \(m(\mathbf{x})\).

# Just generate some input/output pairs randomly...
key, key_X, key_y = jrandom.split(key, 3)
X = jrandom.uniform(key_X, shape=(3, 1))
y = jrandom.normal(key_y, shape=(3,))
# X and y are assumed to be observed

k = gpx.kernels.RBF(lengthscale=0.1)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=0.9, X=X, y=y, input_dim=1)

x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
ax.plot(x, kle.eval_phi(x))
ax.set_xlabel('$x$')
ax.set_ylabel(r'$\phi_i(x)$')
sns.despine(trim=True)

fig, ax = plt.subplots()
ax.plot(X, y, 'kx', markeredgewidth=2)
for i in range(3):
    key, key_xi = jrandom.split(key)
    xi = jrandom.normal(key_xi, (kle.num_xi,))
    f = kle(x, xi)
    ax.plot(x, f, color=sns.color_palette()[0])
sns.despine(trim=True);

Questions#

  1. What is the value of the basis functions at the points where we have observations?

  2. Experiment with various covariance functions and hyper-parameters.

Playing in two dimensions#

Let’s experiment with these ideas in two dimensions.

# Uncomment to see the eigenfunctions when there are observations.

# key, key_X, key_y = jrandom.split(key, 3)  # Uncomment me!
# X = jrandom.uniform(key_X, shape=(10, 2))  # Uncomment me!
# y = jrandom.normal(key_y, shape=(10,))  # Uncomment me!
k = gpx.kernels.RBF(lengthscale=0.1, n_dims=2)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=0.9, input_dim=2)#, X=X, y=y)  # Uncomment me!
x = jnp.linspace(0, 1, 32)
X1, X2 = jnp.meshgrid(x, x)
X_all = jnp.hstack([X1.flatten()[:, None], X2.flatten()[:, None]])
print('Number of terms:', kle.num_xi)

# Let's look at them
Phi = kle.eval_phi(X_all)
for i in range(5):
    fig, ax = plt.subplots()
    c = ax.contourf(X1, X2, Phi[:, i].reshape(X1.shape))
    # ax.plot(X[:, 0], X[:, 1], 'bx', markeredgewidth=2)  # Uncomment me!
    fig.colorbar(c)
    sns.despine(trim=True);
Number of terms: 49

Questions#

  1. Try plotting some eigenfunctions with higher index.

  2. Try adding some observations.

  3. Now that you are getting familiar, try to plot a few samples from this random field.