import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks")
# Uncomment the next two lines if running for the book
# import warnings
# warnings.filterwarnings("ignore")
import jax
import jax.numpy as jnp
import jax.random as jrandom
import gpjax as gpx
jax.config.update("jax_enable_x64", True)
key = jrandom.PRNGKey(0)
The Karhunen-Loève Expansion of a Gaussian Process#
If you want to know more about the Karhunen-Loève expansion, you can check the following references:
Betz, W., Papaioannou, I., & Straub, D. (2014). Numerical methods for the discretization of random fields by means of the Karhunen-Loeve expansion. Computer Methods in Applied Mechanics and Engineering, 271, 109-129. doi:10.1016/j.cma.2013.12.010
Karhunen-Loeve Expansion#
Consider a Gaussian process:

\[
f(\cdot) \sim \operatorname{GP}\big(m(\cdot), k(\cdot, \cdot)\big),
\]

where \(m\) is the mean function and \(k\) is the covariance function. The Karhunen-Loève expansion (KLE) of \(f\) allows us to write it as:

\[
f(\mathbf{x}) = m(\mathbf{x}) + \sum_{i=1}^{\infty} \xi_i \sqrt{\lambda_i} \phi_i(\mathbf{x}),
\]

where the random variables

\[
\xi_i = \frac{1}{\sqrt{\lambda_i}} \int \big(f(\mathbf{x}) - m(\mathbf{x})\big) \phi_i(\mathbf{x})\,d\mathbf{x}
\]

are independent standard normal, and \(\lambda_i\) and \(\phi_i(\mathbf{x})\) are the eigenvalues and eigenfunctions, respectively, of the covariance function, i.e.,

\[
\int k(\mathbf{x}, \mathbf{x}') \phi_i(\mathbf{x}')\,d\mathbf{x}' = \lambda_i \phi_i(\mathbf{x}).
\]

Since \(k(\cdot, \cdot)\) is positive definite, the eigenvalues are all positive and the eigenfunctions are orthonormal:

\[
\int \phi_i(\mathbf{x}) \phi_j(\mathbf{x})\,d\mathbf{x} = \delta_{ij}.
\]
Truncated KLE#
Usually, we truncate the KLE to a finite order \(M\), i.e., we write

\[
f_M(\mathbf{x}) = m(\mathbf{x}) + \sum_{i=1}^{M} \xi_i \sqrt{\lambda_i} \phi_i(\mathbf{x}).
\]

But how do we pick \(M\) in practice?
In order to answer this question, notice that the variance of the field at the point \(\mathbf{x}\) is given by:

\[
\mathbb{V}[f(\mathbf{x})] = k(\mathbf{x}, \mathbf{x}) = \sum_{i=1}^{\infty} \lambda_i \phi_i^2(\mathbf{x}).
\]

The energy of the field, \(\mathcal{E}[f(\cdot)]\), is defined to be:

\[
\mathcal{E}[f(\cdot)] := \int \mathbb{V}[f(\mathbf{x})]\,d\mathbf{x} = \sum_{i=1}^{\infty} \lambda_i,
\]

where we have used the orthonormality of the \(\phi_i\)'s. The energy of the field is a measure of its total variance. The idea is to select \(M\) so that the energy of the truncated field \(f_M\) captures a fraction \(\alpha\) of the energy of the original field. That is, we pick \(M\) so that

\[
\mathcal{E}[f_M(\cdot)] \ge \alpha\, \mathcal{E}[f(\cdot)],
\]

or

\[
\sum_{i=1}^{M} \lambda_i \ge \alpha \sum_{i=1}^{\infty} \lambda_i.
\]

Typically, \(\alpha = 0.95\).
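To make the truncation rule concrete, here is a minimal sketch of how one could pick \(M\) from a vector of eigenvalues using the energy criterion. The helper pick_num_terms is purely illustrative (it is not used later), it relies on the jax.numpy import from the top of the notebook, and the geometric spectrum at the end is made up.

def pick_num_terms(lam, alpha=0.95):
    """Pick the smallest M whose leading eigenvalues capture a fraction alpha of the energy."""
    lam = jnp.sort(lam)[::-1]                    # eigenvalues in decreasing order
    energy = jnp.cumsum(lam) / jnp.sum(lam)      # cumulative energy fraction
    return int(jnp.argmax(energy >= alpha)) + 1  # first index that reaches alpha, as a count

# Example with a made-up, geometrically decaying spectrum:
lam = 0.5 ** jnp.arange(20)
print(pick_num_terms(lam, alpha=0.95))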
Why is this useful?#
The KLE allows us to reduce the dimensionality of random fields. This is extremely useful in uncertainty propagation and model calibration tasks. For example, in uncertainty propagation, by employing the KLE one only has to deal with a finite set of Gaussian random variables \(\xi_i\) instead of an infinite-dimensional Gaussian random field.
class KarhunenLoeveExpansion(object):
"""
A class representing the Karhunen Loeve Expansion of a Gaussian random field.
It uses the Nystrom approximation to do it.
Arguments:
k - The covariance function.
Xq - Quadrature points for the Nystrom approximation.
wq - Quadrature weights for the Nystrom approximation.
alpha - The percentage of the energy of the field that you want to keep.
X - Observed inputs (optional).
Y - Observed field values (optional).
"""
    def __init__(self, k, Xq=None, wq=None, nq=100, alpha=0.9, X=None, y=None, *, input_dim=None):
        self.k = k
        # Default to a one-dimensional input space if input_dim is not given
        self.input_dim = 1 if input_dim is None else input_dim
        # Generate quadrature points if they are not supplied
        if Xq is None:
if self.input_dim == 1:
Xq = jnp.linspace(0, 1, nq)[:, None]
wq = jnp.ones((nq, )) / nq
elif self.input_dim == 2:
nq = int(jnp.sqrt(nq))
x = jnp.linspace(0, 1, nq)
X1, X2 = jnp.meshgrid(x, x)
Xq = jnp.hstack([X1.flatten()[:, None], X2.flatten()[:, None]])
wq = jnp.ones((nq ** 2, )) / nq ** 2
else:
raise NotImplementedError('For more than 2D, please supply quadrature points and weights.')
else:
self.input_dim = Xq.shape[1]
self.Xq = Xq
self.wq = wq
self.k = k
self.alpha = alpha
# Evaluate the covariance function at the quadrature points
# If we have some observed data, we need to use the posterior covariance
if X is not None:
self.D = gpx.Dataset(X, y[:, None])
prior = gpx.gps.Prior(mean_function=gpx.mean_functions.Zero(), kernel=k)
likelihood = gpx.likelihoods.Gaussian(num_datapoints=X.shape[0])
posterior = prior * likelihood
posterior.likelihood.obs_stddev = gpx.parameters.Static(1e-6)
self.posterior = posterior
Kq = posterior.predict(Xq, train_data=self.D).covariance()
else:
self.D = None
self.prior = gpx.gps.Prior(mean_function=gpx.mean_functions.Zero(), kernel=k)
Kq = self.prior.predict(Xq).covariance()
        # Get the eigenvalues/eigenvectors of the discretized covariance function.
        # Symmetrize with the square roots of the quadrature weights, so that eigh sees a
        # symmetric matrix and the scaling matches the sqrt(wq) factor used in eval_phi.
        B = jnp.einsum('i,ij,j->ij', jnp.sqrt(wq), Kq, jnp.sqrt(wq))
        lam, v = jax.scipy.linalg.eigh(B)
lam = lam[::-1]
lam = lam.at[lam <= 0.].set(0.)
# Keep only the eigenvalues that explain alpha% of the energy
energy = jnp.cumsum(lam) / jnp.sum(lam)
i_end = jnp.arange(energy.shape[0])[energy > alpha][0] + 1
lam = lam[:i_end]
v = v[:, ::-1]
v = v[:, :i_end]
self.lam = lam
self.sqrt_lam = jnp.sqrt(lam)
self.v = v
self.energy = energy
self.num_xi = i_end
def eval_phi(self, x):
"""
Evaluate the eigenfunctions at x.
"""
if self.D is not None:
nq = self.Xq.shape[0]
Xf = jnp.vstack([self.Xq, x])
latent_dist = self.posterior.predict(Xf, train_data=self.D)
m = latent_dist.mean()
C = latent_dist.covariance()
Kc = C[:nq, nq:].T
self.tmp_mu = m[nq:]
else:
Kc = self.prior.kernel.cross_covariance(x, self.Xq)
self.tmp_mu = 0.
phi = jnp.einsum("i,ji,j,rj->ri", 1. / self.lam, self.v, self.wq**0.5, Kc)
return phi
def __call__(self, x, xi):
"""
Evaluate the expansion at x and xi.
"""
phi = self.eval_phi(x)
return self.tmp_mu + jnp.dot(phi, xi * self.sqrt_lam)
Let’s plot the eigenfunctions and eigenvalues of the squared exponential (SE) covariance function:
k = gpx.kernels.RBF(lengthscale=0.1, n_dims=1)
kle = KarhunenLoeveExpansion(k, nq=5, alpha=.9, input_dim=1)
x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
ax.plot(x, kle.eval_phi(x))
ax.set_xlabel('$x$')
ax.set_ylabel('$\phi_i(x)$')
sns.despine(trim=True);
fig, ax = plt.subplots()
ax.plot(kle.lam)
ax.set_xlabel('$i$')
ax.set_ylabel('$\lambda_i$')
sns.despine(trim=True);
Questions#
The estimated eigenfunctions and eigenvalues do not look very accurate. Perhaps you need to increase the number of quadrature points used in the Nystrom approximation. Try nq=20. How do they look now?
How are the eigenvalues of the covariance function affected if you decrease the lengthscale?
The default variance of the squared exponential is one. Try changing it to 2. What changed, if anything?
Experiment with different covariance functions, e.g., the PoweredExponential or the Matern32. (One possible starting point for these experiments is sketched right after this list.)
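If you want a starting point for these experiments, here is one possible sketch. It assumes the KarhunenLoeveExpansion class above, and that your GPJax version exposes the variance argument of RBF and the Matern32 kernel with the same constructor arguments; adjust nq, the variance, and the kernel as the questions suggest.

# More quadrature points and a larger variance (try other values as well).
k = gpx.kernels.RBF(lengthscale=0.1, variance=2.0, n_dims=1)
kle = KarhunenLoeveExpansion(k, nq=20, alpha=.9, input_dim=1)
x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
ax.plot(x, kle.eval_phi(x))
ax.set_xlabel('$x$')
ax.set_ylabel(r'$\phi_i(x)$')
sns.despine(trim=True);

# Compare the eigenvalues against a Matern 3/2 kernel.
k_m32 = gpx.kernels.Matern32(lengthscale=0.1, n_dims=1)
kle_m32 = KarhunenLoeveExpansion(k_m32, nq=20, alpha=.9, input_dim=1)
fig, ax = plt.subplots()
ax.plot(kle.lam, '-x', markersize=5, markeredgewidth=2, label='RBF')
ax.plot(kle_m32.lam, '-o', markersize=5, label='Matern 3/2')
ax.set_xlabel('$i$')
ax.set_ylabel(r'$\lambda_i$')
ax.legend(loc='best')
sns.despine(trim=True);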
Varying the lengthscale#
Let’s vary the lengthscale of the SE and see what happens to the eigenvalues.
x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
for ell in [0.01, 0.05, 0.1, 0.2, 0.5]:
k = gpx.kernels.RBF(lengthscale=ell)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=.9, input_dim=1)
ax.plot(kle.lam[:5], '-x', markersize=5, markeredgewidth=2, label='$\ell={0:1.2f}$'.format(ell))
ax.legend(loc='best')
ax.set_xlabel('$i$')
ax.set_ylabel('$\lambda_i$')
sns.despine(trim=True);
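As a follow-up to the plot above, here is a minimal sketch that reports how many terms the truncation keeps (kle.num_xi) as the lengthscale shrinks, under the same setup as the loop above.

# Number of KLE terms needed to capture 90% of the energy for each lengthscale.
for ell in [0.01, 0.05, 0.1, 0.2, 0.5]:
    k = gpx.kernels.RBF(lengthscale=ell)
    kle = KarhunenLoeveExpansion(k, nq=100, alpha=.9, input_dim=1)
    print(f"lengthscale = {ell:1.2f} -> number of terms = {kle.num_xi}")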
Sampling from the random field using \(\xi\)#
k = gpx.kernels.PoweredExponential(lengthscale=0.1)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=0.8, input_dim=1)
x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
for i in range(3):
key, key_xi = jrandom.split(key)
xi = jrandom.normal(key_xi, (kle.num_xi,))
f = kle(x, xi)
ax.plot(x, f, color=sns.color_palette()[0])
sns.despine(trim=True);
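To connect this back to the uncertainty-propagation use case mentioned earlier, here is a minimal sketch that pushes Monte Carlo samples of \(\xi\) through the truncated KLE built above and summarizes the field pointwise. The sample size of 1,000 is arbitrary, and the plain Python loop is simple rather than fast.

# Monte Carlo over the KLE coefficients xi to estimate pointwise statistics of f.
num_samples = 1000
key, key_xi = jrandom.split(key)
Xi = jrandom.normal(key_xi, (num_samples, kle.num_xi))
F = jnp.stack([kle(x, xi) for xi in Xi])  # each row is one realization of f on the grid
f_mean = F.mean(axis=0)
f_std = F.std(axis=0)
fig, ax = plt.subplots()
ax.plot(x, f_mean, label='MC mean')
ax.fill_between(x.flatten(), f_mean - 2 * f_std, f_mean + 2 * f_std,
                alpha=0.25, label=r'MC mean $\pm$ 2 std')
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax.legend(loc='best')
sns.despine(trim=True);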
Questions#
Above we show the samples that we get from the KLE using an exponential covariance function. They look too smooth. The samples are supposed to be nowhere differentiable. What is the problem?
How many terms did you need to get samples that really look like samples from an exponential GP? (The sketch below is one way to check.)
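For the second question, one possible check is sketched below: rebuild the expansion with increasing energy fractions \(\alpha\) and report how many terms are kept. The specific \(\alpha\) values are only illustrative.

# The exponential kernel has a slowly decaying spectrum, so capturing more energy
# requires many more terms than for the squared exponential.
k = gpx.kernels.PoweredExponential(lengthscale=0.1)
for alpha in [0.8, 0.9, 0.95, 0.99]:
    kle = KarhunenLoeveExpansion(k, nq=100, alpha=alpha, input_dim=1)
    print(f"alpha = {alpha:1.2f} -> number of terms = {kle.num_xi}")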
KLE for GP with Observed Data#
Here we take a look at the KLE of a GP where we have made some input/output observations.
# Just generate some input/output pairs randomly...
key, key_X, key_y = jrandom.split(key, 3)
X = jrandom.uniform(key_X, shape=(3, 1))
y = jrandom.normal(key_y, shape=(3,))
# X and y are assumed to be observed
k = gpx.kernels.RBF(lengthscale=0.1)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=0.9, X=X, y=y, input_dim=1)
x = jnp.linspace(0, 1, 100)[:, None]
fig, ax = plt.subplots()
ax.plot(x, kle.eval_phi(x))
ax.set_xlabel('$x$')
ax.set_ylabel('$\phi_i(x)$')
sns.despine(trim=True)
fig, ax = plt.subplots()
ax.plot(X, y, 'kx', markeredgewidth=2)
for i in range(3):
key, key_xi = jrandom.split(key)
xi = jrandom.normal(key_xi, (kle.num_xi,))
f = kle(x, xi)
ax.plot(x, f, color=sns.color_palette()[0])
sns.despine(trim=True);
Questions#
What is the value of the basis functions at the points where we have observations? (A quick numerical check is sketched after this list.)
Experiment with various covariance functions and hyper-parameters.
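A quick numerical check of the first question, assuming the kle object with observations (X, y) constructed above:

# If the posterior variance at the observed inputs is (numerically) zero, the
# eigenfunctions of the posterior covariance should also vanish there.
Phi_obs = kle.eval_phi(X)
print(Phi_obs)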
Playing in two-dimensions#
Let’s experiment with these ideas in two dimensions.
# Uncomment to see the eigenfunctions when there are observations.
# key, key_X, key_y = jrandom.split(key, 3) # Uncomment me!
# X = jrandom.uniform(key_X, shape=(10, 2)) # Uncomment me!
# y = jrandom.normal(key_y, shape=(10,)) # Uncomment me!
k = gpx.kernels.RBF(lengthscale=0.1, n_dims=2)
kle = KarhunenLoeveExpansion(k, nq=100, alpha=0.9, input_dim=2)#, X=X, y=y) # Uncomment me!
x = jnp.linspace(0, 1, 32)
X1, X2 = jnp.meshgrid(x, x)
X_all = jnp.hstack([X1.flatten()[:, None], X2.flatten()[:, None]])
print('Number of terms:', kle.num_xi)
# Let's look at them
Phi = kle.eval_phi(X_all)
for i in range(5):
fig, ax = plt.subplots()
c = ax.contourf(X1, X2, Phi[:, i].reshape(X1.shape))
# ax.plot(X[:, 0], X[:, 1], 'bx', markeredgewidth=2) # Uncomment me!
fig.colorbar(c)
sns.despine(trim=True);
Number of terms: 49
Questions#
Try plotting some eigenfunctions with higher index.
Try adding some observations.
Now that you are getting familiar, try to plot a few samples from this random field (one possible approach is sketched below).
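One possible approach, sketched under the assumption that the two-dimensional kle, the grid X_all, and the meshgrid arrays X1 and X2 from above are still in scope:

# Sample the 2D field by drawing xi ~ N(0, I) and evaluating the truncated KLE.
for i in range(3):
    key, key_xi = jrandom.split(key)
    xi = jrandom.normal(key_xi, (kle.num_xi,))
    f = kle(X_all, xi)
    fig, ax = plt.subplots()
    c = ax.contourf(X1, X2, f.reshape(X1.shape))
    fig.colorbar(c)
    sns.despine(trim=True);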