import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks");

!pip install orthojax --upgrade
!pip install py-design --upgrade

Polynomial Chaos in Many Dimensions#

Assume that we have a random vector:

\[ \Xi = (\Xi_1, \dots,\Xi_d), \]

where \(\Xi_i\) are independent random variables. The space of interest is the space of square-integrable functions on the probability space of \(\Xi\):

\[ L^2(\Xi) = \left\{ f : \mathbb{R}^d \to \mathbb{R} \mid \int_{\mathbb{R}^d} f(\xi)^2 \, d\mathbb{P}(\xi) < \infty \right\}. \]

To build an orthonormal basis in \(L^2(\Xi)\), we exploit the fact that:

\[ L^2(\Xi) = L^2(\Xi_1) \otimes \dots \otimes L^2(\Xi_d), \]

where the equality is understood as an isomorphism of Hilbert spaces. We can then build an orthonormal basis in \(L^2(\Xi)\) by taking the tensor product of univariate orthonormal bases in \(L^2(\Xi_i)\).

Using the multi-index notation, we can write the basis as:

\[ \left\{ \phi_\alpha(\xi) = \prod_{i=1}^d \phi_{\alpha_i}(\xi_i) \mid \alpha \in \mathbb{N}^d_0\right\}, \]

where \(\phi_{\alpha_i}\) denotes the \(\alpha_i\)-th element of the univariate orthonormal basis of \(L^2(\Xi_i)\).

Typically, we need to truncate the basis to a finite number of terms. This is done by choosing a maximum degree \(\rho\) and taking only the multi-indices \(\alpha\) such that \(|\alpha| \leq \rho\). The truncated basis is then:

\[ \left\{ \phi_\alpha(\xi) = \prod_{i=1}^d \phi_{\alpha_i}(\xi_i) \mid \alpha \in \mathbb{N}^d_0, |\alpha| \leq \rho \right\}. \]
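To see the truncation concretely, here is a plain-Python sketch (independent of the libraries used below) that enumerates the retained multi-indices; the number of terms is \(\binom{\rho+d}{d}\).

import itertools

def total_degree_multi_indices(d, rho):
    # All alpha in N_0^d with alpha_1 + ... + alpha_d <= rho.
    return [alpha for alpha in itertools.product(range(rho + 1), repeat=d)
            if sum(alpha) <= rho]

print(len(total_degree_multi_indices(2, 3)))  # 10 = binom(3 + 2, 2)
print(len(total_degree_multi_indices(5, 3)))  # 56 = binom(3 + 5, 5)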

Example: Legendre Polynomials in 2D#

Let us consider the case where \(\Xi_1\) and \(\Xi_2\) are independent uniform random variables in \([-1,1]\). Here is how we can construct the basis in \(L^2(\Xi)\):

import orthojax as ojax

degree1 = 3
phi1 = ojax.make_legendre_polynomial(degree1)
degree2 = 3
phi2 = ojax.make_legendre_polynomial(degree2)

total_degree = 6
phi = ojax.TensorProduct(total_degree, [phi1, phi2])

Here are the terms that are used to construct the basis:

print(f"Number of basis functions: {phi.num_basis}")
print("Terms in truncated tensor product:")
for i in range(phi.num_basis):
  print(f"ψ{i} = φ1{phi.terms[i][0]}⊗φ2{phi.terms[i][1]}")
Number of basis functions: 16
Terms in truncated tensor product:
ψ0 = φ10⊗φ20
ψ1 = φ11⊗φ20
ψ2 = φ10⊗φ21
ψ3 = φ12⊗φ20
ψ4 = φ11⊗φ21
ψ5 = φ10⊗φ22
ψ6 = φ13⊗φ20
ψ7 = φ12⊗φ21
ψ8 = φ11⊗φ22
ψ9 = φ10⊗φ23
ψ10 = φ13⊗φ21
ψ11 = φ12⊗φ22
ψ12 = φ11⊗φ23
ψ13 = φ13⊗φ22
ψ14 = φ12⊗φ23
ψ15 = φ13⊗φ23
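
The count of 16 is consistent with the truncation: each univariate degree is at most 3 and the total-degree cap is 6, so no pair \((\alpha_1, \alpha_2)\) is dropped and we keep the full \(4 \times 4\) tensor grid. A quick sanity check:

# Every pair (a1, a2) with a1, a2 <= 3 satisfies a1 + a2 <= 6.
print(sum(1 for a1 in range(4) for a2 in range(4) if a1 + a2 <= 6))  # 16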

You can evaluate it as follows:

import numpy as np

xs = np.random.randn(100, 2)
phis = phi(xs)
print(phis.shape)
(100, 16)

Let’s plot the first few basis functions:

x1 = np.linspace(-1, 1, 100)
x2 = np.linspace(-1, 1, 100)
X1, X2 = np.meshgrid(x1, x2)
X = np.stack([X1, X2], axis=-1)
X_flat = np.reshape(X, [-1, 2])
phis = phi(X_flat)
Z = np.reshape(phis, [100, 100, phi.num_basis])

for i in range(phi.num_basis):
  fig, ax = plt.subplots(figsize=(3, 3))
  ax.contourf(X1, X2, Z[:, :, i], levels=20)
  ax.set_title(rf"$\psi_{{{i}}} = \phi_{{1{phi.terms[i][0]}}}\otimes \phi_{{2{phi.terms[i][1]}}}$")
[Figure: contour plots of the 16 tensor-product Legendre basis functions \(\psi_0,\ldots,\psi_{15}\).]

Let’s verify orthogonality. We need a quadrature rule in 2D. The simplest one is the tensor product of 1D quadrature rules. This would work, but it requires far too many quadrature points and does not scale to higher dimensions.

Instead, we are going to use Smolyak sparse grid quadrature. This is a quadrature rule built from tensor products of nested 1D quadrature rules, but it only keeps a subset of the full tensor product. Nested quadrature rules are rules whose points at level \(\ell\) are a subset of the points at level \(\ell+1\).
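
To get a feel for the first point, here is a quick back-of-the-envelope sketch (not part of the original code): with \(n\) nodes per dimension, a full tensor-product rule needs \(n^d\) nodes.

n = 33  # e.g., a nested 1D rule with 33 nodes
for d in [2, 5, 10]:
    print(f"d = {d:2d}: full tensor product needs {n**d:,} nodes")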

Here is a Smolyak sparse grid quadrature rule in 2D (level 7 below) that you can use to integrate the Legendre polynomials:

import design

num_dim = 2
level = 7
xs, ws = design.sparse_grid(num_dim, level, 'F1')
# The normalization below is necessary. I don't have time
# to change the code in py-design.
ws = ws / (2 ** num_dim)
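
Just to see what we got, we can print the number of nodes and check that the weights sum to one (the count depends on py-design's level and growth conventions, so we do not assume a particular number):

print("number of nodes:", xs.shape[0])
print("sum of weights:", ws.sum())  # should be ~1 for a probability measure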

Here are the quadrature points:

fig, ax = plt.subplots()
ax.plot(xs[:, 0], xs[:, 1], '.', ms=1)
ax.set_title(f"Level {level} sparse grid points");
ax.set_xlabel("$\\xi_1$")
ax.set_ylabel("$\\xi_2$");
[Figure: the level 7 sparse grid quadrature points in 2D.]

Here is how we can do the inner products:

phis = phi(xs)
for i in range(phi.num_basis):
    for j in range(i, phi.num_basis):
        dot_ij = np.einsum("i,i,i->", ws, phis[:, i], phis[:, j])
        print(f"<ψ{i}{j}> \t= {dot_ij:.3f}")
<ψ0,ψ0> 	= 1.000
<ψ0,ψ1> 	= -0.000
<ψ0,ψ2> 	= -0.000
<ψ0,ψ3> 	= 0.000
<ψ0,ψ4> 	= 0.000
<ψ0,ψ5> 	= 0.000
<ψ0,ψ6> 	= -0.000
<ψ0,ψ7> 	= 0.000
<ψ0,ψ8> 	= 0.000
<ψ0,ψ9> 	= -0.000
<ψ0,ψ10> 	= -0.000
<ψ0,ψ11> 	= -0.031
<ψ0,ψ12> 	= -0.000
<ψ0,ψ13> 	= -0.000
<ψ0,ψ14> 	= -0.000
<ψ0,ψ15> 	= 0.000
<ψ1,ψ1> 	= 1.000
<ψ1,ψ2> 	= -0.000
<ψ1,ψ3> 	= -0.000
<ψ1,ψ4> 	= -0.000
<ψ1,ψ5> 	= -0.000
<ψ1,ψ6> 	= -0.000
<ψ1,ψ7> 	= -0.000
<ψ1,ψ8> 	= -0.028
<ψ1,ψ9> 	= 0.000
<ψ1,ψ10> 	= -0.000
<ψ1,ψ11> 	= 0.000
<ψ1,ψ12> 	= -0.000
<ψ1,ψ13> 	= -0.032
<ψ1,ψ14> 	= -0.000
<ψ1,ψ15> 	= -0.000
<ψ2,ψ2> 	= 1.000
<ψ2,ψ3> 	= -0.000
<ψ2,ψ4> 	= -0.000
<ψ2,ψ5> 	= -0.000
<ψ2,ψ6> 	= 0.000
<ψ2,ψ7> 	= -0.028
<ψ2,ψ8> 	= -0.000
<ψ2,ψ9> 	= -0.000
<ψ2,ψ10> 	= -0.000
<ψ2,ψ11> 	= 0.000
<ψ2,ψ12> 	= -0.000
<ψ2,ψ13> 	= -0.000
<ψ2,ψ14> 	= -0.032
<ψ2,ψ15> 	= -0.000
<ψ3,ψ3> 	= 1.000
<ψ3,ψ4> 	= 0.000
<ψ3,ψ5> 	= -0.031
<ψ3,ψ6> 	= 0.000
<ψ3,ψ7> 	= -0.000
<ψ3,ψ8> 	= 0.000
<ψ3,ψ9> 	= -0.000
<ψ3,ψ10> 	= -0.000
<ψ3,ψ11> 	= -0.025
<ψ3,ψ12> 	= 0.000
<ψ3,ψ13> 	= -0.000
<ψ3,ψ14> 	= -0.000
<ψ3,ψ15> 	= 0.000
<ψ4,ψ4> 	= 0.975
<ψ4,ψ5> 	= 0.000
<ψ4,ψ6> 	= 0.000
<ψ4,ψ7> 	= -0.000
<ψ4,ψ8> 	= -0.000
<ψ4,ψ9> 	= 0.000
<ψ4,ψ10> 	= -0.029
<ψ4,ψ11> 	= 0.000
<ψ4,ψ12> 	= -0.029
<ψ4,ψ13> 	= 0.000
<ψ4,ψ14> 	= 0.000
<ψ4,ψ15> 	= -0.026
<ψ5,ψ5> 	= 1.000
<ψ5,ψ6> 	= -0.000
<ψ5,ψ7> 	= 0.000
<ψ5,ψ8> 	= -0.000
<ψ5,ψ9> 	= 0.000
<ψ5,ψ10> 	= 0.000
<ψ5,ψ11> 	= -0.025
<ψ5,ψ12> 	= -0.000
<ψ5,ψ13> 	= -0.000
<ψ5,ψ14> 	= -0.000
<ψ5,ψ15> 	= 0.000
<ψ6,ψ6> 	= 1.000
<ψ6,ψ7> 	= -0.000
<ψ6,ψ8> 	= -0.032
<ψ6,ψ9> 	= 0.000
<ψ6,ψ10> 	= -0.000
<ψ6,ψ11> 	= -0.000
<ψ6,ψ12> 	= -0.000
<ψ6,ψ13> 	= -0.044
<ψ6,ψ14> 	= -0.000
<ψ6,ψ15> 	= -0.000
<ψ7,ψ7> 	= 0.978
<ψ7,ψ8> 	= 0.000
<ψ7,ψ9> 	= -0.032
<ψ7,ψ10> 	= 0.000
<ψ7,ψ11> 	= -0.000
<ψ7,ψ12> 	= 0.000
<ψ7,ψ13> 	= 0.000
<ψ7,ψ14> 	= -0.019
<ψ7,ψ15> 	= -0.000
<ψ8,ψ8> 	= 0.978
<ψ8,ψ9> 	= -0.000
<ψ8,ψ10> 	= 0.000
<ψ8,ψ11> 	= -0.000
<ψ8,ψ12> 	= 0.000
<ψ8,ψ13> 	= -0.019
<ψ8,ψ14> 	= 0.000
<ψ8,ψ15> 	= -0.000
<ψ9,ψ9> 	= 1.000
<ψ9,ψ10> 	= -0.000
<ψ9,ψ11> 	= -0.000
<ψ9,ψ12> 	= -0.000
<ψ9,ψ13> 	= -0.000
<ψ9,ψ14> 	= -0.044
<ψ9,ψ15> 	= -0.000
<ψ10,ψ10> 	= 0.961
<ψ10,ψ11> 	= 0.000
<ψ10,ψ12> 	= -0.026
<ψ10,ψ13> 	= -0.000
<ψ10,ψ14> 	= -0.000
<ψ10,ψ15> 	= -0.044
<ψ11,ψ11> 	= 0.987
<ψ11,ψ12> 	= 0.000
<ψ11,ψ13> 	= 0.000
<ψ11,ψ14> 	= 0.000
<ψ11,ψ15> 	= -0.000
<ψ12,ψ12> 	= 0.961
<ψ12,ψ13> 	= -0.000
<ψ12,ψ14> 	= -0.000
<ψ12,ψ15> 	= -0.044
<ψ13,ψ13> 	= 0.966
<ψ13,ψ14> 	= 0.000
<ψ13,ψ15> 	= 0.000
<ψ14,ψ14> 	= 0.966
<ψ14,ψ15> 	= 0.000
<ψ15,ψ15> 	= 0.939

You want the quadrature level high enough that these inner products are accurate. Here the diagonal entries are close to one and the off-diagonal entries are close to zero, so the basis is orthonormal up to a small quadrature error. This looks correct.
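
A compact way to summarize the table above (reusing ws and phis from the previous cell) is to assemble the Gram matrix \(G_{ij} = \sum_k w_k \psi_i(x_k) \psi_j(x_k)\) and report the largest deviation from the identity:

G = np.einsum("k,ki,kj->ij", ws, phis, phis)
print("max |G - I| =", np.max(np.abs(G - np.eye(phi.num_basis))))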

Hermite Polynomials in 2D#

Let’s do the same thing for normal random variables. Here \(\Xi_1\) and \(\Xi_2\) are independent standard normal random variables and \(\Xi = (\Xi_1, \Xi_2)\).

Make the polynomials:

degree1 = 3
phi1 = ojax.make_hermite_polynomial(degree1)
degree2 = 3
phi2 = ojax.make_hermite_polynomial(degree2)

total_degree = 6
phi = ojax.TensorProduct(total_degree, [phi1, phi2])

Let’s plot them:

x1 = np.linspace(-3, 3, 100)
x2 = np.linspace(-3, 3, 100)
X1, X2 = np.meshgrid(x1, x2)
X = np.stack([X1, X2], axis=-1)
X_flat = np.reshape(X, [-1, 2])
phis = phi(X_flat)
Z = np.reshape(phis, [100, 100, phi.num_basis])

for i in range(phi.num_basis):
  fig, ax = plt.subplots(figsize=(3, 3))
  ax.contourf(X1, X2, Z[:, :, i], levels=20)
  ax.set_title(rf"$\psi_{{{i}}} = \phi_{{1{phi.terms[i][0]}}}\otimes \phi_{{2{phi.terms[i][1]}}}$")
[Figure: contour plots of the 16 tensor-product Hermite basis functions \(\psi_0,\ldots,\psi_{15}\).]

Let’s test orthogonality. Here we also need a sparse grid, but one based on Gauss-Hermite quadrature.
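
The normalization in the code below comes from the standard Gauss-Hermite change of variables (assuming py-design's 'GH' rule integrates against the physicists' weight \(e^{-t^2}\), which is what the scaling suggests). For a standard normal \(X\):

\[ \mathbb{E}[f(X)] = \frac{1}{\sqrt{2\pi}} \int f(x)\, e^{-x^2/2}\, dx = \frac{1}{\sqrt{\pi}} \int f(\sqrt{2}\, t)\, e^{-t^2}\, dt \approx \sum_k \frac{w_k}{\sqrt{\pi}}\, f(\sqrt{2}\, t_k), \]

so in \(d\) dimensions the nodes are multiplied by \(\sqrt{2}\) and the weights are divided by \(\pi^{d/2}\).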

num_dim = 2
level = 6
xs, ws = design.sparse_grid(num_dim, level, 'GH') # Notice that the rule changed from 'F1' to 'GH'
# The normalization below is necessary. I don't have time
# to change the code in py-design.
xs = xs * np.sqrt(2)
ws = ws / np.sqrt(np.pi ** num_dim)

phis = phi(xs)
for i in range(phi.num_basis):
    for j in range(i, phi.num_basis):
        dot_ij = np.einsum("i,i,i->", ws, phis[:, i], phis[:, j])
        print(f"<ψ{i}{j}> \t= {dot_ij:.3f}")
<ψ0,ψ0> 	= 1.000
<ψ0,ψ1> 	= -0.000
<ψ0,ψ2> 	= -0.000
<ψ0,ψ3> 	= 0.000
<ψ0,ψ4> 	= 0.000
<ψ0,ψ5> 	= 0.000
<ψ0,ψ6> 	= -0.000
<ψ0,ψ7> 	= 0.000
<ψ0,ψ8> 	= 0.000
<ψ0,ψ9> 	= -0.000
<ψ0,ψ10> 	= -0.000
<ψ0,ψ11> 	= 0.000
<ψ0,ψ12> 	= -0.000
<ψ0,ψ13> 	= -0.000
<ψ0,ψ14> 	= -0.000
<ψ0,ψ15> 	= -0.000
<ψ1,ψ1> 	= 1.000
<ψ1,ψ2> 	= 0.000
<ψ1,ψ3> 	= -0.000
<ψ1,ψ4> 	= -0.000
<ψ1,ψ5> 	= -0.000
<ψ1,ψ6> 	= -0.000
<ψ1,ψ7> 	= -0.000
<ψ1,ψ8> 	= 0.000
<ψ1,ψ9> 	= 0.000
<ψ1,ψ10> 	= 0.000
<ψ1,ψ11> 	= -0.000
<ψ1,ψ12> 	= -0.000
<ψ1,ψ13> 	= -0.000
<ψ1,ψ14> 	= -0.000
<ψ1,ψ15> 	= -0.000
<ψ2,ψ2> 	= 1.000
<ψ2,ψ3> 	= -0.000
<ψ2,ψ4> 	= -0.000
<ψ2,ψ5> 	= -0.000
<ψ2,ψ6> 	= 0.000
<ψ2,ψ7> 	= 0.000
<ψ2,ψ8> 	= -0.000
<ψ2,ψ9> 	= -0.000
<ψ2,ψ10> 	= -0.000
<ψ2,ψ11> 	= -0.000
<ψ2,ψ12> 	= 0.000
<ψ2,ψ13> 	= -0.000
<ψ2,ψ14> 	= -0.000
<ψ2,ψ15> 	= -0.000
<ψ3,ψ3> 	= 1.000
<ψ3,ψ4> 	= 0.000
<ψ3,ψ5> 	= 0.000
<ψ3,ψ6> 	= 0.000
<ψ3,ψ7> 	= -0.000
<ψ3,ψ8> 	= -0.000
<ψ3,ψ9> 	= -0.000
<ψ3,ψ10> 	= -0.000
<ψ3,ψ11> 	= 0.000
<ψ3,ψ12> 	= -0.000
<ψ3,ψ13> 	= -0.000
<ψ3,ψ14> 	= -0.000
<ψ3,ψ15> 	= 0.000
<ψ4,ψ4> 	= 1.000
<ψ4,ψ5> 	= 0.000
<ψ4,ψ6> 	= 0.000
<ψ4,ψ7> 	= -0.000
<ψ4,ψ8> 	= -0.000
<ψ4,ψ9> 	= 0.000
<ψ4,ψ10> 	= -0.000
<ψ4,ψ11> 	= -0.000
<ψ4,ψ12> 	= -0.000
<ψ4,ψ13> 	= 0.000
<ψ4,ψ14> 	= 0.000
<ψ4,ψ15> 	= -0.000
<ψ5,ψ5> 	= 1.000
<ψ5,ψ6> 	= -0.000
<ψ5,ψ7> 	= -0.000
<ψ5,ψ8> 	= -0.000
<ψ5,ψ9> 	= 0.000
<ψ5,ψ10> 	= -0.000
<ψ5,ψ11> 	= 0.000
<ψ5,ψ12> 	= -0.000
<ψ5,ψ13> 	= -0.000
<ψ5,ψ14> 	= -0.000
<ψ5,ψ15> 	= 0.000
<ψ6,ψ6> 	= 1.000
<ψ6,ψ7> 	= -0.000
<ψ6,ψ8> 	= -0.000
<ψ6,ψ9> 	= 0.000
<ψ6,ψ10> 	= -0.000
<ψ6,ψ11> 	= -0.000
<ψ6,ψ12> 	= -0.000
<ψ6,ψ13> 	= 0.000
<ψ6,ψ14> 	= -0.000
<ψ6,ψ15> 	= -0.000
<ψ7,ψ7> 	= 1.000
<ψ7,ψ8> 	= -0.000
<ψ7,ψ9> 	= -0.000
<ψ7,ψ10> 	= 0.000
<ψ7,ψ11> 	= -0.000
<ψ7,ψ12> 	= 0.000
<ψ7,ψ13> 	= -0.000
<ψ7,ψ14> 	= -0.000
<ψ7,ψ15> 	= -0.000
<ψ8,ψ8> 	= 1.000
<ψ8,ψ9> 	= -0.000
<ψ8,ψ10> 	= 0.000
<ψ8,ψ11> 	= -0.000
<ψ8,ψ12> 	= 0.000
<ψ8,ψ13> 	= -0.000
<ψ8,ψ14> 	= -0.000
<ψ8,ψ15> 	= -0.000
<ψ9,ψ9> 	= 1.000
<ψ9,ψ10> 	= -0.000
<ψ9,ψ11> 	= -0.000
<ψ9,ψ12> 	= -0.000
<ψ9,ψ13> 	= -0.000
<ψ9,ψ14> 	= 0.000
<ψ9,ψ15> 	= -0.000
<ψ10,ψ10> 	= 1.000
<ψ10,ψ11> 	= -0.000
<ψ10,ψ12> 	= 0.000
<ψ10,ψ13> 	= -0.000
<ψ10,ψ14> 	= -0.000
<ψ10,ψ15> 	= -0.000
<ψ11,ψ11> 	= 1.000
<ψ11,ψ12> 	= -0.000
<ψ11,ψ13> 	= 0.000
<ψ11,ψ14> 	= 0.000
<ψ11,ψ15> 	= -0.000
<ψ12,ψ12> 	= 1.000
<ψ12,ψ13> 	= -0.000
<ψ12,ψ14> 	= -0.000
<ψ12,ψ15> 	= -0.000
<ψ13,ψ13> 	= 1.000
<ψ13,ψ14> 	= -0.000
<ψ13,ψ15> 	= 0.000
<ψ14,ψ14> 	= 1.000
<ψ14,ψ15> 	= 0.000
<ψ15,ψ15> 	= 1.000

Dealing with a Mix of Random Variables#

Above we had either only uniform or only normal random variables. If you have a mix, you have two choices:

  • Map all random variables to uniform and use the recipe above. You can do this as follows. Take \(\Xi_i\) following a distribution with CDF \(F_i\). Then \(\Xi_i = F_i^{-1}(U_i)\), where \(U_i\) is a uniform random variable in \([0,1]\). Then work with \(U_i\) (rescaled to \([-1,1]\) if you want to use the Legendre polynomials); see the sketch after this list.

  • Use a sparse grid quadrature rule that is based on a tensor product of nested quadrature rules, one for each random variable. I have not implemented this due to lack of time, but it is implemented in the package chaospy. The only problem with chaospy is that it is not written in JAX.
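
Here is a hedged sketch of the first option, using scipy.stats for the inverse CDFs (scipy and the specific distributions are assumptions for illustration; any implementation of \(F_i^{-1}\) works):

from scipy import stats

# Say Xi_1 ~ Exponential(1) and Xi_2 ~ Beta(2, 5). We work with U uniform on
# [-1, 1] (the Legendre setting above) and map back to the original variables
# whenever we need to evaluate the model.
def to_xi(u):
    p = 0.5 * (u + 1.0)                    # rescale [-1, 1] -> [0, 1]
    xi1 = stats.expon.ppf(p[..., 0])       # F_1^{-1}
    xi2 = stats.beta(2, 5).ppf(p[..., 1])  # F_2^{-1}
    return np.stack([xi1, xi2], axis=-1)

u = np.random.uniform(-1.0, 1.0, size=(5, 2))
print(to_xi(u))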

Dealing with Random Variables that are Not Independent#

If you have a multi-variate normal:

\[ \Xi \sim N(\mu, \Sigma), \]

you can use the fact that:

\[ \Xi = \mu + L Z, \]

where

\[ \Sigma = L L^T, \]

and \(Z\) is a vector of independent standard normal random variables. Furthermore, you can go all the way to uniform random variables by using the inverse CDF of the standard normal distribution. That is, we can write:

\[ \Xi = \mu + L \Phi^{-1}(U), \]

where \(\Phi\) is the standard normal CDF and \(U\) is a vector of independent uniform random variables in \([0,1]\).
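
Here is a minimal sketch of this reduction (the values of \(\mu\) and \(\Sigma\) are made up for illustration):

from scipy import stats

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
L = np.linalg.cholesky(Sigma)  # Sigma = L L^T

# Either work directly with Z ~ N(0, I) (Hermite chaos), ...
Z = np.random.randn(5, 2)
Xi = mu + Z @ L.T

# ... or go all the way to uniforms via the standard normal inverse CDF.
U = np.random.uniform(0.0, 1.0, size=(5, 2))
Xi_from_U = mu + stats.norm.ppf(U) @ L.T
print(Xi)
print(Xi_from_U)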

For a more general distribution, you have to use the Rosenblatt transformation. More about it here.