import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks");
Quantifying Epistemic Uncertainty in Monte Carlo Estimates
We now show how to quantify the epistemic uncertainty of Monte Carlo estimates using the central limit theorem (CLT). Remember that we are working with an expectation of the form:

$$
I = \mathbb{E}[g(X)] = \int g(x) p(x)\,dx,
$$
where \(X\sim p(x)\) and \(g(x)\) is a function of \(x\). Our sampling-based approximation starts by taking \(X_1, X_2,\dots\) to be independent copies of \(X\). Then, it uses the random variables \(Y_1 = g(X_1), Y_2 = g(X_2), \dots\), which are also independent and identically distributed. Invoking the strong law of large numbers, we saw that the sampling average of the \(Y_i\)'s converges to their mean:

$$
\bar{I}_N = \frac{1}{N}\sum_{i=1}^N Y_i = \frac{1}{N}\sum_{i=1}^N g(X_i) \rightarrow I,
$$

as \(N\rightarrow\infty\).
Note that the variables \(Y_i = g(X_i)\) are independent and identically distributed with mean:

$$
\mathbb{E}[Y_i] = \mathbb{E}[g(X_i)] = \mathbb{E}[g(X)] = I.
$$
Assume that their variance is finite, i.e.,

$$
\sigma^2 = \mathbb{V}[Y_i] = \mathbb{V}[g(X)] < +\infty.
$$
Yes, a random variable can have infinite variance, and the CLT would not work in that case. If the variance of the \(Y_i\)'s is indeed finite, the CLT applies to them, and you get that their sampling average \(\bar{I}_N\) becomes approximately normally distributed for large \(N\), i.e.,

$$
\bar{I}_N \sim N\left(I, \frac{\sigma^2}{N}\right)
$$
for large \(N\). Now, we may rewrite this equation as follows:

$$
\bar{I}_N = I + \frac{\sigma}{\sqrt{N}}Z,
$$
where \(Z\sim N(0,1)\) is a standard normal random variable, recall Lecture 4. It is like saying that \(\bar{I}_N\) is \(I\) plus some zero-mean noise with a given variance. But it is not ad hoc; this is precisely what the CLT says. Now take this equation and solve for \(I\):

$$
I = \bar{I}_N - \frac{\sigma}{\sqrt{N}}Z.
$$
This says that the actual value of the expectation \(I\) is \(\bar{I}_N\) minus some zero-mean noise with a given variance. Going back to distributions:

$$
I \sim N\left(\bar{I}_N, \frac{\sigma^2}{N}\right),
$$
where the minus sign disappears because \(Z\) and \(-Z\) have the same distribution (standard normal). This is the expression we are after, except that we need to know what \(\sigma^2\) is. Well, let's approximate it with a sampling average as well! We did this already in Lecture 8. Set:

$$
\bar{\sigma}_N^2 = \frac{1}{N}\sum_{i=1}^N g^2(X_i) - \bar{I}_N^2.
$$
Now we can say that:

$$
I \sim N\left(\bar{I}_N, \frac{\bar{\sigma}_N^2}{N}\right).
$$
Keep in mind that this approximation is only valid for large \(N\).
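As a quick sanity check, we can repeat the Monte Carlo estimate many times and compare the spread of the estimates across runs to the CLT prediction \(\sigma/\sqrt{N}\). The following is a minimal sketch, not part of the lecture code; it uses the same \(g\) and uniform \(X\) as the example below, and the names g_check, N_check, and num_runs are introduced just for this sketch.

# A sketch (not part of the lecture code): check the CLT approximation by
# repeating the Monte Carlo estimate many times and comparing the empirical
# standard deviation of the estimates to the CLT prediction sigma / sqrt(N).
import numpy as np

g_check = lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2  # same g as below
N_check = 100       # samples per Monte Carlo estimate
num_runs = 1000     # independent Monte Carlo runs
estimates = np.array(
    [np.mean(g_check(np.random.rand(N_check))) for _ in range(num_runs)]
)
# Estimate sigma once with a large sample and compare
sigma_hat = np.std(g_check(np.random.rand(100000)))
print("std of the MC estimates over runs:", np.std(estimates))
print("CLT prediction sigma / sqrt(N):   ", sigma_hat / np.sqrt(N_check))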
It is also possible to get a predictive interval. We can write something like:

$$
I \in \left[\bar{I}_N - 2\frac{\bar{\sigma}_N}{\sqrt{N}}, \bar{I}_N + 2\frac{\bar{\sigma}_N}{\sqrt{N}}\right],
$$
with (about) \(95\%\) probability, since approximately \(95\%\) of the probability mass of a normal distribution lies within two standard deviations of its mean.
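For instance, here is a minimal sketch of this interval for a single run with a fixed \(N\); it is not part of the lecture code, the names g_sketch, N_sketch, I_N, sigma2_N, and half_width are introduced just for illustration, and the full example below does the same thing as a running estimate for every \(N\).

# A sketch: the (about) 95% predictive interval for I from a single MC run.
import numpy as np

g_sketch = lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2
N_sketch = 100
y = g_sketch(np.random.rand(N_sketch))
I_N = np.mean(y)                          # sample average
sigma2_N = np.mean(y ** 2) - I_N ** 2     # plug-in variance estimate
half_width = 2.0 * np.sqrt(sigma2_N / N_sketch)
print(f"I is in [{I_N - half_width:.3f}, {I_N + half_width:.3f}] with about 95% probability")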
Alright, let’s see this in practice.
# The function of x we would like to consider
g = lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2
# Number of samples to take
N = 100
# Generate samples from X
x_samples = np.random.rand(N)
# Get the corresponding Y's
y_samples = g(x_samples)
# Evaluate the sample average for all sample sizes
I_running = np.cumsum(y_samples) / np.arange(1, N + 1)
# Evaluate the running sample average of Y squared
g2_running = np.cumsum(y_samples ** 2) / np.arange(1, N + 1)
# Running estimate of the variance of the Y's
sigma2_running = g2_running - I_running ** 2
# Alright, now we have quantified our uncertainty about I for every N
# from a single MC run. Let's plot an (approximately) 95% predictive interval
# Running lower bound for the predictive interval
I_lower_running = (
    I_running - 2.0 * np.sqrt(sigma2_running / np.arange(1, N + 1))
)
# Running upper bound for the predictive interval
I_upper_running = (
    I_running + 2.0 * np.sqrt(sigma2_running / np.arange(1, N + 1))
)
# A common plot for all estimates
fig, ax = plt.subplots()
# Shaded area for the interval
ax.fill_between(
    np.arange(1, N + 1),
    I_lower_running,
    I_upper_running,
    alpha=0.25
)
# Here is the MC estimate:
ax.plot(np.arange(1, N+1), I_running, 'b', lw=2)
# The true value
ax.plot(np.arange(1, N+1), [0.965] * N, color='r')
# and the labels
ax.set_xlabel('$N$')
ax.set_ylabel(r'$\bar{I}_N$')
sns.despine(trim=True);
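If you want to double-check the red line, here is a sketch that computes the integral by numerical quadrature. It is not part of the lecture code; it assumes \(X\sim U([0,1])\), which is what np.random.rand samples from, so that \(I = \int_0^1 g(x)\,dx\).

# A sketch (not part of the lecture code): verify the "true value" of I
# by quadrature, assuming X is uniform on [0, 1].
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2, 0.0, 1.0)
print(val)  # approximately 0.965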
Questions

Increase N until you get an answer close enough to the correct answer (the red line). Notice how the epistemic error bars shrink around the actual value.