Quantifying Epistemic Uncertainty in Monte Carlo Estimates

MAKE_BOOK_FIGURES = True

import numpy as np
import scipy.stats as st
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('svg')
import seaborn as sns
sns.set_context("paper")
sns.set_style("ticks")

def set_book_style():
    plt.style.use('seaborn-v0_8-white')
    sns.set_style("ticks")
    sns.set_palette("deep")
    mpl.rcParams.update({
        # Font settings
        'font.family': 'serif',   # For academic publishing
        'font.size': 8,
        'axes.labelsize': 8,
        'axes.titlesize': 8,
        'xtick.labelsize': 7,     # Slightly smaller for better readability
        'ytick.labelsize': 7,
        'legend.fontsize': 7,
        # Line and marker settings for consistency
        'axes.linewidth': 0.5,
        'grid.linewidth': 0.5,
        'lines.linewidth': 1.0,
        'lines.markersize': 4,
        # Layout to prevent clipped labels
        'figure.constrained_layout.use': True,
        # Default DPI (will override when saving)
        'figure.dpi': 600,
        'savefig.dpi': 600,
        # Despine - remove top and right spines
        'axes.spines.top': False,
        'axes.spines.right': False,
        # Remove legend frame
        'legend.frameon': False,
        # Additional trim settings
        'figure.autolayout': True,   # Alternative to constrained_layout
        'savefig.bbox': 'tight',     # Trim when saving
        'savefig.pad_inches': 0.1    # Small padding to ensure nothing gets cut off
    })

def set_notebook_style():
    plt.style.use('seaborn-v0_8-white')
    sns.set_style("ticks")
    sns.set_palette("deep")
    mpl.rcParams.update({
        # Font settings - using default sizes
        'font.family': 'serif',
        'axes.labelsize': 10,
        'axes.titlesize': 10,
        'xtick.labelsize': 9,
        'ytick.labelsize': 9,
        'legend.fontsize': 9,
        # Line and marker settings
        'axes.linewidth': 0.5,
        'grid.linewidth': 0.5,
        'lines.linewidth': 1.0,
        'lines.markersize': 4,
        # Layout settings
        'figure.constrained_layout.use': True,
        # Remove only top and right spines
        'axes.spines.top': False,
        'axes.spines.right': False,
        # Remove legend frame
        'legend.frameon': False,
        # Additional settings
        'figure.autolayout': True,
        'savefig.bbox': 'tight',
        'savefig.pad_inches': 0.1
    })

def save_for_book(fig, filename, is_vector=True, **kwargs):
    """
    Save a figure with book-optimized settings.

    Parameters:
    -----------
    fig : matplotlib figure
        The figure to save
    filename : str
        Filename without extension
    is_vector : bool
        If True, saves as vector at 1000 dpi. If False, saves as raster at 600 dpi.
    **kwargs : dict
        Additional kwargs to pass to savefig
    """
    # Set appropriate DPI and format based on figure type
    if is_vector:
        dpi = 1000
        ext = '.pdf'
    else:
        dpi = 600
        ext = '.tif'
    # Save the figure with book settings
    fig.savefig(f"{filename}{ext}", dpi=dpi, **kwargs)

def make_full_width_fig():
    return plt.subplots(figsize=(4.7, 2.9), constrained_layout=True)

def make_half_width_fig():
    return plt.subplots(figsize=(2.35, 1.45), constrained_layout=True)

if MAKE_BOOK_FIGURES:
    set_book_style()
else:
    set_notebook_style()

make_full_width_fig = make_full_width_fig if MAKE_BOOK_FIGURES else lambda: plt.subplots()
make_half_width_fig = make_half_width_fig if MAKE_BOOK_FIGURES else lambda: plt.subplots()

We now show how to quantify the epistemic uncertainty of Monte Carlo estimates using the central limit theorem (CLT). Remember that we are working with an expectation of the form:

\[ I = \mathbb{E}[g(X)]=\int g(x) p(x) dx, \]

where \(X\sim p(x)\) and \(g(x)\) is a function of \(x\). Our sampling-based approximation starts by taking \(X_1, X_2,\dots\) to be independent copies of \(X\). Then it uses the random variables \(Y_1 = g(X_1), Y_2 = g(X_2), \dots\), which are also independent and identically distributed. Invoking the strong law of large numbers, we saw that the sample average of the \(Y_i\)'s converges to their mean:

\[ \bar{I}_N=\frac{g(X_1)+\dots+g(X_N)}{N}=\frac{Y_1+\dots+Y_N}{N}\rightarrow I,\;\text{a.s.} \]
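Before we quantify the error in this approximation, here is a minimal sketch of the convergence itself. The choice \(g(x) = x^2\) with \(X \sim U(0,1)\) is just a toy example for illustration (not the example we use below); its exact value is \(\mathbb{E}[X^2] = 1/3\).

import numpy as np

# Toy illustration of the law of large numbers (not the lecture's example):
# estimate E[X^2] for X ~ U(0, 1); the exact value is 1/3.
g_toy = lambda x: x ** 2

for N in [10, 100, 1000, 10000]:
    x = np.random.rand(N)         # independent copies of X
    I_bar = g_toy(x).mean()       # sample average of Y_i = g(X_i)
    print(f"N = {N:6d}: I_bar = {I_bar:.4f} (exact = {1 / 3:.4f})")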

Note that the variables \(Y_i = g(X_i)\) are independent and identically distributed with mean:

\[ \mathbb{E}[Y_i] = \mathbb{E}[g(X_i)] = I. \]

Assume that their variance is finite, i.e.,

\[ \mathbb{V}[Y_i] = \sigma^2 < +\infty. \]

Yes, a random variable can have an infinite variance, and the CLT would not work in that case. If the variance of the \(Y_i\)'s is indeed finite, the CLT applies to them, and their sample average \(\bar{I}_N\) becomes approximately normally distributed for large \(N\), i.e.,

\[ \bar{I}_N \sim N\left(I, \frac{\sigma^2}{N}\right), \]

for large \(N\). Now, we may rewrite this equation as follows:

\[ \bar{I}_N = I + \frac{\sigma}{\sqrt{N}}Z, \]

where \(Z\sim N(0,1)\) is a standard normal; recall Lecture 4. It is like saying that \(\bar{I}_N\) is \(I\) plus some zero-mean noise with a given variance. But it is not ad hoc; this is precisely what the CLT says. Now take this equation and solve for \(I\):

\[ I = \bar{I}_N - \frac{\sigma}{\sqrt{N}}Z. \]

This says that the actual value of the expectation \(I\) is \(\bar{I}_N\) minus some zero-mean noise with a given variance. Going back to distributions:

\[ I \sim N\left(\bar{I}_N, \frac{\sigma^2}{N}\right), \]

where the minus sign disappears because \(Z\) and \(-Z\) have the same distribution (standard normal). This is the expression we are after, except that we need to know what \(\sigma^2\) is. Well, let's approximate it with a sample average too! We did this already in Lecture 8: since \(\sigma^2 = \mathbb{V}[g(X)] = \mathbb{E}[g^2(X)] - I^2\), we replace both expectations with sample averages and set:

\[ \bar{\sigma}_N^2 = \frac{1}{N}\sum_{j=1}^Ng^2(X_j) - \bar{I}_N^2. \]

Now we can say that:

\[ I \sim N\left(\bar{I}_N, \frac{\bar{\sigma}^2_N}{N}\right). \]

Keep in mind that this is only valid for large \(N\).
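Here is a quick numerical check of this statement, again with the toy choice \(g(x) = x^2\) and \(X \sim U(0,1)\) (an assumption for illustration, not the example below), for which \(I = 1/3\) and \(\sigma^2 = \mathbb{E}[X^4] - I^2 = 1/5 - 1/9\). Repeating the whole Monte Carlo estimate many times with a fixed \(N\), the spread of the estimates should match \(\sigma/\sqrt{N}\):

import numpy as np

# Sketch: check the CLT error model sigma / sqrt(N) on a toy example.
g_toy = lambda x: x ** 2
sigma = np.sqrt(1.0 / 5.0 - 1.0 / 9.0)   # exact std of g_toy(X) for X ~ U(0, 1)

N = 1000    # samples per Monte Carlo estimate
M = 2000    # independent repetitions of the whole estimate

estimates = np.array([g_toy(np.random.rand(N)).mean() for _ in range(M)])

print(f"std of I_bar_N over repetitions: {estimates.std():.5f}")
print(f"CLT prediction sigma / sqrt(N):  {sigma / np.sqrt(N):.5f}")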

It is also possible to get a predictive interval. We can write something like:

\[ I \approx \bar{I}_N \pm \frac{2}{\sqrt{N}}\bar{\sigma}_N, \]

with (about) \(95\%\) probability; the factor of \(2\) is just a round version of \(1.96\), the \(97.5\%\) quantile of the standard normal.
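Putting the pieces together, a single Monte Carlo run gives you both the estimate and its error bar. Here is a minimal sketch; the name mc_estimate and its interface are just for illustration, not something we use elsewhere.

import numpy as np

def mc_estimate(g, sample_x, N):
    """Monte Carlo estimate of E[g(X)] with an approximate 95% predictive interval.

    g        : the function whose expectation we want
    sample_x : callable returning n i.i.d. samples of X, e.g., np.random.rand
    N        : number of samples
    """
    y = g(sample_x(N))
    I_bar = y.mean()
    sigma_bar = y.std()   # same as sqrt(mean(y**2) - I_bar**2), the estimator in the text
    half_width = 2.0 * sigma_bar / np.sqrt(N)
    return I_bar, I_bar - half_width, I_bar + half_width

# Toy usage: estimate E[X^2] for X ~ U(0, 1); the exact value is 1/3.
print(mc_estimate(lambda x: x ** 2, np.random.rand, 10000))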

Alright, let’s see this in practice.

# The function of x we would like to consider
g = lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2

# Number of samples to take
N = 100

# Generate samples from X
x_samples = np.random.rand(N)

# Get the corresponding Y's
y_samples = g(x_samples)

# Evaluate the sample average for all sample sizes 
I_running = np.cumsum(y_samples) / np.arange(1, N + 1)

# Evaluate the sample average of the square of Y
g2_running = np.cumsum(y_samples ** 2) / np.arange(1, N + 1)

# Evaluate the running estimate of the variance
sigma2_running = g2_running - I_running ** 2

# Alright, now we have quantified our uncertainty about I for every N
# from a single MC run. Let's plot an (approximately) 95% predictive interval
# Running lower bound for the predictive interval
I_lower_running = (
    I_running - 2.0 * np.sqrt(sigma2_running / np.arange(1, N + 1))
)

# Running upper bound for the predictive interval
I_upper_running = (
    I_running + 2.0 * np.sqrt(sigma2_running / np.arange(1, N + 1))
)

# A common plot for all estimates
fig, ax = plt.subplots()
# Shaded area for the interval
ax.fill_between(
    np.arange(1, N + 1),
    I_lower_running,
    I_upper_running,
    alpha=0.25
)
# Here is the MC estimate:
ax.plot(np.arange(1, N+1), I_running, 'b', lw=2)
# The true value
ax.plot(np.arange(1, N+1), [0.965] * N, color='r')
# and the labels
ax.set_xlabel('$N$')
ax.set_ylabel(r'$\bar{I}_N$')
sns.despine(trim=True);
(Figure: the running Monte Carlo estimate \(\bar{I}_N\) (blue) with the approximate 95% predictive interval (shaded) and the true value \(I \approx 0.965\) (red), as a function of \(N\).)
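If you are wondering where the red line comes from: since \(X \sim U(0,1)\) here, the true value is \(I = \int_0^1 \left(\cos(50x) + \sin(20x)\right)^2\,dx\). A quick sketch using numerical quadrature (scipy.integrate.quad) confirms the value of about 0.965 used above:

import numpy as np
from scipy.integrate import quad

# Check the true value of I = E[g(X)] for X ~ U(0, 1) by numerical quadrature.
g = lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2
I_true, quad_err = quad(g, 0.0, 1.0, limit=200)
print(f"I = {I_true:.4f} (quadrature error estimate: {quad_err:.1e})")
# This should print a value close to 0.965, the red line in the figure above.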

Questions

  • Increase \(N\) until you get an answer close enough to the correct answer (the red line). Notice how the epistemic error bars shrink around the actual value. (A complementary coverage check is sketched right after this question.)
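A complementary sanity check of the error bars themselves (a sketch, not part of the original exercise): repeat the whole experiment many times and count how often the approximate 95% interval actually contains the true value. For reasonably large \(N\), the empirical coverage should be close to 0.95.

import numpy as np

# Sketch: empirical coverage of the ~95% predictive interval.
g = lambda x: (np.cos(50 * x) + np.sin(20 * x)) ** 2
I_true = 0.965   # true value (the red line above)
N = 100          # samples per Monte Carlo run
M = 1000         # number of independent runs

covered = 0
for _ in range(M):
    y = g(np.random.rand(N))
    I_bar = y.mean()
    half_width = 2.0 * y.std() / np.sqrt(N)
    covered += (I_bar - half_width <= I_true <= I_bar + half_width)

print(f"Empirical coverage: {covered / M:.3f} (nominal: about 0.95)")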