Functional Inputs to Scientific Models#
PDE Solvers as Operators#
Consider the elliptic PDE

$$
-\nabla \cdot \left(a(x) \nabla u(x)\right) = f(x), \quad x \in \Omega,
$$

with Dirichlet boundary conditions

$$
u(x) = g(x), \quad x \in \partial \Omega,
$$

where \(\Omega\) is a domain in \(\mathbb{R}^d\), \(a\) and \(f\) are scalar functions on \(\Omega\), and \(g\) is a scalar function on the boundary \(\partial \Omega\).
Let’s think a bit about the solver. What sort of object is it? Consider the space of scalar functions on \(\Omega\):

$$
\mathcal{U} = \{u : \Omega \to \mathbb{R}\}.
$$
Similarly, let \(\mathcal{A}\) and \(\mathcal{F}\) be the spaces of scalar functions on \(\Omega\) in which \(a\) and \(f\) live, and let \(\mathcal{G}\) be the space of scalar functions on \(\partial \Omega\) for \(g\). The solver is a map

$$
S : \mathcal{A} \times \mathcal{F} \times \mathcal{G} \to \mathcal{U}
$$
that takes the coefficients \(a\), \(f\), and \(g\) and returns the solution \(u\):

$$
u = S(a, f, g).
$$
We call maps with functional inputs and outputs operators. The solver is an operator that maps the coefficients to the solution.
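To make this concrete, here is a minimal sketch of such a solver in one dimension (\(d = 1\), \(\Omega = (0, 1)\)), using a simple finite-difference discretization. The name `solve`, the grid resolution `n`, and the discretization itself are illustrative choices rather than a particular library's API; the point is the signature: the inputs \(a\), \(f\), and \(g\) are functions, and so is the output (represented by its values on a grid).

```python
import numpy as np

def solve(a, f, g, n=100):
    """Sketch of a 1D solver for -(a u')' = f on (0, 1) with u = g at the endpoints.

    The inputs a, f, g are callables (vectorized over numpy arrays), and the
    output is the solution evaluated on a uniform grid: functions in, function out.
    """
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    # Coefficient evaluated at the midpoints of the grid cells.
    a_mid = a(0.5 * (x[:-1] + x[1:]))
    # Assemble the standard three-point finite-difference stencil.
    A = np.zeros((n - 1, n - 1))
    b = f(x[1:-1]) * h**2
    for i in range(n - 1):
        A[i, i] = a_mid[i] + a_mid[i + 1]
        if i > 0:
            A[i, i - 1] = -a_mid[i]
        if i < n - 2:
            A[i, i + 1] = -a_mid[i + 1]
    # Dirichlet boundary conditions enter through the right-hand side.
    b[0] += a_mid[0] * g(x[0])
    b[-1] += a_mid[-1] * g(x[-1])
    u = np.empty(n + 1)
    u[0], u[-1] = g(x[0]), g(x[-1])
    u[1:-1] = np.linalg.solve(A, b)
    return x, u
```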
Uncertainty in functional inputs#
We may be uncertain about the functional inputs to a scientific model. For example, in a heat conduction problem the coefficient \(a\) is the thermal conductivity \(k\), and we may not know its exact values. We can model this uncertainty by considering \(k\) as a random field. Similarly for \(f\) and \(g\).
The most common choice of random field is a Gaussian process. Now, the thermal conductivity \(k\) is positive, so we can model it as a log-Gaussian process. We say:

$$
k(x) = \exp\left(h(x)\right),
$$

and we can set

$$
h \sim \operatorname{GP}(m, c),
$$

where \(m\) is a mean function and \(c\) is a covariance function. Recall that you can use the mean function to encode information about trends and the covariance function to encode information about smoothness.
Uncertainty propagation with Monte Carlo#
Recall that you can sample Gaussian random fields wherever you like. So, you can sample \(h\) at a set of points \(\{x_i\}_{i=1}^n\). Then, you can compute the thermal conductivity at these points:

$$
k(x_i) = \exp\left(h(x_i)\right), \quad i = 1, \dots, n,
$$

where the function values:

$$
\mathbf{h} = \left(h(x_1), \dots, h(x_n)\right)
$$

are drawn from a multivariate normal distribution:

$$
\mathbf{h} \sim N(\mathbf{m}, \mathbf{C}),
$$

with

$$
m_i = m(x_i)
$$

and

$$
C_{ij} = c(x_i, x_j).
$$
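Here is a minimal sketch of this sampling step, using the Cholesky factorization of the covariance matrix. The zero mean function, the squared-exponential covariance with length scale 0.1, and the name `sample_conductivity` are illustrative assumptions, not prescribed by the model above.

```python
import numpy as np

def sample_conductivity(x, n_samples=1, ell=0.1, jitter=1e-8):
    """Draw samples of k(x) = exp(h(x)) with h ~ GP(m, c) at the points x."""
    m_vec = np.zeros_like(x)                      # mean vector, m_i = m(x_i)
    # Covariance matrix, C_ij = c(x_i, x_j), with a squared-exponential kernel.
    C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2)
    C += jitter * np.eye(x.size)                  # jitter for numerical stability
    L = np.linalg.cholesky(C)
    z = np.random.randn(x.size, n_samples)
    h = m_vec[:, None] + L @ z                    # h ~ N(m, C)
    return np.exp(h)                              # k = exp(h), one column per sample
```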
If you pick the points \(\{x_i\}_{i=1}^n\) to be the nodes of a finite element mesh, then you can feed the sampled conductivity to the solver and compute the solution \(u\) at these nodes. The finite element basis functions interpolate the solution to the entire domain. You can repeat this process many times to get a Monte Carlo estimate of the statistics of the solution.
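Putting the pieces together, a Monte Carlo loop might look like the following sketch, which reuses the hypothetical `solve` and `sample_conductivity` from above. The source term \(f \equiv 1\), the boundary data \(g \equiv 0\), and the sample size are illustrative choices, and the finite-difference grid stands in for the finite element mesh described above.

```python
import numpy as np

n_mc = 1000                      # number of Monte Carlo samples
x = np.linspace(0.0, 1.0, 101)   # grid nodes shared by the sampler and the solver
f = lambda s: np.ones_like(s)    # illustrative source term, f = 1
g = lambda s: 0.0 * s            # illustrative boundary data, g = 0

ks = sample_conductivity(x, n_samples=n_mc)   # one column per sampled field
solutions = []
for j in range(n_mc):
    # Interpolate the sampled field so the solver can evaluate it anywhere.
    a_j = lambda s, k=ks[:, j]: np.interp(s, x, k)
    _, u_j = solve(a_j, f, g, n=x.size - 1)
    solutions.append(u_j)
solutions = np.array(solutions)

u_mean = solutions.mean(axis=0)  # Monte Carlo estimate of E[u(x)]
u_var = solutions.var(axis=0)    # Monte Carlo estimate of Var[u(x)]
```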
The curse of dimensionality#
Unfortunately, you cannot simply build a surrogate model that takes you from \(\mathbf{h}\) (and the discretized versions of the other functional inputs) to the solution \(u\) directly. The input \(\mathbf{h}\) has as many dimensions as there are mesh nodes, and polynomial chaos, neural networks, and Gaussian processes all suffer from the curse of dimensionality: you would need an enormous number of samples to get a good surrogate model. In a later lesson, we will discuss how operator learning can help you overcome this issue. For now, we will discuss another strategy that relies on reducing the dimensionality of the inputs.