Bayesian Linear Regression#
Probabilistic regression I (maximum likelihood)#
I will show you how to derive least squares from the maximum likelihood principle. Recall that the maximum likelihood principle states that you should pick the model parameters that maximize the probability of the data conditioned on the parameters.
Just like before assume that we have \(n\) observations of inputs \(\mathbf{x}_{1:n}\) and outputs \(\mathbf{y}_{1:n}\). We model the map between inputs and outputs using a generalized linear model with \(m\) basis functions:
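In the notation we have been using, such a model takes the form

\[
y \approx f(\mathbf{x}; \mathbf{w}) = \sum_{j=1}^{m} w_j \phi_j(\mathbf{x}) = \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}),
\]

where \(\boldsymbol{\phi}(\mathbf{x}) = \left(\phi_1(\mathbf{x}), \dots, \phi_m(\mathbf{x})\right)\) collects the basis functions and \(\mathbf{w}\) is the vector of \(m\) weights.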
Now, here is the difference from what we did before. Instead of directly picking a loss function to minimize, we develop a probabilistic description of the measurement process. In particular, we model the measurement process using a likelihood function, \(p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w})\).
What is the interpretation of the likelihood function? Well, \(p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w})\) tells us how plausible it is to observe \(\mathbf{y}_{1:n}\) at the inputs \(\mathbf{x}_{1:n}\), if we know that the model parameters are \(\mathbf{w}\).
The most common choice for the likelihood of a single measurement is to pick it to be Gaussian with mean around the model prediction \(\mathbf{w}^T\boldsymbol{\phi}(\mathbf{x})\) and noise variance \(\sigma^2\). Mathematically, we have:
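For a single measurement \(y\) at input \(\mathbf{x}\), this is

\[
p(y | \mathbf{x}, \mathbf{w}, \sigma) = N\left(y \,\middle|\, \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}), \sigma^2\right),
\]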
where \(\sigma^2\) models the variance of the measurement noise. Note that here I used the notation \(N(y|\mu,\sigma^2)\) to denote the PDF of a Normal with mean \(\mu\) and variance \(\sigma^2\), i.e.,
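This is the familiar Gaussian density

\[
N(y|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(y-\mu)^2}{2\sigma^2}\right\}.
\]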
Since in almost all the cases we encounter, the measurements are independent when conditioned on the model, the likelihood of the data factorizes as follows:
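Under this conditional independence assumption, the joint likelihood is a product of one-dimensional Gaussians, which we can also write as a single multivariate Gaussian:

\[
p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w}, \sigma) = \prod_{i=1}^{n} N\left(y_i \,\middle|\, \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i), \sigma^2\right) = N\left(\mathbf{y}_{1:n} \,\middle|\, \boldsymbol{\Phi}\mathbf{w}, \sigma^2\mathbf{I}_n\right),
\]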
where \(\boldsymbol{\Phi}\) is the \(n\times m\) design matrix with entries \(\Phi_{ij} = \phi_j(\mathbf{x}_i)\).
We can apply the maximum likelihood principle to find all the parameters: the weight vector \(\mathbf{w}\) and the measurement variance \(\sigma^2\). We need to solve the following optimization problem:
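Equivalently, working with the logarithm of the likelihood (which has the same maximizer), the problem is

\[
\max_{\mathbf{w},\sigma^2} \log p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w}, \sigma)
= \max_{\mathbf{w},\sigma^2}\left\{ -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i)\right)^2 \right\}.
\]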
Notice that the rightmost term is proportional to the negative of the sum of square errors. So, by maximizing the likelihood with respect to \(\mathbf{w}\), we are minimizing the sum of square errors. In other words, the maximum likelihood and the least-squares weights are identical! We do not even have to do anything further. The weights should satisfy this linear system:
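In matrix form, these are the usual normal equations:

\[
\boldsymbol{\Phi}^T\boldsymbol{\Phi}\,\mathbf{w} = \boldsymbol{\Phi}^T\mathbf{y}_{1:n}.
\]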
This result is reassuring. The probabilistic interpretation above gives the same solution as least squares! But there is more. Notice that it can also give us an estimate for the measurement noise variance \(\sigma^2\). All you have to do is maximize the likelihood with respect to \(\sigma^2\). If we take the derivative of the log-likelihood with respect to \(\sigma^2\), set it equal to zero, and solve for \(\sigma^2\), we get:
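The result is the mean of the squared residuals, evaluated at the maximum likelihood weights:

\[
\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}_i)\right)^2 = \frac{1}{n}\left\|\mathbf{y}_{1:n} - \boldsymbol{\Phi}\mathbf{w}\right\|^2.
\]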
Finally, you can incorporate this measurement uncertainty when you are making predictions. We do this through the point predictive distribution, which is Gaussian in our case:
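For a new input \(\mathbf{x}\), with \(\mathbf{w}\) and \(\sigma^2\) set to their maximum likelihood estimates, it is

\[
p(y | \mathbf{x}, \mathbf{w}, \sigma) = N\left(y \,\middle|\, \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x}), \sigma^2\right).
\]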
In other words, your prediction about the measured output \(y\) is that it is Normally distributed around your model prediction with a variance \(\sigma^2\). You can use this to find a 95% credible interval.
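Here is a minimal numpy sketch of the recipe above. The polynomial basis, the synthetic data, and the variable names are illustrative assumptions, not part of the lecture:

```python
import numpy as np

np.random.seed(0)

# Synthetic data and a polynomial design matrix (illustrative only).
n, m = 20, 3
x = np.random.uniform(-1.0, 1.0, n)
y = 0.5 - x + 2.0 * x ** 2 + 0.1 * np.random.randn(n)
Phi = np.vander(x, m, increasing=True)   # n x m design matrix

# Maximum likelihood weights = least-squares solution of Phi w ~ y.
w_ml, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Maximum likelihood estimate of the noise variance (mean squared residual).
sigma2_ml = np.mean((y - Phi @ w_ml) ** 2)

# Point predictive distribution and 95% credible interval at a new input.
x_star = 0.3
phi_star = np.vander([x_star], m, increasing=True).ravel()
y_mean = phi_star @ w_ml
y_std = np.sqrt(sigma2_ml)
print(y_mean - 1.96 * y_std, y_mean + 1.96 * y_std)
```

The interval uses the 0.975 quantile of the standard Normal (about 1.96).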
Examples#
See this example.
Probabilistic regression II (maximum a posteriori estimates)#
This version of probabilistic regression is similar to maximum likelihood in that you maximize the logarithm of a probability density (the posterior instead of the likelihood), but it has the added benefit of helping you avoid overfitting.
Just like before, we wish to model the data using some fixed basis functions/features, \(y \approx \mathbf{w}^T\boldsymbol{\phi}(\mathbf{x})\).
Again, we model the measurement process using a likelihood function, \(p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w}, \sigma)\).
The new ingredient is that we model the uncertainty in the model parameters using a prior probability density, \(p(\mathbf{w})\).
Gaussian Prior on the Weights#
The Gaussian prior is the most straightforward possible choice for the weights. It is:
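The standard choice, consistent with the interpretation below, is a zero-mean isotropic Gaussian with precision \(\alpha\):

\[
p(\mathbf{w} | \alpha) = N\left(\mathbf{w} \,\middle|\, \mathbf{0}, \alpha^{-1}\mathbf{I}_m\right) \propto \exp\left\{-\frac{\alpha}{2}\|\mathbf{w}\|^2\right\}.
\]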
The interpretation is that, before we see the data, we believe that \(\mathbf{w}\) must be around zero with a precision of \(\alpha\). This tendency to push the weights toward zero helps us avoid overfitting. The bigger the precision parameter \(\alpha\), the more strongly the weights are pushed toward zero.
Graphical representation of the model#
Let’s visualize the regression model as a graph. Remember that shaded nodes are assumed to be observed (so below, we assume that we know \(\alpha\) and \(\sigma\)). Also, notice that the nodes inside the box are repeated as many times as indicated. Recall that this is the plate notation for graphical models, and it saves us the trouble of drawing \(n\) input-output nodes.
from graphviz import Digraph

# Shaded (filled) nodes are observed or assumed known.
g = Digraph('bayes_regression')
g.node('alpha', label='<α>', style='filled')
g.node('w', label='<<b>w</b>>')
g.node('sigma', label='<σ>', style='filled')
# Plate: the input-output pair is repeated for j = 1, ..., n.
with g.subgraph(name='cluster_0') as sg:
    sg.node('xj', label='<<b>x</b><sub>j</sub>>', style='filled')
    sg.node('yj', label='<y<sub>j</sub>>', style='filled')
    sg.attr(label='j=1,...,n')
    sg.attr(labelloc='b')
g.edge('alpha', 'w')
g.edge('sigma', 'yj')
g.edge('w', 'yj')
g.edge('xj', 'yj')
g.render('bayes_regression', format='png')
g
The Posterior of the Weights#
Combining the likelihood and the prior using Bayes’ rule, we get the posterior of the weights:
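Up to a normalization constant that does not depend on \(\mathbf{w}\), this is

\[
p(\mathbf{w} | \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) = \frac{p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w}, \sigma)\,p(\mathbf{w} | \alpha)}{p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \sigma, \alpha)} \propto p(\mathbf{y}_{1:n} | \mathbf{x}_{1:n}, \mathbf{w}, \sigma)\,p(\mathbf{w} | \alpha).
\]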
The posterior summarizes our state of knowledge about \(\mathbf{w}\) after we see the data, if we know \(\alpha\) and \(\sigma\).
Maximum Posterior Estimate#
We can find a point estimate of \(\mathbf{w}\) by solving:
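That is, we maximize the logarithm of the posterior over the weights:

\[
\max_{\mathbf{w}} \log p(\mathbf{w} | \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha).
\]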
For Gaussian likelihood and weight prior, the logarithm of the posterior is:
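Dropping additive terms that do not depend on \(\mathbf{w}\):

\[
\log p(\mathbf{w} | \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) = -\frac{1}{2\sigma^2}\left\|\mathbf{y}_{1:n} - \boldsymbol{\Phi}\mathbf{w}\right\|^2 - \frac{\alpha}{2}\|\mathbf{w}\|^2 + \text{const}.
\]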
Taking derivatives with respect to \(\mathbf{w}\) and setting them equal to zero (necessary condition), we find:
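The result is a regularized version of the normal equations:

\[
\left(\sigma^{-2}\boldsymbol{\Phi}^T\boldsymbol{\Phi} + \alpha\mathbf{I}_m\right)\mathbf{w} = \sigma^{-2}\boldsymbol{\Phi}^T\mathbf{y}_{1:n}.
\]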
Unfortunately, we no longer have an analytic formula for \(\sigma\) (we will fix that later).
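Here is a minimal numpy sketch of the MAP computation. The data, basis, and the values of \(\alpha\) and \(\sigma\) are illustrative assumptions (in this version both are taken as known):

```python
import numpy as np

np.random.seed(0)

# Synthetic data and a polynomial design matrix (illustrative only).
n, m = 20, 3
x = np.random.uniform(-1.0, 1.0, n)
y = 0.5 - x + 2.0 * x ** 2 + 0.1 * np.random.randn(n)
Phi = np.vander(x, m, increasing=True)

alpha = 5.0   # assumed prior precision of the weights
sigma = 0.1   # assumed (known) noise standard deviation

# MAP weights: solve (Phi^T Phi / sigma^2 + alpha I) w = Phi^T y / sigma^2.
A = Phi.T @ Phi / sigma ** 2 + alpha * np.eye(m)
b = Phi.T @ y / sigma ** 2
w_map = np.linalg.solve(A, b)
print(w_map)
```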
Examples#
See this example.
Probabilistic regression III (Bayesian linear regression)#
This type of regression has the same setup as version II of probabilistic regression, but we do not settle for a point estimate of the weights. We retain the posterior of the weights in its full complexity. The benefit is that we can now quantify the epistemic uncertainty induced by the limited number of observations used to estimate the weights.
For Gaussian likelihood and weight prior, the posterior of the weights is Gaussian,

\[
p(\mathbf{w} | \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) = N\left(\mathbf{w} \,\middle|\, \mathbf{m}, \mathbf{S}\right),
\]

where the posterior covariance matrix is

\[
\mathbf{S} = \left(\sigma^{-2}\boldsymbol{\Phi}^T\boldsymbol{\Phi} + \alpha\mathbf{I}_m\right)^{-1},
\]

and the posterior mean is

\[
\mathbf{m} = \sigma^{-2}\mathbf{S}\boldsymbol{\Phi}^T\mathbf{y}_{1:n}.
\]
The posterior will not be analytically available in the general case of non-Gaussian likelihood (and non-linear models). We will learn how to deal with these cases in Lectures 27 and 28 when discussing generic ways to characterize posteriors.
Posterior Predictive Distribution#
Using probability theory, we ask: What do we know about \(y\) at a new \(\mathbf{x}\) after seeing the data? To answer this question, we use the sum rule:
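We integrate (marginalize) the point predictive distribution over the posterior of the weights:

\[
p(y | \mathbf{x}, \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) = \int p(y | \mathbf{x}, \mathbf{w}, \sigma)\,p(\mathbf{w} | \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha)\,d\mathbf{w}.
\]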
For the all-Gaussian case, this is analytically available:

\[
p(y | \mathbf{x}, \mathbf{x}_{1:n}, \mathbf{y}_{1:n}, \sigma, \alpha) = N\left(y \,\middle|\, \mathbf{m}^T\boldsymbol{\phi}(\mathbf{x}), \sigma^2(\mathbf{x})\right),
\]

where the predictive mean is \(\mathbf{m}^T\boldsymbol{\phi}(\mathbf{x})\) and the predictive variance is

\[
\sigma^2(\mathbf{x}) = \sigma^2 + \boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}\boldsymbol{\phi}(\mathbf{x}).
\]

Notice that the predictive uncertainty is the sum of two terms, where:

- \(\sigma^2\) corresponds to the measurement noise.
- \(\boldsymbol{\phi}(\mathbf{x})^T\mathbf{S}\boldsymbol{\phi}(\mathbf{x})\) is the epistemic uncertainty induced by limited data.
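To put the pieces together, here is a minimal numpy sketch that forms the posterior mean and covariance of the weights and then the predictive mean and variance at a new input. The data, basis, \(\alpha\), and \(\sigma\) are again illustrative assumptions:

```python
import numpy as np

np.random.seed(0)

# Synthetic data and a polynomial design matrix (illustrative only).
n, m = 20, 3
x = np.random.uniform(-1.0, 1.0, n)
y = 0.5 - x + 2.0 * x ** 2 + 0.1 * np.random.randn(n)
Phi = np.vander(x, m, increasing=True)

alpha = 5.0   # assumed prior precision
sigma = 0.1   # assumed noise standard deviation

# Posterior of the weights: N(w | m_post, S).
S = np.linalg.inv(Phi.T @ Phi / sigma ** 2 + alpha * np.eye(m))
m_post = S @ Phi.T @ y / sigma ** 2

# Posterior predictive at a new input x_star.
x_star = 0.3
phi_star = np.vander([x_star], m, increasing=True).ravel()
pred_mean = phi_star @ m_post
pred_var = sigma ** 2 + phi_star @ S @ phi_star   # noise + epistemic uncertainty
print(pred_mean, np.sqrt(pred_var))
```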
Examples#
See this example.