In this section, we will look at how we can infer the latent variables by exploiting the observed variables. We assume that we cannot directly measure the latent variables and instead can only measure the observations. Following the notation throughout this book, the latent variables are $z$ and the observed variables are $x$.
Our goal is to perform inference, i.e. we are interested in estimating some hidden state given some observations. We can define the posterior using Bayes rule,

$$
p(z \mid x) = \frac{p(x, z)}{p(x)} \tag{1}
$$
where the numerator is the joint distribution of the latent variable and the observation and the denominator is the marginal likelihood, or evidence, for the observations. The joint distribution can be easy to estimate because we can generally factor this quantity using conditional distributions, i.e.

$$
p(x, z) = p(x \mid z)\,p(z) \tag{2}
$$
However, the marginal likelihood needs to be calculated by integrating out the latent variables,

$$
p(x) = \int p(x, z)\,dz = \int p(x \mid z)\,p(z)\,dz \tag{3}
$$
This integral is generally intractable because it requires integrating over all possible configurations of the latent variables, which is rarely available in closed form. So we need to use alternative methods to estimate the posterior.
We can introduce a variational distribution, $q(z)$, from a family of possible distributions, $\mathcal{Q}$, whereby we pick the best candidate that fits the true posterior, $p(z \mid x)$. In general, we want a distribution that is easy to calculate, e.g. Gaussian, Bernoulli, etc., so that we can exploit conjugacy when calculating quantities within the loss function. We could also employ a parameterized variational distribution, $q_{\phi}(z)$, whose parameters $\phi$ we would need to find given the observations.
To measure the similarity between our approximate posterior and the true posterior, we will use an asymmetric divergence (not a true distance metric) called the Kullback-Leibler (KL) divergence. This is given by

$$
D_\mathrm{KL}\left[q(z)\,\|\,p(z \mid x)\right] = \mathbb{E}_{q(z)}\left[\log \frac{q(z)}{p(z \mid x)}\right] = \int q(z)\,\log\frac{q(z)}{p(z \mid x)}\,dz
$$
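To make the definition concrete, here is a minimal Python sketch (the two univariate Gaussians standing in for $q(z)$ and $p(z \mid x)$ are assumptions of the example) comparing a Monte Carlo estimate of the KLD, $\mathbb{E}_{q}[\log q - \log p]$, against its closed-form value.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical 1D Gaussians: q(z) = N(0, 1), p(z|x) = N(1, 2^2)
mu_q, sigma_q = 0.0, 1.0
mu_p, sigma_p = 1.0, 2.0

# Monte Carlo estimate: E_q[log q(z) - log p(z|x)]
z = rng.normal(mu_q, sigma_q, size=100_000)
kl_mc = np.mean(norm.logpdf(z, mu_q, sigma_q) - norm.logpdf(z, mu_p, sigma_p))

# Closed form for two univariate Gaussians
kl_exact = (np.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2) - 0.5)

print(kl_mc, kl_exact)  # the two estimates should agree closely
```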
In our case, we would like to find the best candidate distribution s.t. it minimizes the KL divergence,

$$
q^\star(z) = \operatorname*{arg\,min}_{q(z) \in \mathcal{Q}} \; D_\mathrm{KL}\left[q(z)\,\|\,p(z \mid x)\right]
$$
In the above equation, we don't have access to the true posterior, so we will use Bayes rule for the posterior (1) that we outlined earlier. We can plug the RHS of this equation into our KLD minimization problem to get

$$
q^\star(z) = \operatorname*{arg\,min}_{q(z) \in \mathcal{Q}} \; \mathbb{E}_{q(z)}\left[\log q(z) - \log p(x, z) + \log p(x)\right]
$$

The first term is the variational distribution, the middle term is the joint distribution, and the right term is the intractable marginal likelihood (3) that we referenced earlier.
Let's look at the KLD measure again with the posterior,

$$
D_\mathrm{KL}\left[q(z)\,\|\,p(z \mid x)\right] = \mathbb{E}_{q(z)}\left[\log \frac{q(z)}{p(z \mid x)}\right]
$$

First, we use the log rules to expand the ratio,

$$
= \mathbb{E}_{q(z)}\left[\log q(z) - \log p(z \mid x)\right]
$$
Now, let's plug in the RHS of the Bayes posterior outlined in (1),

$$
= \mathbb{E}_{q(z)}\left[\log q(z) - \log \frac{p(x, z)}{p(x)}\right]
$$

Again, we use the log rules to expand this term,

$$
= \mathbb{E}_{q(z)}\left[\log q(z) - \log p(x, z) + \log p(x)\right]
$$
Now, we isolate the marginal likelihood term (3) from the rest of the equation and we get

$$
= \mathbb{E}_{q(z)}\left[\log q(z) - \log p(x, z)\right] + \mathbb{E}_{q(z)}\left[\log p(x)\right]
$$

We can remove the expectation on the rightmost term because there is no dependency on the latent variable,

$$
D_\mathrm{KL}\left[q(z)\,\|\,p(z \mid x)\right] = \mathbb{E}_{q(z)}\left[\log q(z) - \log p(x, z)\right] + \log p(x) \tag{6}
$$
Looking at (6), we can rearrange this equation to isolate the expectation on the LHS of the equation. This gives us

$$
\underbrace{\mathbb{E}_{q(z)}\left[\log p(x, z) - \log q(z)\right]}_{\mathcal{L}_\mathrm{ELBO}} = \log p(x) - D_\mathrm{KL}\left[q(z)\,\|\,p(z \mid x)\right]
$$

where the LHS is known as the evidence lower bound (ELBO). This implies that maximizing this quantity simultaneously i) maximizes the evidence and ii) minimizes the KLD between our variational distribution and the true posterior.
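To see the bound in action, here is a minimal sketch with an assumed conjugate toy model, $p(z) = \mathcal{N}(0, 1)$ and $p(x \mid z) = \mathcal{N}(z, 1)$, where the evidence $p(x) = \mathcal{N}(0, 2)$ is available in closed form; a Monte Carlo estimate of the ELBO matches $\log p(x)$ when $q$ equals the true posterior and sits below it otherwise.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy conjugate model: p(z) = N(0, 1), p(x|z) = N(z, 1)  =>  p(x) = N(0, 2)
x = 1.5
log_evidence = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

def elbo(mu_q, sigma_q, n_samples=100_000):
    """Monte Carlo estimate of E_q[log p(x|z) + log p(z) - log q(z)]."""
    z = rng.normal(mu_q, sigma_q, size=n_samples)
    return np.mean(norm.logpdf(x, z, 1.0)            # log p(x|z)
                   + norm.logpdf(z, 0.0, 1.0)        # log p(z)
                   - norm.logpdf(z, mu_q, sigma_q))  # log q(z)

# The exact posterior is N(x/2, 1/2); the bound is tight there and loose elsewhere.
print(log_evidence, elbo(x / 2, np.sqrt(0.5)), elbo(0.0, 1.0))
```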
3 Perspectives of the ELBO¶
There are three main ways to look at the ELBO depending upon the literature and the application. The first is the likelihood perspective, the second is the flow perspective, and the last is the variational free energy perspective. In all three cases, we first need to unpack the ELBO by expanding the joint distribution via the factorization outlined in (2). This gives us

$$
\mathcal{L}_\mathrm{ELBO} = \mathbb{E}_{q(z)}\left[\log p(x \mid z) + \log p(z) - \log q(z)\right]
$$
Below, we outline each of the perspectives.
Data Fidelity + Prior¶
If we group the prior term and the variational distribution together, we get

$$
\mathcal{L}_\mathrm{ELBO} = \underbrace{\mathbb{E}_{q(z)}\left[\log p(x \mid z)\right]}_{\text{reconstruction}} - \underbrace{D_\mathrm{KL}\left[q(z)\,\|\,p(z)\right]}_{\text{regularization}}
$$
The first term is the reconstruction loss, which measures the expectation of the log-likelihood w.r.t. the variational distribution. The second term is the KL divergence between the prior and the variational distribution. This formulation is commonly found with latent variable models (LVMs) and variational autoencoders (VAEs) [Kingma & Welling, 2013].
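As a sketch of how this grouping is typically computed, assuming a diagonal Gaussian $q(z \mid x)$, a standard normal prior, and an isotropic Gaussian decoder with fixed noise (all the numbers below are hypothetical placeholders), the KL term has a closed form:

```python
import numpy as np

def kl_diag_gauss_to_std_normal(mu, log_var):
    """Closed-form KL[ N(mu, diag(exp(log_var))) || N(0, I) ], summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def gaussian_log_likelihood(x, x_mean, noise_var=0.1):
    """log p(x|z) under an isotropic Gaussian decoder with fixed noise."""
    d = x.size
    return -0.5 * (d * np.log(2 * np.pi * noise_var)
                   + np.sum((x - x_mean)**2) / noise_var)

# Hypothetical quantities: an observation, a decoded mean, and encoder outputs
x = np.array([0.2, -1.0, 0.5])
x_decoded = np.array([0.1, -0.8, 0.4])
mu_z, log_var_z = np.array([0.3, -0.2]), np.array([-1.0, -0.5])

elbo = gaussian_log_likelihood(x, x_decoded) - kl_diag_gauss_to_std_normal(mu_z, log_var_z)
print(elbo)  # reconstruction term minus KL regularizer
```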
Volume Correction¶
This perspective is more in line with the idea of using transformed distributions. If we group the variational distribution and the likelihood term, we get

$$
\mathcal{L}_\mathrm{ELBO} = \mathbb{E}_{q(z)}\left[\log p(z)\right] + \mathbb{E}_{q(z)}\left[\log \frac{p(x \mid z)}{q(z)}\right]
$$
The first term is the reparameterized probability via the expectation over the transform distribution. The second term is the volume correction factor, or likelihood contribution. This formulation was (re-)introduced in the SurVAE Flows paper [Nielsen et al., 2020], where they showcased generalized flows with bijective, surjective, and stochastic transformations.
Variational Free Energy¶
Lastly, we have the Variational Free Energy (VFE) formulation, which is a very common way to motivate the ELBO using free-energy principles and is in part motivated by the Gibbs inequality. If we group the prior and the likelihood term, we get

$$
\mathcal{L}_\mathrm{ELBO} = \underbrace{\mathbb{E}_{q(z)}\left[\log p(x, z)\right]}_{\text{(negative) energy}} + \underbrace{\mathbb{H}\left[q(z)\right]}_{\text{entropy}}
$$
The first term is the energy function, which is the variational expectation over the population loss, or joint distribution. The second term is the entropy of the variational distribution, which acts as a regularizer on the overall complexity of the distribution. This formulation is common in the Bayesian Learning Rule (BLR) literature [Khan & Rue, 2021; Kıral et al., 2023] as well as in the sparse Gaussian process literature [Bauer et al., 2016].
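Since all three groupings are algebraic rearrangements of the same quantity, we can check numerically that the "reconstruction + KL" and "energy + entropy" forms agree; the sketch below assumes the same toy conjugate model as before.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy model (assumed): p(z) = N(0, 1), p(x|z) = N(z, 1), q(z) = N(0.5, 0.8^2)
x, mu_q, sigma_q = 1.5, 0.5, 0.8
z = rng.normal(mu_q, sigma_q, size=200_000)

log_lik = norm.logpdf(x, z, 1.0)           # log p(x|z)
log_prior = norm.logpdf(z, 0.0, 1.0)       # log p(z)
log_q = norm.logpdf(z, mu_q, sigma_q)      # log q(z)

# Data fidelity + prior grouping: E_q[log p(x|z)] - KL[q || p]
elbo_recon_kl = np.mean(log_lik) - np.mean(log_q - log_prior)

# Variational free energy grouping: E_q[log p(x, z)] + H[q]
elbo_energy_entropy = np.mean(log_lik + log_prior) + np.mean(-log_q)

print(elbo_recon_kl, elbo_energy_entropy)  # identical (same samples, rearranged)
```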
Variational Distribution¶
We defined the variational distribution as $q(z)$. However, there are many types of variational distributions we can impose. For example, we have some of the following:
- Delta, $q(z) = \delta(z - \boldsymbol{\mu})$
- Gaussian, $q(z) = \mathcal{N}\left(z \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}\right)$
- Laplace approximation, $q(z) = \mathcal{N}\left(z \mid z_{\mathrm{MAP}}, \mathbf{H}^{-1}\right)$
- Mixture Distribution, $q(z) = \sum_{k=1}^{K} \pi_k\, q_k(z)$
- Bijective Transform (Flow), $q(z) = q_0(\boldsymbol{\epsilon})\left|\det \frac{\partial T_{\phi}(\boldsymbol{\epsilon})}{\partial \boldsymbol{\epsilon}}\right|^{-1}$ with $z = T_{\phi}(\boldsymbol{\epsilon})$
- Stochastic Transform (Encoder, Amortized), $q_{\phi}(z \mid x)$
- Conditional, $q(z \mid x)$
Below we will go through each of them and outline some potential strengths and weaknesses of each of the methods.
Delta Distribution¶
This is probably the distribution with the fewest parameters. We set the covariance matrix to zero, i.e. $\boldsymbol{\Sigma} \rightarrow \mathbf{0}$, and we let all of the mass rest on the mean point, $\boldsymbol{\mu}$,

$$
q(z) = \delta(z - \boldsymbol{\mu})
$$
Note: although this is the most trivial variational distribution, it is the most widely used in optimization algorithms because it is equivalent to MAP estimation (or MLE without any prior), as shown in [Wang & Blei, 2012].
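As a minimal sketch of this equivalence, assuming the toy conjugate model $p(z) = \mathcal{N}(0, 1)$, $p(x \mid z) = \mathcal{N}(z, 1)$: with a delta $q(z)$ the ELBO collapses to the log joint, so finding the best delta is just MAP optimization.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Toy model (assumed): p(z) = N(0, 1), p(x|z) = N(z, 1)
x = 1.5

def neg_log_joint(z):
    """-log p(x, z) = -(log p(x|z) + log p(z))."""
    return -(norm.logpdf(x, z, 1.0) + norm.logpdf(z, 0.0, 1.0))

# A delta q(z) collapses the ELBO to the log joint, so the "best" delta is the MAP.
z_map = minimize_scalar(neg_log_joint).x
print(z_map, x / 2)  # for this conjugate model the MAP equals the posterior mean x/2
```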
Simple, $q(z) = \mathcal{N}\left(z \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}\right)$¶
This is the simplest case, where we assume that a single, simple parametric distribution can describe the posterior.
If we take each of the Gaussian parameters in full, we end up with

$$
\boldsymbol{\mu} \in \mathbb{R}^{D}, \qquad \boldsymbol{\Sigma} \in \mathbb{R}^{D \times D}
$$
For very high-dimensional problems, this is a lot of parameters to learn. Now, we can make various simplifications (or complications) to this. For example, we can simplify the mean, $\boldsymbol{\mu}$, to be zero. The majority of the changes will come from the covariance. Here are a few modifications.
Full Covariance
This is when we parameterize our covariance to be a full covariance matrix, $\boldsymbol{\Sigma} \in \mathbb{R}^{D \times D}$ (dense, symmetric, positive definite). This is easily the most expensive and the most complex of the Gaussian types.
Lower Cholesky
We can also parameterize our covariance with a lower triangular matrix, i.e. $\mathbf{L}$, that satisfies the Cholesky decomposition, i.e. $\boldsymbol{\Sigma} = \mathbf{L}\mathbf{L}^\top$. This reduces the number of free parameters of the full covariance by roughly half ($D(D+1)/2$ instead of $D^2$). It also has desirable properties that are computationally attractive when parameterizing covariance matrices, e.g. it guarantees positive definiteness.
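A minimal sketch of this parameterization, assuming an unconstrained parameter vector of length $D(D+1)/2$ and a softplus on the diagonal (one of several possible choices) to keep it strictly positive:

```python
import numpy as np

def chol_from_unconstrained(theta, d):
    """Build a valid lower-triangular Cholesky factor from an unconstrained vector.

    The d(d+1)/2 entries of `theta` fill the lower triangle; a softplus on the
    diagonal keeps it strictly positive so that Sigma = L @ L.T is positive definite.
    """
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    diag = np.diag_indices(d)
    L[diag] = np.log1p(np.exp(L[diag]))  # softplus
    return L

d = 3
theta = np.random.default_rng(0).normal(size=d * (d + 1) // 2)  # 6 parameters vs. 9
L = chol_from_unconstrained(theta, d)
Sigma = L @ L.T
print(np.linalg.eigvalsh(Sigma))  # all eigenvalues positive => valid covariance
```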
Diagonal Covariance
We can parameterize our covariance matrix to be diagonal, i.e. $\boldsymbol{\Sigma} = \mathrm{diag}(\boldsymbol{\sigma}^2)$. This is a very drastic simplification of our model which limits its expressivity. However, there are immense computational benefits. For example, a $D$-dimensional multivariate Gaussian r.v. with mean $\boldsymbol{\mu}$ and a diagonal covariance is the same as the product of $D$ univariate Gaussians,

$$
\mathcal{N}\left(z \mid \boldsymbol{\mu}, \mathrm{diag}(\boldsymbol{\sigma}^2)\right) = \prod_{d=1}^{D} \mathcal{N}\left(z_d \mid \mu_d, \sigma_d^2\right)
$$
This is also known as the mean-field approximation and it is a very common starting point in practical VI algorithms.
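A quick numerical check of the factorization above, using hypothetical means and scales:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)
d = 4
mu = rng.normal(size=d)
sigma = rng.uniform(0.5, 2.0, size=d)
z = rng.normal(size=d)

log_joint = multivariate_normal.logpdf(z, mean=mu, cov=np.diag(sigma**2))
log_factorised = np.sum(norm.logpdf(z, mu, sigma))
print(log_joint, log_factorised)  # identical: the density factorises across dimensions
```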
Low Rank Multivariate Normal
Another parameterization is a low-rank matrix plus a diagonal matrix, i.e. $\boldsymbol{\Sigma} = \mathbf{W}\mathbf{W}^\top + \mathbf{D}$, where $\mathbf{W} \in \mathbb{R}^{D \times r}$ with $r \ll D$ and $\mathbf{D}$ is diagonal. We assume that our parameterization can be low dimensional, which might be appropriate for some applications. This allows for some computationally efficient schemes that make use of the Woodbury identity and the matrix determinant lemma.
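A minimal numerical sketch (the sizes $D = 500$ and $r = 5$ are arbitrary choices) verifying that the matrix determinant lemma and the Woodbury identity recover the log-determinant and linear solves of $\boldsymbol{\Sigma} = \mathbf{W}\mathbf{W}^\top + \mathbf{D}$ using only $r \times r$ operations:

```python
import numpy as np

rng = np.random.default_rng(0)
D, r = 500, 5
W = rng.normal(size=(D, r))
d = rng.uniform(0.5, 2.0, size=D)          # diagonal entries of D
Sigma = W @ W.T + np.diag(d)

# Matrix determinant lemma: log|Sigma| = log|I_r + W^T D^{-1} W| + sum(log d)
small = np.eye(r) + (W.T / d) @ W
logdet_lemma = np.linalg.slogdet(small)[1] + np.sum(np.log(d))

# Woodbury identity: Sigma^{-1} v using only an r x r solve
v = rng.normal(size=D)
Dinv_v = v / d
solve_woodbury = Dinv_v - (W / d[:, None]) @ np.linalg.solve(small, W.T @ Dinv_v)

print(np.allclose(logdet_lemma, np.linalg.slogdet(Sigma)[1]))  # True
print(np.allclose(solve_woodbury, np.linalg.solve(Sigma, v)))  # True
```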
Orthogonal Decoupled
One interesting approach is to map the variational parameters via a subspace parameterization [Salimbeni et al., 2018]. For example, the mean and the covariance can each be given their own (decoupled) basis, with a larger basis for the mean and a smaller one for the covariance.
This is a bit of a spin-off of the low-rank multivariate normal approach. However, this method takes care to provide a low-rank parameterization for both the mean and the covariance. The authors argue that we can then put more computational effort into the mean function (computationally cheap) and less into the covariance (computationally intensive).
Laplace Approximation¶
Here, we fit a Gaussian centred at the mode of the log joint distribution,

$$
q(z) = \mathcal{N}\left(z \mid z_{\mathrm{MAP}}, \mathbf{H}^{-1}\right)
$$

where:

$$
z_{\mathrm{MAP}} = \operatorname*{arg\,max}_{z}\; \log p(x, z), \qquad \mathbf{H} = -\nabla_z^2 \log p(x, z)\Big|_{z = z_{\mathrm{MAP}}}
$$
This method was popularized by [Kass et al., 1991; MacKay, 1992].
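Here is a minimal sketch of the recipe, assuming a hypothetical non-conjugate toy model with a standard normal prior and a Bernoulli likelihood with logit $z$: find the mode of the log joint, then use the curvature at the mode (here via a finite-difference second derivative) as the Gaussian precision.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Hypothetical toy model: p(z) = N(0, 1), p(x|z) = Bernoulli(sigmoid(z))
x = 1  # a single binary observation

def neg_log_joint(z):
    log_lik = x * z - np.log1p(np.exp(z))  # log Bernoulli(sigmoid(z))
    log_prior = norm.logpdf(z, 0.0, 1.0)
    return -(log_lik + log_prior)

# 1) find the mode, 2) the curvature at the mode gives the Gaussian's precision
z_map = minimize_scalar(neg_log_joint).x
eps = 1e-4
hessian = (neg_log_joint(z_map + eps) - 2 * neg_log_joint(z_map)
           + neg_log_joint(z_map - eps)) / eps**2
q_laplace = norm(loc=z_map, scale=1.0 / np.sqrt(hessian))
print(z_map, q_laplace.std())
```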
Mixture Distribution¶
The principle behind this is that a simple base distribution, e.g. a Gaussian, is not expressive enough. However, a mixture of simple distributions, e.g. a mixture of Gaussians, will be more expressive. So the idea is to choose a simple base distribution and replicate it $K$ times. Then we take a normalized, weighted summation of the components to produce our mixture distribution,

$$
q(z) = \sum_{k=1}^{K} \pi_k\, q_k(z)
$$
where $\pi_k \geq 0$ and $\sum_{k=1}^{K} \pi_k = 1$. For example, we can use a Gaussian distribution,

$$
q_k(z) = \mathcal{N}\left(z \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\right)
$$
where $\{\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\}$ are potentially learned parameters. And the mixture distribution will be

$$
q(z) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\left(z \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\right)
$$
Again, we are free to parameterize the covariances to be as flexible or as restrictive as we like. For example, we can have full, Cholesky, low-rank, or diagonal covariances. In addition, we can tie some of the parameters together. For example, we can have the same covariance matrix for every component, e.g. $\boldsymbol{\Sigma}_k = \boldsymbol{\Sigma}$ for all $k$. Even for VAEs, this becomes a prior distribution which gives a noticeable improvement over the standard Gaussian prior.
Note: in principle, a mixture distribution is very powerful and has the ability to approximate essentially any distribution, e.g. any univariate density given enough components. However, like with most problems, the issue is estimating the best parameters just from observations.
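A minimal sketch of evaluating a mixture density in a numerically stable way, assuming hypothetical weights, means, and (tied) scales:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Hypothetical 1D mixture: K = 3 components with a tied scale
weights = np.array([0.2, 0.5, 0.3])   # pi_k, sums to 1
means = np.array([-2.0, 0.0, 3.0])    # mu_k
scales = np.array([1.0, 1.0, 1.0])    # sigma_k (tied here)

def mixture_logpdf(z):
    """log q(z) = logsumexp_k [ log pi_k + log N(z | mu_k, sigma_k^2) ]."""
    comp_logpdf = norm.logpdf(z[..., None], means, scales)  # shape (..., K)
    return logsumexp(np.log(weights) + comp_logpdf, axis=-1)

z = np.linspace(-5, 6, 5)
print(mixture_logpdf(z))
```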
Reparameterized¶
Gaussian¶
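These headings refer to the reparameterization trick: rather than parameterizing the density of $q(z)$ directly, we draw parameter-free noise $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and set $z = \boldsymbol{\mu} + \mathbf{L}\boldsymbol{\epsilon}$ with $\mathbf{L}\mathbf{L}^\top = \boldsymbol{\Sigma}$, so that gradients of Monte Carlo expectations can flow through the sample to $(\boldsymbol{\mu}, \mathbf{L})$. A minimal sketch with assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
mu = np.array([1.0, -0.5])
L = np.array([[1.0, 0.0],
              [0.3, 0.7]])            # Cholesky factor, Sigma = L @ L.T

# Reparameterization: z = mu + L @ eps, with eps ~ N(0, I)
eps = rng.normal(size=(100_000, d))
z = mu + eps @ L.T

print(z.mean(axis=0))                 # ~ mu
print(np.cov(z, rowvar=False))        # ~ L @ L.T
```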
Bijective Transformation (Flow)¶
It may be that the variational distribution, $q(z)$, is not sufficiently expressive even with the more complex Gaussian parameterizations and/or the mixture distribution. So another option is to use a bijective transformation, $T_{\phi}$, to map samples from a simple base distribution, e.g. a Gaussian, to a more complex distribution for our variational parameter, $z$.
We hope that the resulting variational distribution, $q(z)$, acts as a better approximation to the true posterior. Because our transformation is bijective, we can also invert it to map the variational parameter, $z$, back to the simple base distribution s.t. we have access to an exact density via the change-of-variables formula,

$$
q(z) = q_0(\boldsymbol{\epsilon}) \left|\det \frac{\partial T_{\phi}(\boldsymbol{\epsilon})}{\partial \boldsymbol{\epsilon}}\right|^{-1}, \qquad z = T_{\phi}(\boldsymbol{\epsilon}), \quad \boldsymbol{\epsilon} \sim q_0,
$$

where $\left|\det \frac{\partial T_{\phi}(\boldsymbol{\epsilon})}{\partial \boldsymbol{\epsilon}}\right|$ is the determinant of the Jacobian of the transformation, $T_{\phi}$.
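A minimal sanity check of the change-of-variables formula, assuming the simple bijection $T(\epsilon) = \exp(\epsilon)$ applied to a standard normal base (which should recover the log-normal density):

```python
import numpy as np
from scipy.stats import norm, lognorm

# Base distribution q0 = N(0, 1); bijection T(eps) = exp(eps), so eps = log(z)
z = np.linspace(0.1, 5.0, 5)
eps = np.log(z)

# Change of variables: q(z) = q0(eps) * |dT/deps|^{-1}, with dT/deps = exp(eps) = z
log_q = norm.logpdf(eps) - np.log(z)

print(np.allclose(log_q, lognorm.logpdf(z, s=1.0)))  # True: this is the log-normal density
```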
Stochastic Transformation (Encoder, Amortization)¶
Another type of transformation is a stochastic transformation. This is given by $q_{\phi}(z \mid x)$. In this case, we assume some non-linear functions parameterize the distribution. For example, a Gaussian distribution with a parameterized mean and variance via neural networks,

$$
q_{\phi}(z \mid x) = \mathcal{N}\left(z \mid \boldsymbol{\mu}_{\phi}(x), \boldsymbol{\Sigma}_{\phi}(x)\right)
$$

or, more appropriately, with a diagonal covariance,

$$
q_{\phi}(z \mid x) = \mathcal{N}\left(z \mid \boldsymbol{\mu}_{\phi}(x), \mathrm{diag}\left(\boldsymbol{\sigma}^2_{\phi}(x)\right)\right).
$$
It can be very difficult to find a single variational distribution that is complicated enough to cover the whole posterior. So often we use a variational distribution that is conditioned on the observations, i.e. $q_{\phi}(z \mid x)$. This is known as an encoder because we encode the observations to obtain the parameters of the variational distribution; since the parameters $\phi$ are shared across all observations, this is also called amortization.
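A minimal sketch of an amortized encoder, assuming a tiny randomly initialized tanh network (in practice the weights $\phi$ are learned jointly by maximizing the ELBO) that maps an observation to the mean and log-variance of a diagonal Gaussian $q_{\phi}(z \mid x)$, followed by a reparameterized sample:

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, h_dim, z_dim = 4, 8, 2

# Hypothetical encoder weights (in practice these are learned jointly with the ELBO)
W1, b1 = rng.normal(size=(h_dim, x_dim)) * 0.1, np.zeros(h_dim)
W_mu, b_mu = rng.normal(size=(z_dim, h_dim)) * 0.1, np.zeros(z_dim)
W_lv, b_lv = rng.normal(size=(z_dim, h_dim)) * 0.1, np.zeros(z_dim)

def encode(x):
    """Map an observation to the parameters of q(z|x) = N(mu(x), diag(exp(log_var(x))))."""
    h = np.tanh(W1 @ x + b1)
    return W_mu @ h + b_mu, W_lv @ h + b_lv

def sample_q(x):
    """Reparameterized sample from the amortized posterior."""
    mu, log_var = encode(x)
    return mu + np.exp(0.5 * log_var) * rng.normal(size=z_dim)

x = rng.normal(size=x_dim)
print(encode(x), sample_q(x))
```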
Non-Parametric¶
- Kernels & Stein
- Kingma, D. P., & Welling, M. (2013). Auto-Encoding Variational Bayes. arXiv. 10.48550/ARXIV.1312.6114
- Nielsen, D., Jaini, P., Hoogeboom, E., Winther, O., & Welling, M. (2020). SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows. arXiv. 10.48550/ARXIV.2007.02731
- Khan, M. E., & Rue, H. (2021). The Bayesian Learning Rule. arXiv. 10.48550/ARXIV.2107.04562
- Kıral, E. M., Möllenhoff, T., & Khan, M. E. (2023). The Lie-Group Bayesian Learning Rule. arXiv. 10.48550/ARXIV.2303.04397
- Bauer, M., van der Wilk, M., & Rasmussen, C. E. (2016). Understanding Probabilistic Sparse Gaussian Process Approximations. arXiv. 10.48550/ARXIV.1606.04820