Variational Autoencoders 1: Overview
Variational Autoencoders 2: Maths
Variational Autoencoders 3: Training, Inference, and Comparison with Other Models
Recalling that the backbone of VAEs is the following equation:

$$\log p_\theta(x) \;\geq\; \mathbb{E}_{z \sim q_\phi(z|x)}\big[\log p_\theta(x|z)\big] \;-\; D_{KL}\big(q_\phi(z|x) \,\|\, p(z)\big)$$

In order to use gradient descent on the right-hand side, we need a tractable way to compute it:
- The first part $\mathbb{E}_{z \sim q_\phi(z|x)}\big[\log p_\theta(x|z)\big]$ is tricky, because a good approximation of the expectation requires passing multiple samples of $z$ through $p_\theta(x|z)$, which is expensive. However, we can just take one sample of $z$, pass it through $p_\theta(x|z)$ and use that as an estimate of the expectation (written out right after this list). Eventually we are doing stochastic gradient descent over different samples $x$ in the training set anyway.
- The second part $D_{KL}\big(q_\phi(z|x) \,\|\, p(z)\big)$ is even more tricky. By design, we fix $p(z)$ to be the standard normal distribution $\mathcal{N}(0, I)$ (read part 1 to know why). Therefore, we need a way to parameterize $q_\phi(z|x)$ so that the KL divergence is tractable.
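Written out, the single-sample estimate mentioned in the first point is simply:

$$\mathbb{E}_{z \sim q_\phi(z|x)}\big[\log p_\theta(x|z)\big] \;\approx\; \log p_\theta(x \mid \tilde{z}), \qquad \tilde{z} \sim q_\phi(z|x)$$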
Here comes perhaps the most important approximation of VAEs. Since $p(z)$ is standard Gaussian, it is convenient to have $q_\phi(z|x)$ also Gaussian. One popular way to parameterize $q_\phi(z|x)$ is to make it a Gaussian with mean $\mu(x)$ and diagonal covariance $\sigma^2(x) I$, i.e. $q_\phi(z|x) = \mathcal{N}\big(z; \mu(x), \sigma^2(x) I\big)$, where $\mu(x)$ and $\sigma(x)$ are two vectors computed by a neural network. This is the original formulation of VAEs in section 3 of this paper.
This parameterization is preferred because the KL divergence now has a closed form:

$$D_{KL}\big(q_\phi(z|x) \,\|\, \mathcal{N}(0, I)\big) \;=\; \frac{1}{2} \sum_{j} \Big(\mu_j^2(x) + \sigma_j^2(x) - \log \sigma_j^2(x) - 1\Big)$$
Although this looks like magic, it is quite natural once you apply the definition of the KL divergence to two normal distributions. Doing so will teach you a bit of calculus.
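As a quick numerical sanity check, here is a small sketch (assuming PyTorch; `mu` and `logvar` are placeholder encoder outputs, with `logvar` the log of the diagonal variance) that compares the closed-form expression against the KL computed by `torch.distributions`:

```python
import torch
from torch.distributions import Normal, kl_divergence

# Hypothetical encoder outputs for one example: mean and log-variance of q(z|x)
mu = torch.randn(8)
logvar = torch.randn(8)
std = torch.exp(0.5 * logvar)

# Closed-form KL( N(mu, sigma^2 I) || N(0, I) ), summed over latent dimensions
kl_closed = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1)

# Same quantity via torch.distributions, as a sanity check
kl_lib = kl_divergence(Normal(mu, std), Normal(torch.zeros(8), torch.ones(8))).sum()

print(kl_closed.item(), kl_lib.item())  # should agree up to floating-point error
```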
So we have all the ingredients. We use a feedforward net to predict $\mu(x)$ and $\sigma(x)$ given an input sample $x$ drawn from the training set. With those vectors, we can compute the KL divergence and $\log p_\theta(x|z)$, which, in terms of optimization, will translate into something similar to the L2 reconstruction loss $\lVert x - \hat{x} \rVert^2$.
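Putting the pieces together, the per-sample training objective could look roughly like the sketch below (assuming PyTorch and an L2 reconstruction term; `x_hat`, `mu` and `logvar` are placeholder names for the decoder output and the encoder outputs):

```python
import torch

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term: with a Gaussian decoder, -log p(x|z) reduces to an L2 distance
    recon = torch.sum((x - x_hat) ** 2)
    # Closed-form KL between q(z|x) = N(mu, sigma^2 I) and the standard normal prior
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1)
    # Minimizing this sum maximizes a one-sample estimate of the lower bound
    return recon + kl
```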
It is worth pausing here for a moment to see what we just did. Basically we used a constrained Gaussian (with diagonal covariance matrix) to parameterize $q_\phi(z|x)$. Moreover, by using the L2 loss $\lVert x - \hat{x} \rVert^2$ as one of the training criteria, we implicitly assume $p_\theta(x|z)$ to be Gaussian as well. So although the maths that lead to VAEs are generic and beautiful, at the end of the day, to make things tractable, we ended up using these severe approximations. Whether those approximations are good enough depends entirely on the practical application.
There is an important detail though. Once we have $\mu(x)$ and $\sigma(x)$ from the encoder, we will need to sample $z$ from a Gaussian distribution parameterized by those vectors. $z$ is needed for the decoder to produce the reconstruction $\hat{x}$, which will then be optimized to be as close to $x$ as possible via gradient descent. Unfortunately, the “sample” step is not differentiable, so we need a trick called reparameterization: instead of sampling $z$ directly from $\mathcal{N}\big(\mu(x), \sigma^2(x) I\big)$, we first sample $\epsilon$ from $\mathcal{N}(0, I)$ and then compute $z = \mu(x) + \sigma(x) \odot \epsilon$. This makes the whole computation differentiable and we can apply gradient descent as usual.
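In code, the trick amounts to a couple of lines (again a PyTorch sketch; `mu` and `logvar` are the hypothetical encoder outputs, with `logvar` the log of the diagonal variance):

```python
import torch

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)  # eps ~ N(0, I): the only source of randomness
    return mu + std * eps        # z is now a differentiable function of mu and logvar
```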
The cool thing is that during inference you won’t need the encoder to compute $\mu(x)$ and $\sigma(x)$ at all! Remember that during training we try to pull $q_\phi(z|x)$ close to $p(z)$ (which is standard normal), so during inference we can just sample $z \sim \mathcal{N}(0, I)$, inject it directly into the decoder, and get a sample of $x$. This is how we can leverage the power of “generation” from VAEs.
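A minimal generation sketch, with an untrained stand-in `decoder` and arbitrary dimensions just to make it runnable (in practice you would use the trained decoder network):

```python
import torch
import torch.nn as nn

latent_dim, data_dim, n_samples = 16, 784, 8

# Stand-in for the trained decoder network
decoder = nn.Sequential(nn.Linear(latent_dim, 400), nn.ReLU(), nn.Linear(400, data_dim))

with torch.no_grad():
    z = torch.randn(n_samples, latent_dim)  # sample directly from the prior N(0, I)
    x_generated = decoder(z)                # no encoder is needed at inference time
```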
There are various extensions to VAEs, like Conditional VAEs and so on, but once you understand the basics, everything else is just nuts and bolts.
To sum up the series, this is the conceptual graph of VAEs during training, compared to some other models. Of course there are many details in those graphs that are left out, but you should get a rough idea about how they work.
In the case of VAEs, I added the additional cost term in blue to highlight it. The cost term for the other models, except GANs, is the usual L2 norm $\lVert x - \hat{x} \rVert^2$.
GSN is an extension of the Denoising Autoencoder with explicit hidden variables; however, it requires forming a fairly complicated Markov chain. We may have another post on it.
With this diagram, hopefully you will see how lame GANs are. They are even simpler than the humble RBM. However, that simplicity is exactly what makes GANs so powerful, while the complexity of VAEs makes them quite an effort just to understand. Moreover, VAEs make quite a few severe approximations, which might explain why samples generated from VAEs are far less realistic than those from GANs.
That’s quite enough for now. Next time we will switch to another topic I’ve been looking into recently.
