Our March Deep Learning Workshop is fast approaching. If you’ve been VAEing up over whether or not to attend, maybe the prospect of some VAEry exciting new content will tip you over the edge.
Terrifying, aren’t they? You too will be able to generate terrifying emojis by traversing the emoji latent space you discover with your very own Variational Autoencoder. VAEs are a flavor of unsupervised generative neural network. Unlike a plain Autoencoder, though, a VAE lets you interpolate between points in latent space and produce meaningful reconstructions! This makes it particularly interesting for feature extraction. What you see above is the result of traversing from somewhere near worried-face to face-screaming-in-fear on a 2D latent emoji plane.
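That traversal is nothing fancier than walking a straight line between two latent points and decoding each stop along the way. Here's a minimal sketch — the 2-D codes for the two emojis are made-up illustrative values, not real model output:

```python
# Hypothetical sketch: walking the latent space between two emojis.

def lerp(a, b, t):
    """Linearly interpolate between two 2-D latent points, t in [0, 1]."""
    return [a_i + t * (b_i - a_i) for a_i, b_i in zip(a, b)]

worried = [-1.2, 0.4]     # assumed latent code for worried-face
screaming = [2.0, -1.5]   # assumed latent code for face-screaming-in-fear

# Five evenly spaced stops along the traversal; each point would be fed
# to the decoder to render one in-between emoji.
path = [lerp(worried, screaming, t / 4) for t in range(5)]
```

Each decoded stop looks like a plausible emoji precisely because the VAE has forced the space between training points to be meaningful.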
During training, the VAE takes an image of an emoji and squashes it through a bottleneck of just 2 elements. It then uses those 2 elements to reconstruct the input emoji as best it can.
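The squeeze-and-reconstruct idea can be sketched in a few lines. This toy version assumes a flattened emoji image as a plain list of pixel values, and the weights here are random — in a real model they'd be learned by minimising reconstruction error:

```python
import random

random.seed(0)
PIXELS, LATENT = 16, 2  # a tiny 4x4 "emoji", squashed down to 2 numbers

# Random linear encoder/decoder weights (stand-ins for learned ones).
enc_w = [[random.gauss(0, 0.1) for _ in range(PIXELS)] for _ in range(LATENT)]
dec_w = [[random.gauss(0, 0.1) for _ in range(LATENT)] for _ in range(PIXELS)]

def encode(image):
    """Squash the image through the 2-element bottleneck."""
    return [sum(w * p for w, p in zip(row, image)) for row in enc_w]

def decode(code):
    """Reconstruct a full image from just the 2 latent numbers."""
    return [sum(w * c for w, c in zip(row, code)) for row in dec_w]

image = [random.random() for _ in range(PIXELS)]
code = encode(image)
reconstruction = decode(code)
```

Everything the decoder knows about the input has to fit through those 2 numbers — that's what makes the bottleneck a feature extractor.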
But wait a minute. Cool ya jets, Josh. Isn’t that just an Autoencoder? Well done, you beautiful, beautiful person, it sure is! On top of trying to reconstruct the input, a VAE also tries to give some structure to the latent space it discovers by shoehorning (technical term) it into a distribution it can sample from.
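Concretely, the shoehorning usually works like this: the encoder outputs a mean and a log-variance per latent dimension, a sample is drawn with the reparameterization trick so gradients can still flow, and a KL divergence term penalises codes that stray from a standard normal. A sketch of those two pieces, assuming a 2-D latent space:

```python
import math
import random

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, with eps ~ N(0, 1).
    Writing the sample this way keeps mu and sigma differentiable."""
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions.
    This is the extra loss term that gives the latent space its structure."""
    return -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

A code that already matches the prior (zero mean, unit variance) pays no KL penalty; anything else gets nudged back toward it, which is why you can later sample from the prior and still decode something sensible.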
If you want to get a head start, check out these links:
- Introduces the idea of the VAE (Heads-up! It’s pretty dense.)
- A great blog post on VAEs.
- This is an excellent implementation of a VAE using the MNIST dataset.
We’ll leave you with a 2D representation of the VAEmoji latent space. This plot shows the point in the latent space that each emoji excites most. Can you see anything interesting? Do you have any ideas for where this type of feature extraction would be useful? Maybe you can do better… (hint: I think you can 😉)