PyProbML VAE zoo 🐘
Author: Ming Liang Ang. Summer 2021.
Compare results of the different VAEs:
VAE tricks and what the different VAEs try to address:
A collection of Variational AutoEncoders (VAEs) implemented in PyTorch, with a focus on reproducibility and on creating reusable blocks that can be used in any project. The aim of this project is to provide quick, simple working examples of many of the cool VAE ideas in the textbook. All models are trained on the CelebA dataset for consistency and comparison.
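As a hedged illustration of the common core shared by the models below (not the repository's actual code), here is a minimal fully-connected VAE with the reparameterization trick and the negative ELBO loss; the layer sizes and the Gaussian/MSE reconstruction term are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal fully-connected VAE sketch (illustrative, not the repo's model)."""
    def __init__(self, x_dim=784, h_dim=128, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(recon, x, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)),
    # with the KL in closed form for a diagonal Gaussian posterior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

Most of the variants in the results table keep this skeleton and change only the loss (e.g. how the KL term is weighted or replaced).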
Requirements
Python >= 3.7
PyTorch >= 1.8
PyTorch Lightning == 1.4.0
CUDA enabled computing device
To download this subdirectory only
Copy the URL of the subdirectory and paste it into this website, then download the subdirectory as a zip file.
Instructions For Training The Models
Download the CelebA data
Important: make sure to get your kaggle.json by following these instructions, then run
to copy kaggle.json into the expected folder. Then, to download the data, first download the following script
and run it:
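The Kaggle CLI looks for credentials at `~/.kaggle/kaggle.json` and requires owner-only permissions. A hedged sketch of the copy step (the helper name `install_kaggle_token` and the assumption that `kaggle.json` sits in the current directory are ours, not the repo's):

```python
import os
import shutil
import stat

def install_kaggle_token(src="kaggle.json"):
    """Copy a downloaded kaggle.json into ~/.kaggle/ with 600 permissions."""
    dest_dir = os.path.expanduser("~/.kaggle")
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, "kaggle.json")
    shutil.copy(src, dest)
    # The Kaggle CLI warns/refuses if the token is readable by other users.
    os.chmod(dest, stat.S_IRUSR | stat.S_IWUSR)
    return dest
```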
To Train A Model
Results
Model | Paper | Reconstruction | Samples |
---|---|---|---|
Original Images (for reconstruction) | N/A | ![]() | N/A |
AE (Code, Config) | N/A | ![]() | ![]() |
VAE (Code, Config) | Link | ![]() | ![]() |
beta-VAE (Code, Config) | Link | ![]() | ![]() |
Hinge VAE (Code, Config) | Link | ![]() | ![]() |
MMD VAE (Code, Config) | Link | ![]() | ![]() |
Info VAE (Code, Config) | Link | ![]() | ![]() |
LogCosh VAE (Code, Config) | Link | ![]() | ![]() |
Two-stage VAE (Code, Config) | Link | ![]() | ![]() |
Sigma VAE (Code, Config) | Link | ![]() | ![]() |
VQ-VAE (K = 512, D = 64) (Code, Config) + PixelCNN(Code) | Link | ![]() | ![]() |
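To make the comparison concrete, here is a hedged sketch of how one variant in the table, beta-VAE, modifies the standard loss: it simply scales the KL term by a factor beta > 1 to encourage disentangled latents. The function name and the MSE reconstruction term are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon, x, mu, logvar, beta=4.0):
    """Negative ELBO with a beta-weighted KL term (beta=1 recovers the vanilla VAE)."""
    rec = F.mse_loss(recon, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```

The other entries vary the same recipe: MMD VAE and Info VAE swap or augment the KL with a maximum-mean-discrepancy term, LogCosh VAE changes the reconstruction loss, and VQ-VAE replaces the Gaussian latent with a discrete codebook.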
Acknowledgement
The idea of this zoo and some of the scripts are based on Anand Krishnamoorthy's PyTorch-VAE library. We also used a script from sayantanauddy to download and transform the CelebA data from Kaggle.