## Training examples
Creating a training image set is described in a different document (see the 🤗 Datasets documentation on creating image datasets).
### Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
**Important**
To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
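A typical install-from-source sequence looks like this (a sketch; your checkout's own instructions are authoritative):

```bash
# clone the repository and install diffusers from source
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```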
Then cd into the example folder and run:
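Assuming the standard repository layout, that is:

```bash
# the example folder for this README
cd examples/unconditional_image_generation
pip install -r requirements.txt
```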
And initialize an 🤗 Accelerate environment with:
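```bash
# interactively choose your hardware, number of GPUs, mixed precision, etc.
accelerate config
```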
### Unconditional Flowers
The command to train a DDPM UNet model on the Oxford Flowers dataset:
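A representative invocation is sketched below. The dataset name matches the example model linked underneath, but the hyperparameter values are illustrative; check `python train_unconditional.py --help` for the authoritative flags in your version of the script:

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --resolution=64 --center_crop --random_flip \
  --output_dir="ddpm-ema-flowers-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --use_ema \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --push_to_hub
```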
An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64
A full training run takes 2 hours on 4xV100 GPUs.
### Unconditional Pokemon
The command to train a DDPM UNet model on the Pokemon dataset:
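The invocation mirrors the flowers one, swapping the dataset and output directory (again a sketch; verify the flags against your version of the script):

```bash
accelerate launch train_unconditional.py \
  --dataset_name="huggan/pokemon" \
  --resolution=64 --center_crop --random_flip \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --use_ema \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --push_to_hub
```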
An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64
A full training run takes 2 hours on 4xV100 GPUs.
### Using your own data
To use your own dataset, there are 2 ways:

- you can either provide your own folder as `--train_data_dir`
- or you can upload your dataset to the hub (possibly as a private repo, if you prefer), and simply pass the `--dataset_name` argument.
Below, we explain both in more detail.
#### Provide the dataset as a folder
If you provide your own folders with images, the script expects the following directory structure:
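That is, all images live (possibly in subfolders) under one root directory; the file names below are illustrative:

```
data_dir/xxx.png
data_dir/xxy.png
data_dir/[...]/xxz.png
```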
In other words, the script will take care of gathering all images inside the folder. You can then run the script like this:
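A sketch of the invocation, with placeholders in angle brackets:

```bash
accelerate launch train_unconditional.py \
    --train_data_dir <path-to-train-directory> \
    <other-arguments>
```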
Internally, the script will use the `ImageFolder` feature which will automatically turn the folders into 🤗 Dataset objects.
#### Upload your data to the hub, as a (possibly private) repo
It's very easy (and convenient) to upload your image dataset to the hub using the `ImageFolder` feature available in 🤗 Datasets. Simply do the following:
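A minimal sketch using the 🤗 Datasets `imagefolder` loader (the paths are placeholders):

```python
from datasets import load_dataset

# example 1: load all images from a local folder
dataset = load_dataset("imagefolder", data_dir="path_to_your_folder")

# example 2: load images from a local archive (e.g. a zip file)
dataset = load_dataset("imagefolder", data_files="path_to_zip_file")

# example 3: define the splits explicitly
dataset = load_dataset(
    "imagefolder",
    data_files={
        "train": ["path/to/file1", "path/to/file2"],
        "test": ["path/to/file3", "path/to/file4"],
    },
)
```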
`ImageFolder` will create an `image` column containing the PIL-encoded images.
Next, push it to the hub!
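For example (the dataset name is a placeholder, and this assumes you have already authenticated with `huggingface-cli login`):

```python
# push the dataset to your account on the Hub
dataset.push_to_hub("name_of_your_dataset")

# pass private=True if you want a private repo
dataset.push_to_hub("name_of_your_dataset", private=True)
```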
and that's it! You can now train your model by simply setting the `--dataset_name` argument to the name of your dataset on the hub.
More on this can also be found in this blog post.