Unconditional image generation

Unconditional image generation is a relatively straightforward task. The model only generates images resembling the data it was trained on, without any additional context such as text or another image.

The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference.

Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. You can use any of the 🧨 Diffusers checkpoints from the Hub (the checkpoint you'll use generates images of butterflies).

💡 Want to train your own unconditional image generation model? Take a look at the training guide to learn how to generate your own images.

In this guide, you'll use DiffusionPipeline for unconditional image generation with DDPM:

from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
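Any other unconditional checkpoint on the Hub can be loaded the same way. As a minimal sketch, assuming the google/ddpm-celebahq-256 checkpoint (a DDPM trained on face images):

# Loading a different unconditional checkpoint from the Hub works identically.
faces_generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256")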

The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. You can move the generator object to a GPU, just like you would in PyTorch:

generator.to("cuda")

Now you can use the generator to generate an image:

image = generator().images[0]

By default, the output is wrapped in a PIL.Image object.
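The .images attribute is a list, so you can also sample several images in one call. This is a rough sketch assuming the underlying DDPM pipeline accepts the usual batch_size argument:

# Sample a batch of images in a single call; .images is a list of PIL images.
images = generator(batch_size=4).images
print(len(images))  # 4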

You can save the image by calling:

image.save("generated_image.png")

Try out the accompanying Spaces demo on the Hub, and feel free to play around with the num_inference_steps parameter to see how it affects the image quality!
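For example, fewer denoising steps trade image quality for speed (num_inference_steps is the standard argument of the pipeline call; the DDPM pipeline defaults to a much larger number of steps):

# Fewer denoising steps run faster but typically produce lower-quality samples.
fast_image = generator(num_inference_steps=100).images[0]
fast_image.save("generated_image_100_steps.png")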