Real-time collaboration for Jupyter Notebooks, Linux Terminals, LaTeX, VS Code, R IDE, and more,
all in one place. Commercial Alternative to JupyterHub.
Image2Image Pipeline for Stable Diffusion using 🧨 Diffusers
This notebook shows how to create a custom diffusers
pipeline for text-guided image-to-image generation with the Stable Diffusion model, using the 🤗 Hugging Face 🧨 Diffusers library.
For a general introduction to the Stable Diffusion model, please refer to this Colab.
To use private and gated models on the 🤗 Hugging Face Hub, you need to log in. If you are only using a public checkpoint (such as CompVis/stable-diffusion-v1-4, as in this notebook), you can skip this step.
Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default
git config --global credential.helper store
Image2Image pipeline.
Load the pipeline
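The load step can be sketched as below. This assumes `diffusers` and `torch` are installed and a CUDA GPU is available; the imports are deferred so the definition itself is dependency-free.

```python
MODEL_ID = "CompVis/stable-diffusion-v1-4"  # public checkpoint used in this notebook

def load_img2img_pipeline(device="cuda"):
    """Build the image-to-image pipeline (sketch; needs diffusers + torch).

    Uses float16 weights to fit comfortably in GPU memory.
    """
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    )
    return pipe.to(device)
```
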
Download an initial image and preprocess it so we can pass it to the pipeline.
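One way to preprocess the image, assuming PIL and NumPy are available (imports deferred into the function): resize to the pipeline's expected resolution and rescale pixel values from [0, 255] to [-1, 1], the range the VAE encoder expects.

```python
def preprocess(image, size=(768, 512)):
    """Resize a PIL image and rescale pixels to [-1, 1] (sketch).

    size is (width, height); 768x512 matches the landscape example
    commonly used with this checkpoint.
    """
    import numpy as np

    image = image.convert("RGB").resize(size)
    arr = np.asarray(image, dtype=np.float32) / 255.0  # now in [0, 1]
    return 2.0 * arr - 1.0  # now in [-1, 1], shape (H, W, 3)
```
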
Define the prompt and run the pipeline.
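A hedged sketch of the call, with the import deferred into the function. The keyword for the input image has varied across diffusers releases (older versions used `init_image`, newer ones use `image`); the names below are assumptions, not a fixed API.

```python
def run_img2img(pipe, prompt, init_image, strength=0.75, guidance_scale=7.5):
    """Run one text-guided image-to-image generation (sketch).

    Assumes a CUDA pipeline; parameter names may differ between
    diffusers releases.
    """
    import torch

    with torch.autocast("cuda"):
        result = pipe(
            prompt=prompt,
            image=init_image,          # `init_image=` in older releases
            strength=strength,         # how much noise to add (0.0-1.0)
            guidance_scale=guidance_scale,
        )
    return result.images[0]
```
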
Here, strength
is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values approaching 1.0 allow for many variations but will also produce images that are not semantically consistent with the input.
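The mechanism behind strength can be shown in plain Python: the pipeline skips the earliest (noisiest) scheduler steps, running only the last `strength` fraction of them, so low strength means little noise and few denoising steps. This is a sketch of the idea; the hypothetical function below is not part of the diffusers API.

```python
def strength_to_steps(strength, num_inference_steps):
    """Return (start_step, steps_actually_run) for a given strength (sketch).

    With strength=1.0 the full schedule runs from pure noise; with a
    small strength only the final few denoising steps are applied.
    """
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start, num_inference_steps - t_start

# strength=0.75 over a 50-step schedule: start at step 13, run 37 steps
```
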
As you can see, when using a lower value for strength
, the generated image stays closer to the original image.
Now let's use the LMSDiscreteScheduler instead.
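Swapping schedulers can be sketched as follows (import deferred; assumes a diffusers pipeline object). Building the new scheduler from the old one's config keeps the noise-schedule settings consistent.

```python
def with_lms_scheduler(pipe):
    """Replace the pipeline's scheduler with LMSDiscreteScheduler (sketch)."""
    from diffusers import LMSDiscreteScheduler

    # Reuse the existing schedule configuration (betas, timesteps, etc.)
    pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
    return pipe
```
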