From f9190471d012b6e47400ad37b583101c06526698 Mon Sep 17 00:00:00 2001
From: Satpal Singh Rathore
Date: Thu, 8 Sep 2022 14:32:09 +0530
Subject: [PATCH 1/2] Update conditional_image_generation.mdx

---
 .../conditional_image_generation.mdx | 34 ++++++++++++++-----
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/docs/source/using-diffusers/conditional_image_generation.mdx b/docs/source/using-diffusers/conditional_image_generation.mdx
index 044f3937b9bb..e3c5efcaa267 100644
--- a/docs/source/using-diffusers/conditional_image_generation.mdx
+++ b/docs/source/using-diffusers/conditional_image_generation.mdx
@@ -12,21 +12,39 @@ specific language governing permissions and limitations under the License.

-# Quicktour
+# Conditional Image Generation

-Start using Diffusers🧨 quickly!
-To start, use the [`DiffusionPipeline`] for quick inference and sample generations!
+The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
+Start by creating an instance of [`DiffusionPipeline`] and specifying which pipeline checkpoint you would like to download.
+You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
+In this guide, though, you'll use [`DiffusionPipeline`] for text-to-image generation with [Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256):
+
+```python
+>>> from diffusers import DiffusionPipeline
+
+>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
 ```
-pip install diffusers
+The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
+Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU.
+You can move the generator object to GPU, just like you would in PyTorch.
+
+```python
+>>> generator.to("cuda")
 ```

-## Main classes
+Now you can use the `generator` on your text prompt:

-### Models
+```python
+>>> image = generator("An image of a squirrel in Picasso style").images[0]
+```
+
+The output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) by default.

-### Schedulers
+You can save the image by simply calling:

-### Pipeliens
+```python
+>>> image.save("image_of_squirrel_painting.png")
+```

From a652c3d538d48086e3aeef010685435620bd9763 Mon Sep 17 00:00:00 2001
From: Satpal Singh Rathore
Date: Thu, 8 Sep 2022 14:38:22 +0530
Subject: [PATCH 2/2] Update unconditional_image_generation.mdx

---
 .../unconditional_image_generation.mdx | 36 ++++++++++++++-----
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/docs/source/using-diffusers/unconditional_image_generation.mdx b/docs/source/using-diffusers/unconditional_image_generation.mdx
index 044f3937b9bb..8f5449f8fbe5 100644
--- a/docs/source/using-diffusers/unconditional_image_generation.mdx
+++ b/docs/source/using-diffusers/unconditional_image_generation.mdx
@@ -12,21 +12,41 @@ specific language governing permissions and limitations under the License.

-# Quicktour
+# Unconditional Image Generation

-Start using Diffusers🧨 quickly!
-To start, use the [`DiffusionPipeline`] for quick inference and sample generations!
+The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
+Start by creating an instance of [`DiffusionPipeline`] and specifying which pipeline checkpoint you would like to download.
+You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
+In this guide, though, you'll use [`DiffusionPipeline`] for unconditional image generation with [DDPM](https://arxiv.org/abs/2006.11239):
+
+```python
+>>> from diffusers import DiffusionPipeline
+
+>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256")
+```
+The [`DiffusionPipeline`] downloads and caches all modeling and scheduling components.
+Because the diffusion process denoises the image over many steps, we strongly recommend running it on GPU.
+You can move the generator object to GPU, just like you would in PyTorch.
+
+```python
+>>> generator.to("cuda")
 ```
-pip install diffusers
+
+Now you can use the `generator` to generate an image:
+
+```python
+>>> image = generator().images[0]
 ```

-## Main classes
+The output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) by default.

-### Models
+You can save the image by simply calling:
+
+```python
+>>> image.save("generated_image.png")
+```

-### Schedulers

-### Pipeliens