
[Docs] Pipelines for inference #417


Merged
merged 2 commits on Sep 8, 2022
34 changes: 26 additions & 8 deletions docs/source/using-diffusers/conditional_image_generation.mdx
@@ -12,21 +12,39 @@ specific language governing permissions and limitations under the License.



# Conditional Image Generation

The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.

Start by creating an instance of [`DiffusionPipeline`] and specifying which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide, though, you'll use the [`DiffusionPipeline`] for text-to-image generation with [Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256):

```python
>>> from diffusers import DiffusionPipeline

>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```
The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
You can move the generator object to a GPU, just as you would in PyTorch:

```python
>>> generator.to("cuda")
```
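
If GPU memory is tight, you can also try loading the weights in half precision. This is a minimal sketch, assuming your diffusers version supports the `torch_dtype` argument to `from_pretrained` and that the checkpoint's weights behave well in float16:

```python
>>> import torch
>>> from diffusers import DiffusionPipeline

>>> # Assumption: `torch_dtype` is supported here; float16 roughly halves GPU memory use
>>> generator = DiffusionPipeline.from_pretrained(
...     "CompVis/ldm-text2im-large-256", torch_dtype=torch.float16
... )
>>> generator.to("cuda")
```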

Now you can use the `generator` on your text prompt:

```python
>>> image = generator("An image of a squirrel in Picasso style").images[0]
```
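
Pipeline calls typically accept extra arguments that trade speed for quality or make runs reproducible. A sketch under assumptions: the exact signature depends on the pipeline class, so treat `num_inference_steps` and `generator` here as assumptions rather than guaranteed parameters:

```python
>>> import torch

>>> # A torch.Generator seeded on the same device as the pipeline makes runs reproducible
>>> seed = torch.Generator(device="cuda").manual_seed(0)
>>> image = generator(
...     "An image of a squirrel in Picasso style",
...     num_inference_steps=50,  # fewer denoising steps is faster, possibly at lower quality
...     generator=seed,
... ).images[0]
```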

By default, the output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).

You can save the image by simply calling:

```python
>>> image.save("image_of_squirrel_painting.png")
```
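
Many text-to-image pipelines also accept a list of prompts and return one image per prompt. A sketch, assuming this pipeline's prompt argument accepts a list:

```python
>>> prompts = [
...     "An image of a squirrel in Picasso style",
...     "An image of a squirrel in Monet style",
... ]
>>> images = generator(prompts).images  # one PIL image per prompt
>>> for i, img in enumerate(images):
...     img.save(f"squirrel_{i}.png")
```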


36 changes: 28 additions & 8 deletions docs/source/using-diffusers/unconditional_image_generation.mdx
@@ -12,21 +12,41 @@ specific language governing permissions and limitations under the License.



# Unconditional Image Generation

The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.

Start by creating an instance of [`DiffusionPipeline`] and specifying which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide, though, you'll use the [`DiffusionPipeline`] for unconditional image generation with [DDPM](https://arxiv.org/abs/2006.11239):

```python
>>> from diffusers import DiffusionPipeline

>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256")
```
The [`DiffusionPipeline`] downloads and caches all modeling and scheduling components.
Because sampling runs the full denoising loop step by step, we strongly recommend running it on a GPU.
You can move the generator object to a GPU, just as you would in PyTorch:

```python
>>> generator.to("cuda")
```

Now you can use the `generator` to generate an image:

```python
>>> image = generator().images[0]
```
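
Unconditional pipelines usually let you sample several images in one call. A minimal sketch, assuming the pipeline's call accepts a `batch_size` argument:

```python
>>> # Assumption: the pipeline call accepts `batch_size`
>>> images = generator(batch_size=4).images
>>> for i, img in enumerate(images):
...     img.save(f"generated_image_{i}.png")
```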

By default, the output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).

You can save the image by simply calling:

```python
>>> image.save("generated_image.png")
```
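
To make sampling reproducible, you can pass a seeded `torch.Generator`. A sketch, assuming the pipeline's call accepts a `generator` argument and that the generator is created on the same device as the model:

```python
>>> import torch

>>> # Assumption: the pipeline call accepts `generator`; the same seed reproduces the same image
>>> seed = torch.Generator(device="cuda").manual_seed(0)
>>> image = generator(generator=seed).images[0]
```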
