The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
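For reference, below is a minimal sketch of how such a `generator` pipeline can be created; the checkpoint name `CompVis/ldm-text2im-large-256` is an assumption, chosen to match the roughly 1.4 billion parameter text-to-image model described next.

```python
>>> from diffusers import DiffusionPipeline

>>> # Assumed checkpoint; downloading it caches the modeling, tokenization,
>>> # and scheduling components mentioned above.
>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```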
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
You can move the generator object to GPU, just like you would in PyTorch.
```python
>>> generator.to("cuda")
```
Now you can use the `generator` on your text prompt:
```python
>>> image = generator("An image of a squirrel in Picasso style").images[0]
```
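Most pipelines also accept additional generation arguments at call time. As a sketch, assuming the commonly available `num_inference_steps` and `guidance_scale` parameters (the exact names depend on the loaded pipeline):

```python
>>> # Assumed call arguments: trade generation speed against quality and
>>> # control how closely the image follows the prompt.
>>> image = generator(
...     "An image of a squirrel in Picasso style",
...     num_inference_steps=50,
...     guidance_scale=7.5,
... ).images[0]
```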
By default, the output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).
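Since it is a regular PIL image, you can use the standard PIL API on it, for example to save it to disk (the filename here is just illustrative):

```python
>>> image.save("squirrel_picasso.png")
```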