diff --git a/examples/dreambooth/train_dreambooth_lora_sd3.py b/examples/dreambooth/train_dreambooth_lora_sd3.py
index 2c66c341f78f..fe2720047198 100644
--- a/examples/dreambooth/train_dreambooth_lora_sd3.py
+++ b/examples/dreambooth/train_dreambooth_lora_sd3.py
@@ -101,19 +101,37 @@ def save_model_card(
 
 ## Model description
 
-These are {repo_id} DreamBooth weights for {base_model}.
+These are {repo_id} DreamBooth LoRA weights for {base_model}.
 
-The weights were trained using [DreamBooth](https://dreambooth.github.io/).
+The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
 
-LoRA for the text encoder was enabled: {train_text_encoder}.
+Was LoRA for the text encoder enabled? {train_text_encoder}.
 
 ## Trigger words
 
-You should use {instance_prompt} to trigger the image generation.
+You should use `{instance_prompt}` to trigger the image generation.
 
 ## Download model
 
-[Download]({repo_id}/tree/main) them in the Files & versions tab.
+[Download the *.safetensors LoRA]({repo_id}/tree/main) in the Files & versions tab.
+
+## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+```py
+from diffusers import AutoPipelineForText2Image
+import torch
+pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
+pipeline.load_lora_weights('{repo_id}', weight_name='pytorch_lora_weights.safetensors')
+image = pipeline('{validation_prompt if validation_prompt else instance_prompt}').images[0]
+```
+
+### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
+
+- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/{repo_id}/blob/main/diffusers_lora_weights.safetensors)**.
+    - Rename it and place it on your `models/Lora` folder.
+    - On AUTOMATIC1111, load the LoRA by adding `` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
+
+For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
 
 ## License
 
diff --git a/examples/dreambooth/train_dreambooth_sd3.py b/examples/dreambooth/train_dreambooth_sd3.py
index c8f2fb1ac61b..9a72294c20bd 100644
--- a/examples/dreambooth/train_dreambooth_sd3.py
+++ b/examples/dreambooth/train_dreambooth_sd3.py
@@ -95,17 +95,22 @@ def save_model_card(
 
 These are {repo_id} DreamBooth weights for {base_model}.
 
-The weights were trained using [DreamBooth](https://dreambooth.github.io/).
+The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
 
-Text encoder was fine-tuned: {train_text_encoder}.
+Was the text encoder fine-tuned? {train_text_encoder}.
 
 ## Trigger words
 
-You should use {instance_prompt} to trigger the image generation.
+You should use `{instance_prompt}` to trigger the image generation.
 
-## Download model
+## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
 
-[Download]({repo_id}/tree/main) them in the Files & versions tab.
+```py
+from diffusers import AutoPipelineForText2Image
+import torch
+pipeline = AutoPipelineForText2Image.from_pretrained('{repo_id}', torch_dtype=torch.float16).to('cuda')
+image = pipeline('{validation_prompt if validation_prompt else instance_prompt}').images[0]
+```
 
 ## License
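One detail worth noting about the templates above: both scripts build the card body as a Python f-string, which is why an expression such as `{validation_prompt if validation_prompt else instance_prompt}` can appear inside the Markdown text; it is evaluated when the card is generated rather than kept as a literal placeholder. The sketch below only illustrates that pattern; `build_model_description` and its signature are hypothetical, not the trainers' actual helper.

```py
# Minimal sketch of the f-string pattern behind the model cards above.
# build_model_description is a hypothetical helper, not the trainers' code.
from pathlib import Path
from typing import Optional


def build_model_description(
    repo_id: str,
    base_model: str,
    instance_prompt: str,
    validation_prompt: Optional[str] = None,
) -> str:
    # Placeholders like {repo_id} are filled in here, and the conditional
    # expression falls back to the instance prompt when no validation prompt was given.
    return f"""These are {repo_id} DreamBooth LoRA weights for {base_model}.

You should use `{instance_prompt}` to trigger the image generation.

Example prompt: {validation_prompt if validation_prompt else instance_prompt}
"""


# Example: render a card body and write it out as the repo's README.md.
description = build_model_description(
    repo_id="your-username/your-sd3-lora",  # hypothetical repo id
    base_model="stabilityai/stable-diffusion-3-medium-diffusers",
    instance_prompt="a photo of sks dog",
    validation_prompt=None,
)
Path("README.md").write_text(description, encoding="utf-8")
```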