
[bug fix] fixes #6444 - checkpointing save issue in advanced dreambooth lora sdxl script #6464


Merged
merged 6 commits into from
Jan 5, 2024

Conversation

linoytsaban
Collaborator

@linoytsaban linoytsaban commented Jan 5, 2024

Fixes the bug described in #6444 that occurs when both
--checkpointing_steps < --max_train_steps and --train_text_encoder_ti are enabled.

With pivotal tuning enabled, we optimize the token embeddings but do not modify the text encoder weights, so the text encoder LoRA layers should not be saved in each checkpoint (which is what caused the error). This PR modifies save_model_hook so that they are saved only when full text encoder fine-tuning is performed.
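The intended checkpointing behavior can be sketched as a small decision helper. This is a simplified, hypothetical illustration (the real script uses accelerate's save hooks and peft state-dict utilities, not this function): with pivotal tuning (`--train_text_encoder_ti`) only the new token embeddings are optimized, so a checkpoint should contain the UNet LoRA layers plus the learned embeddings, while text encoder LoRA layers belong in a checkpoint only under full text encoder fine-tuning.

```python
def select_checkpoint_contents(train_text_encoder: bool,
                               train_text_encoder_ti: bool) -> dict:
    """Decide which components belong in a mid-training checkpoint.

    Hypothetical helper mirroring the save_model_hook logic:
    - UNet LoRA layers are always saved.
    - Text encoder LoRA layers exist only when the text encoder itself
      is fine-tuned (not when only its embeddings are, i.e. pivotal tuning).
    - Learned token embeddings must be saved whenever pivotal tuning is on.
    """
    return {
        "unet_lora": True,
        "text_encoder_lora": train_text_encoder and not train_text_encoder_ti,
        "token_embeddings": train_text_encoder_ti,
    }
```

With `--train_text_encoder_ti` the helper yields no text encoder LoRA entry but does include the embeddings, which matches the fix described above.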

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@linoytsaban
Collaborator Author

cc @sayakpaul for review 😊

Member

@sayakpaul sayakpaul left a comment

Thanks for the quick resolve!

…l_advanced.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
@sayakpaul
Member

Feel free to merge once the CI is green!

@sayakpaul sayakpaul merged commit 2fada8d into huggingface:main Jan 5, 2024
@linoytsaban linoytsaban deleted the advanced_script_peft branch February 8, 2024 17:07
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
…ed dreambooth lora sdxl script (huggingface#6464)

* unwrap text encoder when saving hook only for full text encoder tuning

* unwrap text encoder when saving hook only for full text encoder tuning

* save embeddings in each checkpoint as well

* save embeddings in each checkpoint as well

* save embeddings in each checkpoint as well

* Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
3 participants