handle lora scale and clip skip in lpw sd and sdxl community pipelines #8988
Conversation
```diff
@@ -11,15 +11,19 @@
 from diffusers import DiffusionPipeline
 from diffusers.configuration_utils import FrozenDict
 from diffusers.image_processor import VaeImageProcessor
-from diffusers.loaders import FromSingleFileMixin, StableDiffusionLoraLoaderMixin, TextualInversionLoaderMixin
+from diffusers.loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
```
Any reason why we're not using the newer `StableDiffusionLoraLoaderMixin` here?
```diff
@@ -268,6 +292,16 @@ def get_weighted_text_embeddings(
         skip_weighting (`bool`, *optional*, defaults to `False`):
             Skip the weighting. When the parsing is skipped, it is forced True.
     """
+    # set lora scale so that monkey patched LoRA
+    # function of text encoder can correctly access it
+    if lora_scale is not None and isinstance(pipe, LoraLoaderMixin):
```
We recently shipped #8981. I think this should be `StableDiffusionLoraLoaderMixin`. Maybe @sayakpaul can provide more clarity.
`LoraLoaderMixin` is fine, but it's just deprecated. Prefer using `StableDiffusionLoraLoaderMixin`, as @a-r-r-o-w mentioned.
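For context, a minimal sketch of how the check could look with the non-deprecated mixin (the `pipe._lora_scale` assignment is an assumption, mirroring the analogous code in the official pipelines):

```python
from diffusers.loaders import StableDiffusionLoraLoaderMixin

# Store the scale on the pipeline so the text encoder's LoRA layers can
# read it during prompt encoding; assumes `pipe` and `lora_scale` are the
# variables from the diff above.
if lora_scale is not None and isinstance(pipe, StableDiffusionLoraLoaderMixin):
    pipe._lora_scale = lora_scale
```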
Awesome, thanks! I think some changes from #8981 were mistakenly removed here. Could you revert them?
What?

I don't mean for you to revert #8981 😨 I thought the author of the current PR removed your changes by mistake.

@a-r-r-o-w yes, I removed those changes by mistake; it should be fixed now.
Looks good to me! @sayakpaul could take a final look.
```diff
+    # dynamically adjust the LoRA scale
+    if not USE_PEFT_BACKEND:
+        adjust_lora_scale_text_encoder(pipe.text_encoder, lora_scale)
```
Do we need this? Without the PEFT backend, you cannot really do LoRA inference in recent diffusers versions. No strong opinion either way.
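For reference, a sketch of the full branch as it appears in the official pipelines such as `pipeline_stable_diffusion.py` (the import paths are an assumption based on what those pipelines use):

```python
from diffusers.models.lora import adjust_lora_scale_text_encoder
from diffusers.utils import USE_PEFT_BACKEND, scale_lora_layers

if not USE_PEFT_BACKEND:
    # Legacy path: patch the LoRA scale directly onto the text encoder modules.
    adjust_lora_scale_text_encoder(pipe.text_encoder, lora_scale)
else:
    # PEFT path: scale every PEFT-managed LoRA layer in the text encoder.
    scale_lora_layers(pipe.text_encoder, lora_scale)
```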
```diff
+    if pipe.text_encoder_2 is not None:
+        if not USE_PEFT_BACKEND:
+            adjust_lora_scale_text_encoder(pipe.text_encoder_2, lora_scale)
+        else:
+            scale_lora_layers(pipe.text_encoder_2, lora_scale)
```
Same as above.
I just copied these lines from `pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py`. Should I just leave `scale_lora_layers(pipe.text_encoder_2, lora_scale)`?
Oh then it's okay.
Thanks. Just left two minor comments.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@noskill let's fix the code quality issues and then we can merge.

Thank you for your contributions!
handle lora scale and clip skip in lpw sd and sdxl community pipelines (#8988)

* handle lora scale and clip skip in lpw sd and sdxl
* use StableDiffusionLoraLoaderMixin
* use StableDiffusionXLLoraLoaderMixin
* style

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
What does this PR do?

Handle the LoRA scale in `cross_attention_kwargs` and clip skip.
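A hedged usage sketch of what this enables (the model id and LoRA path are placeholders, and the call signature assumes the community pipeline now mirrors the standard Stable Diffusion pipelines):

```python
import torch
from diffusers import DiffusionPipeline

# Load the long-prompt-weighting (lpw) community pipeline; the model id is a placeholder.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # placeholder LoRA path

image = pipe(
    prompt="a (detailed:1.2) portrait of an astronaut",
    cross_attention_kwargs={"scale": 0.7},  # LoRA scale, now also applied to the text encoder
    clip_skip=2,  # number of final CLIP layers to skip when encoding the prompt
).images[0]
```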