
Lora compatibility #111

Open
edicam opened this issue Sep 11, 2024 · 10 comments

Comments

@edicam

edicam commented Sep 11, 2024

Sorry, have you tried using the created LoRA file in other applications? It can't be loaded by ComfyUI or Forge, and it can't be converted using the sd-scripts or ai-toolkit scripts. I know it works in the Jupyter notebook, but the network used there is the one generated during training; it is not loaded from the newly created file.

I have a new LoRA file, but I can't use it anywhere :)

Thanks for your precious work.

ETA: the LoRA file was saved as a .pt file

@rohitgandikota
Owner

Hey! Thanks for raising this very important concern.

Can you tell me if using the peft library would fix this compatibility issue?

I am currently a little occupied with some other projects, but I can find some time to do a sliders-with-peft implementation code release in the next few weeks.

@edicam
Author

edicam commented Sep 11, 2024

Tried to convert the created LoRA to PEFT format using the "convert_lora_to_peft_format.py" script from ai-toolkit, which is one of the most widely used solutions for training Flux LoRAs.

When trying to convert a LoRA saved as .safetensors:

(venv) H:\ai-toolkit>python scripts\convert_lora_to_peft_format.py E:\FLUX\fluxTest\slider_0.safetensors E:\FLUX\fluxTest\slider_0_1.safetensors
Traceback (most recent call last):
  File "H:\ai-toolkit\scripts\convert_lora_to_peft_format.py", line 49, in <module>
    raise ValueError(f'Could not find rank in state dict')
ValueError: Could not find rank in state dict

When trying to convert a LoRA saved as .pt:

(venv) H:\ai-toolkit>python scripts\convert_lora_to_peft_format.py E:\FLUX\fluxTest\slider_0.pt E:\FLUX\fluxTest\slider_0_1.safetensors
Traceback (most recent call last):
  File "H:\ai-toolkit\scripts\convert_lora_to_peft_format.py", line 27, in <module>
    state_dict = load_file(args.input_path)
  File "H:\ai-toolkit\venv\lib\site-packages\safetensors\torch.py", line 313, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

I must say I'm not a specialist, but the "HeaderTooLarge" message usually points to a badly formatted file.

Take all the time you need; I'm not here to pressure you, I'm here to help test your solution.

And thanks again.
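For anyone else hitting this: "HeaderTooLarge" typically means safetensors interpreted the first 8 bytes of a non-safetensors file as a header length, and a torch.save file's magic bytes decode to an absurdly large number. A minimal, pure-stdlib sketch for telling the layouts apart (illustrative only; the function name and return labels are my own):

```python
def sniff_checkpoint_format(path):
    """Best-effort sniff of a checkpoint's on-disk layout.

    - safetensors: 8-byte little-endian header length, then a JSON
      header that starts with '{'
    - torch.save (PyTorch >= 1.6): a zip archive, magic bytes 'PK'
    - older torch.save: a raw pickle stream (first byte 0x80)
    """
    with open(path, "rb") as f:
        head = f.read(9)
    if head[:2] == b"PK":
        return "torch-zip"
    if head[:1] == b"\x80":
        return "torch-pickle"
    if len(head) == 9 and head[8:9] == b"{":
        return "safetensors"
    return "unknown"
```

Running this on the .pt file before calling the converter would show immediately that load_file was never going to accept it.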

@sanguivore-easyco

sanguivore-easyco commented Sep 16, 2024

#80 (comment) - looks like you can just update the line where the file is saved.

Although when I try to use said safetensors in ComfyUI, it seems to do nothing, and I get a bunch of this in the logs:

[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_add_v_proj.alpha
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_add_v_proj.lora_down.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_add_v_proj.lora_up.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_add_out.alpha
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_add_out.lora_down.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_add_out.lora_up.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_k.alpha
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_k.lora_down.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_k.lora_up.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_out_0.alpha
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_out_0.lora_down.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_out_0.lora_up.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_q.alpha
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_q.lora_down.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_q.lora_up.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_v.alpha
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_v.lora_down.weight
[2024-09-16 23:35] lora key not loaded: lora_unet_transformer_blocks_9_attn_to_v.lora_up.weight

@edicam
Author

edicam commented Sep 18, 2024

> #80 (comment) - looks like you can just update the line where the file is saved
>
> Although when I try to use said safetensors in ComfyUI, it seems to do nothing and I get a bunch of this in the logs

You can save it as .safetensors or .pt ... it will not work either way.

@sanguivore-easyco

> #80 (comment) - looks like you can just update the line where the file is saved
>
> Although when I try to use said safetensors in ComfyUI, it seems to do nothing and I get a bunch of this in the logs

> You can save it as .safetensors or .pt ... it will not work either way.

This fork supposedly contains the training script for some Flux slider LoRAs that appear to work (at least as they've been uploaded to Civitai): https://github.com/ntc-ai/sliders-conceptmod/blob/main/conceptmod/textsliders/train_lora_flux.py. I was having trouble getting it to run yesterday, though.

@edicam
Author

edicam commented Sep 18, 2024

> This fork supposedly contains the training script for some Flux slider LoRAs that appear to work (at least as they've been uploaded to Civitai): https://github.com/ntc-ai/sliders-conceptmod/blob/main/conceptmod/textsliders/train_lora_flux.py. I was having trouble getting it to run yesterday, though.

Sorry, this fork (conceptmod) always produces an error; it's not possible to create LoRAs with it. The author must be using another, unpublished version to create the sliders he published.

@sanguivore-easyco

sanguivore-easyco commented Sep 18, 2024

> This fork supposedly contains the training script for some Flux slider LoRAs that appear to work (at least as they've been uploaded to Civitai): https://github.com/ntc-ai/sliders-conceptmod/blob/main/conceptmod/textsliders/train_lora_flux.py. I was having trouble getting it to run yesterday, though.

> Sorry, this fork (conceptmod) always produces an error; it's not possible to create LoRAs with it. The author must be using another, unpublished version to create the sliders he published.

Upgrading all deps and adding this (with its pathlib import) before the conceptmod imports seems to have allowed it to progress to the training loop (I haven't run it to completion yet):

from pathlib import Path
import sys

sys.path.append(str(Path(__file__).resolve().parent.parent.parent))

Also needed to set the name_or_path in the config to https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/flux1-schnell.safetensors

EDIT: at least the first file from 500 steps gets the HeaderTooLarge error :( and "Could not find rank in state dict" from the peft converter script.

@edicam
Author

edicam commented Sep 21, 2024

> EDIT: at least the first file from 500 steps gets the HeaderTooLarge error :( and "Could not find rank in state dict" from the peft converter script.

Both scripts use the same routine to save the LoRA, so the resulting files (.pt or .safetensors) produce exactly the same errors.
It's important to note that the LoRA works very well when used with the inference code from the provided Jupyter notebook. The question here is how the file is being saved; there is some issue in the way the file is written to disk.
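For context, the layout that safetensors-consuming tools expect is simple: an 8-byte little-endian header length, a JSON header describing each tensor, then the concatenated raw tensor bytes. A file produced by torch.save (a zip/pickle container) has none of this, which is consistent with the errors above. A minimal pure-stdlib sketch of a writer for that layout (illustrative only; in practice the real safetensors library should be used, and the dtype strings here are my own inputs):

```python
import json
import struct

def write_minimal_safetensors(path, tensors):
    """Write a file in the safetensors layout.

    `tensors` maps name -> (dtype_str, shape, raw_little_endian_bytes),
    i.e. the tensor data is assumed to be serialized already.
    Layout: 8-byte little-endian header length, a JSON header mapping
    each tensor name to its dtype, shape and byte offsets, then the
    concatenated raw tensor data.
    """
    header = {}
    payload = bytearray()
    for name, (dtype, shape, raw) in tensors.items():
        start = len(payload)
        payload.extend(raw)
        header[name] = {
            "dtype": dtype,
            "shape": list(shape),
            "data_offsets": [start, len(payload)],
        }
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))
        f.write(header_bytes)
        f.write(bytes(payload))
```

Swapping torch.save for a safetensors-style save in the training script is essentially what the fix in #80 amounts to.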

@edicam
Author

edicam commented Sep 24, 2024

So, after some study, I was able to convert the .pt file that I mentioned in the first post to a usable .safetensors file.

https://gist.github.com/edicam/7d4974e81aa6970fa97ba0f17a2d2e3d

The resulting file was tested in ComfyUI and works like a charm; the slider is very good!

The script above is based on https://github.com/ostris/ai-toolkit/blob/main/scripts/convert_lora_to_peft_format.py. I'm not an AI specialist, so I don't mind if it's not optimized or could have better syntax... it works, it's fast, and that's what matters most to me.

Thanks again to @rohitgandikota for this great repository!
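The core of a conversion like the one in the gist is renaming keys from the diffusers/PEFT convention to the kohya-style names that show up in the ComfyUI log (lora_unet_..., lora_down/lora_up). A rough sketch of that renaming, assuming diffusers-style input keys (the exact key set in any given checkpoint may differ, and the function name is my own):

```python
def to_kohya_keys(state_dict):
    """Rename diffusers/PEFT-style LoRA keys to kohya-style names.

    e.g. 'transformer.transformer_blocks.9.attn.to_q.lora_A.weight'
      -> 'lora_unet_transformer_blocks_9_attn_to_q.lora_down.weight'
    """
    out = {}
    for key, value in state_dict.items():
        new_key = key
        if new_key.startswith("transformer."):
            new_key = new_key[len("transformer."):]
        new_key = new_key.replace(".lora_A.weight", ".lora_down.weight")
        new_key = new_key.replace(".lora_B.weight", ".lora_up.weight")
        # Dots become underscores in the module path, but not in the
        # trailing '.lora_down.weight' / '.lora_up.weight' / '.alpha'.
        for suffix in (".lora_down.weight", ".lora_up.weight", ".alpha"):
            if new_key.endswith(suffix):
                prefix = new_key[: -len(suffix)]
                new_key = "lora_unet_" + prefix.replace(".", "_") + suffix
                break
        out[new_key] = value
    return out
```

The tensor values pass through unchanged; only the names move, which is why a converted file can load in ComfyUI while the original raised "lora key not loaded" for every entry.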

@sanguivore-easyco

> So, after some study, I was able to convert the .pt file that I mentioned in the first post to a usable .safetensors file.
>
> https://gist.github.com/edicam/7d4974e81aa6970fa97ba0f17a2d2e3d
>
> The resulting file was tested in ComfyUI and works like a charm; the slider is very good!
>
> The script above is based on https://github.com/ostris/ai-toolkit/blob/main/scripts/convert_lora_to_peft_format.py. I'm not an AI specialist, so I don't mind if it's not optimized or could have better syntax... it works, it's fast, and that's what matters most to me.
>
> Thanks again to @rohitgandikota for this great repository!

I was able to make your gist work for the conceptmod ones as well with minor edits: https://gist.github.com/sanguivore-easyco/e1757f2ecfb0e352f2eefa22ed3a8259 (in case anyone else ends up here looking).

Thanks @edicam and @rohitgandikota
