haofanwang / lora-for-diffusers
The easiest-to-understand tutorial for using LoRA (Low-Rank Adaptation) within the diffusers framework, for AI generation researchers 🔥
License: MIT License
Hi,
I am trying to convert a LoRA from safetensors format to .bin using the script in format_convert.py. The .bin file is generated successfully, but it always throws a HeaderTooLarge error when loading it. Could you please help? Thanks in advance!
Below is the script that gives the above error. Env: Google Colab.
import torch
from diffusers import StableDiffusionPipeline
from format_convert import safetensors_to_bin  # from this repo

# load the base diffusers model
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)

# convert
# you have to download a suitable safetensors file; not all are supported!
# download example from https://huggingface.co/SenY/LoRA/tree/main
# wget https://huggingface.co/SenY/LoRA/resolve/main/CheapCotton.safetensors
safetensor_path = "CheapCotton.safetensors"
bin_path = "CheapCotton.bin"
safetensors_to_bin(safetensor_path, bin_path)

# load it into the UNet
# note: diffusers' load_attn_procs only supports adding LoRA to attention layers;
# LoRA weights inserted elsewhere are not supported yet
pipeline.unet.load_attn_procs(bin_path)
Hi again
the script worked fine on my dev machine, but after I moved it to production, an exception occurs:
line 242, in applyLora
    state_dict = load_file(lora['path'])
  File "/usr/local/lib/python3.8/dist-packages/safetensors/torch.py", line 98, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
Exception: Error while deserializing header: HeaderTooLarge
Do you know what might be the cause?
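A likely diagnostic (a stdlib-only sketch, not part of this repo): a real safetensors file begins with an 8-byte little-endian integer giving the size of a JSON header. HeaderTooLarge usually means that length field is garbage because the file is not actually safetensors — e.g. an HTML error page saved by a failed download, or a torch-pickled .bin passed to safetensors' load_file. The helper name peek_safetensors_header below is hypothetical:

```python
import json
import struct

def peek_safetensors_header(path, max_header=100_000_000):
    """Read the length-prefixed JSON header of a safetensors file.

    A valid safetensors file starts with an 8-byte little-endian u64 header
    size, followed by a JSON header. If the file is not really safetensors
    (a pickled .bin, an HTML page from a broken download, ...), that number
    is garbage -- the same condition that makes safetensors raise
    HeaderTooLarge."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        if n > max_header:
            raise ValueError(f"header size {n} is implausible -- not a safetensors file?")
        return json.loads(f.read(n).decode("utf-8"))
```

If this raises on your file, re-download it (make sure you fetched the resolve/main URL, not the HTML page) and double-check you are not handing a torch-pickled .bin to safetensors' load_file.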
A "suitable lora" is mentioned in this conversion script
Lora-for-Diffusers/format_convert.py
Lines 154 to 161 in 18adfa4
Can you explain what that means?
Thanks for your good work! Can this script convert a LoRA safetensors file for SDXL to diffusers format?
!cd ./diffusers
checkpoint_path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v181.safetensors"
dump_path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Converted-Model"
!python ./scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "{checkpoint_path}" --dump_path "{dump_path}" --from_safetensors
this throws an error:
Error: python3: can't open file '/content/./scripts/convert_original_stable_diffusion_to_diffusers.py': [Errno 2] No such file or directory
I am a beginner and don't know much about diffusers and all the other stuff, so please kindly help me.
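A likely cause (a sketch, assuming this runs in a Colab notebook): every "!" line runs in its own throwaway subshell, so "!cd ./diffusers" never changes the notebook's working directory, and the relative script path then fails to resolve. The usual fix is the %cd magic (or an absolute path such as /content/diffusers/scripts/...). The subshell behavior itself can be verified with plain Python:

```python
import os
import subprocess

# Each "!" line in Colab is equivalent to running the command in a child
# shell, as below -- the child's cd cannot affect the parent process.
before = os.getcwd()
subprocess.run("cd /tmp", shell=True, check=True)  # analogous to "!cd /tmp"
after = os.getcwd()
assert before == after  # the notebook's working directory is unchanged
```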
pipeline = StableDiffusionPipeline.from_pretrained(save_dir,torch_dtype=torch.float32)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'save_dir' is not defined
What should I do?
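For what it's worth, this NameError just means save_dir was never assigned in that Python session; it has to be set to the folder the converted pipeline was saved to before it is used (the path below is a hypothetical placeholder):

```python
# Hypothetical path -- point this at the folder your converted
# diffusers-format model was dumped to.
save_dir = "/content/drive/MyDrive/sd/converted-model"

# then (requires diffusers installed and a valid model directory at save_dir):
# import torch
# from diffusers import StableDiffusionPipeline
# pipeline = StableDiffusionPipeline.from_pretrained(save_dir, torch_dtype=torch.float32)
```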
Hi there,
I got the following error when I tried to use the script provided.
TypeError: to() received an invalid combination of arguments - got (torch.dtype, NoneType), but expected one of:
You can reproduce it with the following Colab link,
https://colab.research.google.com/drive/1BXOY6kjhU4qeRDfAM70GMDaqidqs-Jrg?usp=sharing
Any suggestions will be appreciated!
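For context, this TypeError appears when .to() receives a positional argument combination matching none of its overloads — typically a None device passed positionally alongside a dtype. A minimal reproduction and fix, assuming the Colab hits this same pattern:

```python
import torch

t = torch.zeros(2)

# Reproduces the error: (dtype, None) matches no .to() overload, because the
# second positional slot of the dtype overload is a bool (non_blocking).
try:
    t.to(torch.float16, None)
except TypeError as e:
    print(type(e).__name__)

# Fix: pass keyword arguments -- device=None is then a legal no-op.
t16 = t.to(device=None, dtype=torch.float16)
assert t16.dtype == torch.float16
```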
In the diffusers LoRA documentation, you can adjust the merging ratio through cross_attention_kwargs.
Is there a way to do that with this safetensors approach?
If not, do you know how to convert a LoRA safetensors file to diffusers weights? I tried the scripts that convert ckpt/safetensors to diffusers, and none of them worked.
Tried it already, but there's no point. Massive key mismatch.
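One option worth noting (a sketch, not this repo's API): when merging safetensors LoRA weights into the base model manually, the merging ratio has a direct analogue to cross_attention_kwargs — it multiplies the low-rank update before it is added, W' = W + scale * (alpha/rank) * (up @ down):

```python
import torch

def merge_lora_weight(W, up, down, alpha, scale=1.0):
    """Merge one LoRA up/down pair into a base weight with an adjustable ratio.

    `scale` plays the same role here as cross_attention_kwargs={"scale": ...}
    does at inference time in diffusers."""
    rank = down.shape[0]
    return W + scale * (alpha / rank) * (up @ down)
```

scale=0 leaves the base weight untouched; scale=1 applies the full LoRA.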
Hi, thanks a lot for your great work. I am converting a LoRA file in safetensors format, downloaded from Civitai, using your format_convert.py. Then I load the converted .bin file with pipe.unet.load_attn_procs, but I get the following error:
RuntimeError: Error(s) in loading state_dict for LoRACrossAttnProcessor:
size mismatch for to_q_lora.down.weight: copying a param with shape torch.Size([128, 320]) from checkpoint, the shape in current model is torch.Size([4, 320]).
It seems to be related to the config of the UNet's attention processor, but I could not find the corresponding documentation. Could you please provide some suggestions?
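For what it's worth, the 128-vs-4 mismatch looks like a LoRA rank mismatch: the down-projection of a LoRA layer has shape (rank, in_features), so this checkpoint was trained at rank 128 while the attention processors being loaded into were built with the default rank 4. A small sketch of the shape convention:

```python
import torch

in_features = 320  # to_q input width at this UNet block

# what the Civitai checkpoint contains: a rank-128 LoRA down-projection
ckpt_down = torch.zeros(128, in_features)   # torch.Size([128, 320])

# what a default rank-4 processor expects
default_down = torch.zeros(4, in_features)  # torch.Size([4, 320])

assert ckpt_down.shape != default_down.shape  # hence the size mismatch
```

Newer diffusers releases infer the rank from the state dict when loading; if yours does not, upgrading diffusers (or constructing the processors with the matching rank) is the usual fix.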
Hi @haofanwang thanks for the repo ❤
maybe this part of convert_name_to_safetensors should produce names like this:
Correct:
lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight
lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight
Wrong:
lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out.lora_down.weight
lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out.lora_up.weight
Facing this error when running the script on every .safetensors file.
Another question: convert_lora_safetensor_to_diffusers.py
converts safetensors to diffusers format. After training a LoRA model, I have the following in the output folder and checkpoint subfolder:
How do I convert them into safetensors files like the ones I downloaded from Civitai or Hugging Face, so that I can use them via Automatic1111?
Thanks a lot!!
With stable-diffusion-v1-5, the default tokenizer only allows 77 tokens. Are you getting the desired results with the prompt included in your wanostyle example? I'm seeing this warning and poor image results:
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['_ offset : 1 >, closed shirt, anime screencap, scar under eye, ready to fight, black eyes']
Thank you so much for this work!! It helps me a lot.
By the way, have you considered porting the LyCORIS models to the diffusers API?
When I tried to load the converted file using load_attn_procs in version 0.15.0, an error occurred; it worked fine after I downgraded to version 0.14.0. But even after conversion, the resulting .bin file is less than half the expected size, I get bad images when loading it for inference, and inference with the .bin is very slow!
I run !python ./scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /content/drive/MyDrive/sd/stable-diffusion-webui/models/Lora/100img_1-5.safetensors --dump_path /content/drive/MyDrive/Stable_diffusion/stable_diffusion_lora_weights/lora-only-100img_1-5 --from_safetensors
and I get this error:
/usr/local/lib/python3.9/dist-packages/flax/core/frozen_dict.py:169: FutureWarning: jax.tree_util.register_keypaths is deprecated, and will be removed in a future release. Please use `register_pytree_with_keys()` instead.
jax.tree_util.register_keypaths(
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
2023-03-26 13:58:12.913301: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-03-26 13:58:12.913397: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-03-26 13:58:12.913418: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
global_step key not found in model
Traceback (most recent call last):
  File "/content/diffusers/./scripts/convert_original_stable_diffusion_to_diffusers.py", line 128, in <module>
    pipe = download_from_original_stable_diffusion_ckpt(
  File "/usr/local/lib/python3.9/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1157, in download_from_original_stable_diffusion_ckpt
    converted_unet_checkpoint = convert_ldm_unet_checkpoint(
  File "/usr/local/lib/python3.9/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 379, in convert_ldm_unet_checkpoint
    new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
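The likely cause (a sketch of the reasoning): convert_original_stable_diffusion_to_diffusers.py expects a full Stable Diffusion checkpoint, whose UNet state dict contains keys like model.diffusion_model.time_embed.0.weight. The file passed in (models/Lora/100img_1-5.safetensors) is a LoRA file, which only stores lora_unet_... / lora_te_... delta weights, so the converter's lookup fails with this KeyError. A quick pre-check one could run on the state dict keys:

```python
def looks_like_full_sd_checkpoint(state_dict_keys):
    """Heuristic: full SD checkpoints carry UNet time-embedding weights,
    while LoRA files only carry low-rank delta keys."""
    return any("time_embed" in k for k in state_dict_keys)

# e.g. keys obtained from safetensors.torch.load_file(path).keys()
lora_keys = ["lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_q.lora_down.weight"]
full_keys = ["model.diffusion_model.time_embed.0.weight"]
```

LoRA files should instead go through format_convert.py / convert_lora_safetensor_to_diffusers.py, not the full-checkpoint converter.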
Last time it was some other error...
So I'm running this:
python3 /root/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /sdcard/Insertion.safetensors --dump_path /root/Insert --from_safetensors
and getting this:
Traceback (most recent call last):
  File "/root/diffusers/./scripts/convert_original_stable_diffusion_to_diffusers.py", line 154, in <module>
    pipe = download_from_original_stable_diffusion_ckpt(
TypeError: download_from_original_stable_diffusion_ckpt() got an unexpected keyword argument 'checkpoint_path_or_dict'
Any idea why?
I just want to be able to load a LoRA while using SD on Termux proot.
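A plausible explanation (sketch): the checked-out conversion script is newer than the installed diffusers package — newer scripts pass checkpoint_path_or_dict, a keyword that older releases of download_from_original_stable_diffusion_ckpt did not accept. Installing the diffusers version that matches the script checkout (e.g. pip install -e /root/diffusers) usually resolves it. The mismatch can be checked directly with the standard library:

```python
import inspect

def accepts_kwarg(fn, name):
    """True if fn can be called with the keyword argument `name`."""
    params = inspect.signature(fn).parameters
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )

# With diffusers installed, one would check (names taken from the traceback above):
# from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
#     download_from_original_stable_diffusion_ckpt,
# )
# accepts_kwarg(download_from_original_stable_diffusion_ckpt, "checkpoint_path_or_dict")
```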