
lora-for-diffusers's People

Contributors

erjanmx, haofanwang, lebowitz


lora-for-diffusers's Issues

SafetensorError: Error while deserializing header: HeaderTooLarge

Hi,

I am trying to convert a LoRA from safetensors format to .bin using the script in format_convert.py. The .bin file was generated successfully, but loading it always throws a HeaderTooLarge error. Could you please help? Thanks in advance!


Below is the script that gives the above error. Env: google colab.

# imports (added for completeness; safetensors_to_bin is assumed to come from the format_convert.py mentioned above)
import torch
from diffusers import StableDiffusionPipeline
from format_convert import safetensors_to_bin

# load diffusers model
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)

# convert
# you have to download a suitable safetensors file, not all are supported!
# download example from https://huggingface.co/SenY/LoRA/tree/main
# wget https://huggingface.co/SenY/LoRA/resolve/main/CheapCotton.safetensors
safetensor_path = "CheapCotton.safetensors"

bin_path = "CheapCotton.bin"
safetensors_to_bin(safetensor_path, bin_path)

# load it into the UNet
# note that diffusers' load_attn_procs only supports adding LoRA to the attention layers;
# LoRA weights inserted anywhere else are not supported yet
pipeline.unet.load_attn_procs(bin_path)

Exception: Error while deserializing header: HeaderTooLarge

Hi again

The script worked fine on my dev machine, but when I moved it to production, this exception happens:

  line 242, in applyLora
    state_dict = load_file(lora['path'])
  File "/usr/local/lib/python3.8/dist-packages/safetensors/torch.py", line 98, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
Exception: Error while deserializing header: HeaderTooLarge

Do you know what might be the cause?
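A small diagnostic sketch that may help narrow this down (an assumption about the cause, not confirmed in this thread: HeaderTooLarge usually means the file is not really a safetensors file, e.g. a git-lfs pointer or an HTML error page was saved instead of the weights; the file name below is the one from the script above):

import json
import struct

# safetensors files begin with an 8-byte little-endian header length followed by a JSON header;
# an implausible length here usually means the file is not actually a safetensors file
with open("CheapCotton.safetensors", "rb") as f:
    header_len = struct.unpack("<Q", f.read(8))[0]
    print("declared header length:", header_len)
    if header_len < 100_000_000:
        header = json.loads(f.read(header_len))
        print("entries in header:", len(header))
    else:
        print("header length is implausible; this is probably not a valid safetensors file")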

"Suitable Lora" Conversion

A "suitable lora" is mentioned in this conversion script

# convert
# you have to download a suitable safetensors file, not all are supported!
# download example from https://huggingface.co/SenY/LoRA/tree/main
# wget https://huggingface.co/SenY/LoRA/resolve/main/CheapCotton.safetensors
safetensor_path = "CheapCotton.safetensors"
bin_path = "CheapCotton.bin"
safetensors_to_bin(safetensor_path, bin_path)

Can you explain what that means?
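This isn't answered in the thread, but based on the comment above that diffusers' load_attn_procs only handles LoRA applied to the UNet attention layers, a rough key check looks like this (a heuristic sketch, using the file name from the example above):

from safetensors.torch import safe_open

# list the keys and flag UNet entries that are not attention-layer LoRA
with safe_open("CheapCotton.safetensors", framework="pt", device="cpu") as f:
    keys = list(f.keys())

non_attn_unet = [k for k in keys if k.startswith("lora_unet") and "attn" not in k]
print(len(keys), "keys total;", len(non_attn_unet), "UNet keys that are not attention LoRA")
print(non_attn_unet[:10])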

lora of sdxl

Thanks for your good work! Can this script convert a LoRA safetensors file for SDXL to the diffusers format?

can't open file '/content/./scripts/convert_original_stable_diffusion_to_diffusers.py': [Errno 2] No such file or directory

!cd ./diffusers

checkpoint_path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v181.safetensors"
dump_path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Converted-Model"

!python ./scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "{checkpoint_path}" --dump_path "{dump_path}" --from_safetensors

This throws an error:
Error: python3: can't open file '/content/./scripts/convert_original_stable_diffusion_to_diffusers.py': [Errno 2] No such file or directory

I am a beginner and don't know much about diffusers and the rest of the tooling, so please kindly help me.
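One thing worth checking (an assumption about the cause, not confirmed in this thread): in Colab, !cd ./diffusers runs in its own throwaway subshell, so the working directory is unchanged when the next command runs. Using %cd or an absolute path avoids that; a sketch, assuming the diffusers repo was cloned to /content/diffusers:

%cd /content/diffusers

checkpoint_path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/absolutereality_v181.safetensors"
dump_path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Converted-Model"

!python ./scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "{checkpoint_path}" --dump_path "{dump_path}" --from_safetensors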

TypeError: to() received an invalid combination of arguments - got (torch.dtype, NoneType), but expected one of:

Hi there,

I got the following error when I tried to use the script provided.

TypeError: to() received an invalid combination of arguments - got (torch.dtype,
NoneType), but expected one of:

  • (torch.device device, torch.dtype dtype, bool non_blocking, bool copy, *,
    torch.memory_format memory_format)
  • (torch.dtype dtype, bool non_blocking, bool copy, *, torch.memory_format
    memory_format)
  • (Tensor tensor, bool non_blocking, bool copy, *, torch.memory_format
    memory_format)

You can reproduce it with the following Colab link,
https://colab.research.google.com/drive/1BXOY6kjhU4qeRDfAM70GMDaqidqs-Jrg?usp=sharing

Any suggestions will be appreciated!

Is there a way to adjust the merging ratio?

In the diffusers LoRA documentation, you can adjust the merging ratio through cross_attention_kwargs.

Is there a way to do that with this safetensors approach?

If not, do you know how to convert a LoRA safetensors file to diffusers weights? I tried the scripts that convert ckpt/safetensors checkpoints to diffusers, and none of them worked.
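For reference, with a LoRA loaded via pipeline.unet.load_attn_procs (as in the conversion example above), diffusers exposes the merging ratio at inference time through cross_attention_kwargs. A minimal sketch, with a hypothetical prompt:

# scale = 0.0 disables the LoRA, 1.0 applies it fully; values in between blend
image = pipeline(
    "a photo of a cotton shirt",            # hypothetical prompt
    cross_attention_kwargs={"scale": 0.5},  # merging ratio
).images[0]
image.save("lora_scaled.png")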

size mismatch using the converted .bin file

Hi, thanks a lot for your great work. I am converting a LoRA file in safetensors format downloaded from civitai using your format_convert.py. Then I load the converted .bin file using pipe.unet.load_attn_procs, but I get the following error:

RuntimeError: Error(s) in loading state_dict for LoRACrossAttnProcessor:

size mismatch for to_q_lora.down.weight: copying a param with shape torch.Size([128, 320]) from checkpoint, the shape in current model is torch.Size([4, 320]).

It seems to be related to the config of the UNet's attention processor, but I could not find the corresponding documentation. Could you please provide some suggestions?
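This is not resolved in the thread, but the 128-vs-4 mismatch looks like a LoRA rank difference: the downloaded LoRA appears to use rank 128, while the default LoRACrossAttnProcessor rank is 4. A sketch for reading the rank out of the converted file (hypothetical path):

import torch

state_dict = torch.load("converted_lora.bin", map_location="cpu")  # hypothetical path
# the LoRA rank is the first dimension of any *.down.weight tensor
for name, tensor in state_dict.items():
    if name.endswith("down.weight"):
        print(name, "-> rank", tensor.shape[0])
        break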

to_out to to_out_0

Hi @haofanwang, thanks for the repo ❤

Maybe this part should be like this:

convert_name_to_safetensors

correct

lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_down.weight	
lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out_0.lora_up.weight

wrong

lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out.lora_down.weight
lora_unet_down_blocks_0_attentions_0_transformer_blocks_0_attn1_to_out.lora_up.weight
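A sketch of the rename this suggests (a hypothetical helper, not the repo's actual convert_name_to_safetensors code): when building the safetensors key names, the attention output projection should map to to_out_0 rather than to_out.

def fix_to_out(key: str) -> str:
    # e.g. ..._attn1_to_out.lora_down.weight -> ..._attn1_to_out_0.lora_down.weight
    return key.replace("_to_out.lora", "_to_out_0.lora")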

convert lora .bin weights to ckpt or safetensors

Another question: convert_lora_safetensor_to_diffusers.py converts safetensors to the diffusers format. After I trained a LoRA model, I have the following in the output folder and checkpoint subfolder:
[screenshots of the output folder and checkpoint subfolder contents]

How do I convert them into a safetensors file like the ones I downloaded from civitai or huggingface, so that I can use this via AUTOMATIC1111?

Thanks a lot!!
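This isn't answered in the thread, but here is a sketch of the file-format half of that conversion, assuming the training output is a diffusers-style file such as pytorch_lora_weights.bin (a hypothetical name). Note this only changes the container from .bin to .safetensors; AUTOMATIC1111 additionally expects kohya-style key names (lora_unet_* / lora_te_*), so a key renaming pass in the opposite direction of format_convert.py would still be needed.

import torch
from safetensors.torch import save_file

# hypothetical file names
state_dict = torch.load("pytorch_lora_weights.bin", map_location="cpu")
# safetensors requires contiguous CPU tensors
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "pytorch_lora_weights.safetensors")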

prompt example being truncated

With stable-diffusion-v1-5, the default tokenizer only allows 77 tokens. Are you getting the desired results with the prompt included in your wanostyle example? I'm seeing this warning and poor image results:

The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['_ offset : 1 >, closed shirt, anime screencap, scar under eye, ready to fight, black eyes']
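A quick way to check how far over the limit a prompt is (a sketch, assuming pipeline is the StableDiffusionPipeline in use and the wanostyle prompt is pasted in):

prompt = "..."  # put the wanostyle example prompt here
token_ids = pipeline.tokenizer(prompt).input_ids
print(len(token_ids), "tokens (the CLIP limit is 77, including start/end tokens)")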

LyCoris model

Thank you so much for this work!! It helps me a lot.
By the way, have you considered converting LyCORIS models to the diffusers API?

It seems that the safetensors to bin converter is not working with diffusers 0.15.0.

When I tried to load the converted file using load_attn_procs in version 0.15.0, an error occurred. However, it worked fine when I downgraded to version 0.14.0. Even after conversion, though, the resulting .bin file is less than half the size of the original, and I get bad images when loading it for inference (inference with the .bin is also very slow!!!!).

Error when running convert_original_stable_diffusion_to_diffusers

I run !python ./scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /content/drive/MyDrive/sd/stable-diffusion-webui/models/Lora/100img_1-5.safetensors --dump_path /content/drive/MyDrive/Stable_diffusion/stable_diffusion_lora_weights/lora-only-100img_1-5 --from_safetensors

And I get this error:

/usr/local/lib/python3.9/dist-packages/flax/core/frozen_dict.py:169: FutureWarning: jax.tree_util.register_keypaths is deprecated, and will be removed in a future release. Please use `register_pytree_with_keys()` instead.
  jax.tree_util.register_keypaths(
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
2023-03-26 13:58:12.913301: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-03-26 13:58:12.913397: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-03-26 13:58:12.913418: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
global_step key not found in model
Traceback (most recent call last):
  File "/content/diffusers/./scripts/convert_original_stable_diffusion_to_diffusers.py", line 128, in <module>
    pipe = download_from_original_stable_diffusion_ckpt(
  File "/usr/local/lib/python3.9/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1157, in download_from_original_stable_diffusion_ckpt
    converted_unet_checkpoint = convert_ldm_unet_checkpoint(
  File "/usr/local/lib/python3.9/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 379, in convert_ldm_unet_checkpoint
    new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'

Last time it was some other error...
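This is not resolved in the thread, but one observation (an assumption based on the paths and the traceback): the file being converted lives in the webui's Lora folder, while convert_original_stable_diffusion_to_diffusers.py expects a full Stable Diffusion checkpoint containing UNet keys such as model.diffusion_model.time_embed.0.weight. A quick check of what the file actually contains:

from safetensors.torch import safe_open

path = "/content/drive/MyDrive/sd/stable-diffusion-webui/models/Lora/100img_1-5.safetensors"
with safe_open(path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

print("has full UNet keys:", any(k.startswith("model.diffusion_model.") for k in keys))
print("first keys:", keys[:5])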

Error

So I'm running this:

python3 /root/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /sdcard/Insertion.safetensors --dump_path /root/Insert --from_safetensors

And getting this:

Traceback (most recent call last):
  File "/root/diffusers/./scripts/convert_original_stable_diffusion_to_diffusers.py", line 154, in <module>
    pipe = download_from_original_stable_diffusion_ckpt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: download_from_original_stable_diffusion_ckpt() got an unexpected keyword argument 'checkpoint_path_or_dict'

Any idea why?

I just want to be able to load a LoRA while using SD on Termux proot.
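One thing worth checking (an assumption, not confirmed in the thread): this usually means the cloned scripts/ directory and the installed diffusers package come from different versions, so the script passes a keyword argument the installed library does not yet accept. Bringing the two in line, for example by updating the installed package to match the cloned repo, may help:

pip install -U diffusers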
