
stablediffusionui's Introduction

Languages

C C++ C# Lua Python Rust Java

Markup Languages

Markdown HTML5

Frameworks

.Net XAML Unreal Engine

Game modding

CP77 STALKER Skyrim MC

ML/DL

NumPy ONNX PyTorch

OS

Linux Windows FreeBSD


stablediffusionui's Issues

[User Question - Advice Needed] Inpainting doesn't seem to work properly. How do I use it correctly?

Hello good people!

I looked through the wiki and searched the web, but couldn't figure out a solution on my own. I hope you can help me. Could somebody please advise how to inpaint with XUI correctly? I tried with positive and negative masks, but the changes are applied to the whole image and never to the masked or unmasked parts. I created the mask in paint.net: a PNG with transparency, erasing the pixels of the masked region with the eraser tool. Then I uploaded the source image, dragged and dropped the mask, described the desired changes ("blue dress" - the original is green) and clicked "Make!". However, the changes always seem to be applied to the whole picture and never to the masked area. I have the feeling I might be creating the masks wrongly, or that something is wrong with the models I am using, but this is only a guess. Any ideas? As a model I used analogMadness_v50.

Thanks a lot in advance for your help and support!
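
If the mask format is the problem: diffusers-style inpainting pipelines generally expect a single-channel black-and-white mask where white marks the region to repaint, not a PNG with transparent pixels. A minimal Python sketch for converting one into the other (assuming a paint.net-style RGBA mask whose masked region was erased; XUI's own mask handling may differ):

from PIL import Image, ImageOps

def alpha_to_inpaint_mask(path: str) -> Image.Image:
    # Erased pixels have alpha 0; inverting the alpha channel turns the
    # erased (masked) region white, which inpainting pipelines commonly
    # treat as "repaint here".
    rgba = Image.open(path).convert("RGBA")
    alpha = rgba.getchannel("A")
    return ImageOps.invert(alpha)

alpha_to_inpaint_mask("mask.png").save("mask_bw.png")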

Getting errors when trying to run models

I'm getting errors whenever I try to run a model using ONNX on a 6700 XT. I don't get this error when I run on CPU.

Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\sd_onnx_safe.py", line 28, in <module>
    pipe = GetPipe(opt.mdlpath, opt.mode, True, opt.nsfw, False)
  File "${Workspace}\repo\diffusion_scripts\sd_xbackend.py", line 57, in GetPipe
    pipe = OnnxStableDiffusionPipeline.from_pretrained(Model, custom_pipeline="lpw_stable_diffusion_onnx", provider=prov, safety_checker=nsfw_pipe)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 706, in from_pretrained
    config_dict = cls.load_config(cached_folder)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\configuration_utils.py", line 320, in load_config
    raise EnvironmentError(
OSError: Error no file named model_index.json found in directory ${Workspace}\models\onnx/deliberate_v2.
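
The OSError itself is clear: the backend was pointed at ${Workspace}\models\onnx/deliberate_v2, but that folder contains no model_index.json, which is only written once a conversion completes. A minimal pre-flight check in Python (the path is illustrative; adjust it to your install):

import os
import sys

model_dir = r"C:\XUI\models\onnx\deliberate_v2"  # illustrative path
index = os.path.join(model_dir, "model_index.json")
if not os.path.isfile(index):
    # The usual cause is a failed or interrupted conversion; re-import the model.
    sys.exit(f"missing {index}")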

Publishing/Including CUDA DLLs

Is it possible to package a version of this that includes all the necessary CUDA libraries and Python?
Is there a redistributable CUDA/Python runtime installer package, like there is for .NET?
It would be great to have a FAQ.

Log bug?

Hi,
while XUI is generating images, I constantly get the error message "log file used by another process", even though no other program is using it.
I tried deleting all the logs to be sure, but the same thing happens with the logs XUI creates by itself.
If I manage to stay away from the error messages I can continue, but as soon as I accidentally open one, the app crashes.

The program also doesn't upscale every generated picture.
Maybe it's because those were the ones giving errors?

a couple of new ideas

Hello.

Thank you for this gui!

I can see you've added lots of goodies, such as ControlNet. I know programming is hard, but could you please add:

  1. AnimateDiff - for making our pictures animated

  2. Panorama images - UPDATE: please ignore this request, as panorama/360 images can be created by prompting alone, so it is no longer needed

It would be great if these worked on CPU too.

Also, could you please make your UI include everything in future releases, so all we have to do is extract and use it, without downloading/installing Python to fetch torch and the other dependencies? NMKD SD could serve as an example, as it has everything bundled.

kind regards

No models show

I'm trying this out, but I can't see any models after adding them to the models directory.
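
Assuming XUI only lists folders that contain a converted diffusers/ONNX pipeline (an assumption on my part, not confirmed behavior), a quick way to check what a scanner would find is to look for model_index.json under the models tree:

import os

models_root = r"C:\XUI\models"  # illustrative path; adjust to your install
for root, dirs, files in os.walk(models_root):
    if "model_index.json" in files:
        print("pipeline folder:", root)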

3.1.2 Doesn't download ONNX packages.

After running XUI.exe, it asks to install the ONNX packages, but instead of running the install, it just starts the GUI.

(Also, the 3.1.2 release is named "XUI.3.1.7z" rather than "XUI.3.1.2.7z".)

ONNX error

RTL be like: 19045.10.0
Name - NVIDIA GeForce GTX 1650
DeviceID - VideoController1
AdapterRAM - 4293918720
AdapterDACType - Integrated RAMDAC
Monochrome - False
DriverVersion - 31.0.15.3598
VideoProcessor - NVIDIA GeForce GTX 1650
VideoArchitecture - 5
VideoMemoryType - 2
${Workspace}\repo\onnx.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
You have disabled the safety checker for <class 'diffusers_modules.local.lpw_stable_diffusion_onnx.OnnxStableDiffusionLongPromptWeightingPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
Current device: onnx
txt2img
SD: Model preload: done
Load custom vae
Prompt: 1girl
Neg prompt:
Set seed to 830119422
txt2img

0%| | 0/20 [00:00<?, ?it/s]
5%|5 | 1/20 [00:12<04:06, 12.98s/it]
5%|5 | 1/20 [00:15<04:46, 15.07s/it]
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\sd_onnx_safe.py", line 109, in <module>
    PipeDevice.MakeImage(pipe, data['Mode'], eta, prompt_tokens + data['Prompt'], prompt_neg_tokens + data['NegPrompt'], data['Steps'], data['Width'], data['Height'], seed, data['CFG'], data['ImgCFGScale'], data['Image'], data['ImgScale'], data['Mask'], data['WorkingDir'], data['BatchSize'])
  File "${Workspace}\repo\diffusion_scripts\sd_xbackend.py", line 368, in MakeImage
    image=pipe(prompt=[prompt] * batch_size, height=height, width=width, num_inference_steps=steps, guidance_scale=scale, negative_prompt=prompt_neg, eta=eta, generator=rng)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\x3gol\.cache\huggingface\modules\diffusers_modules\local\lpw_stable_diffusion_onnx.py", line 813, in __call__
    noise_pred = self.unet(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\onnx_utils.py", line 60, in __call__
    return self.model.run(None, inputs)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 210, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.Fail
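
The generic onnxruntime_pybind11_state.Fail raised from unet.run() carries no reason on its own; on a 4 GB GTX 1650 the usual suspects are a missing GPU execution provider package or running out of memory mid-generation. A quick hedged check of what this onnxruntime build can actually use:

import onnxruntime as ort

# If only CPUExecutionProvider is listed, the GPU provider package
# (e.g. onnxruntime-directml) is not installed in the venv.
print(ort.get_available_providers())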

how to fix this error

Host started...

RTL be like: 22621.10.0
Name - NVIDIA GeForce GTX 1650
DeviceID - VideoController1
AdapterRAM - 4293918720
AdapterDACType - Integrated RAMDAC
Monochrome - False
DriverVersion - 31.0.15.3179
VideoProcessor - NVIDIA GeForce GTX 1650
VideoArchitecture - 5
VideoMemoryType - 2
${Workspace}\repo\cuda.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
You have passed a non-standard module StableDiffusionSafetyChecker(
  (vision_model): CLIPVisionModel(
    (vision_model): CLIPVisionTransformer(
      (embeddings): CLIPVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
        (position_embedding): Embedding(257, 1024)
      )
      (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
      (encoder): CLIPEncoder(
        (layers): ModuleList(
          (0-23): 24 x CLIPEncoderLayer(
            (self_attn): CLIPAttention(
              (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            (mlp): CLIPMLP(
              (activation_fn): QuickGELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          )
        )
      )
      (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
  )
  (visual_projection): Linear(in_features=1024, out_features=768, bias=False)
). We cannot verify whether it has the correct type
Current device: cuda
txt2img
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\sd_cuda_safe.py", line 24, in <module>
    pipe.to(PipeDevice.device, fptype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 670, in to
    module.to(torch_device, torch_dtype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
You have passed a non-standard module StableDiffusionSafetyChecker(
  (vision_model): CLIPVisionModel(
    (vision_model): CLIPVisionTransformer(
      (embeddings): CLIPVisionEmbeddings(
        (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)
        (position_embedding): Embedding(257, 1024)
      )
      (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
      (encoder): CLIPEncoder(
        (layers): ModuleList(
          (0-23): 24 x CLIPEncoderLayer(
            (self_attn): CLIPAttention(
              (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
              (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
            )
            (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            (mlp): CLIPMLP(
              (activation_fn): QuickGELUActivation()
              (fc1): Linear(in_features=1024, out_features=4096, bias=True)
              (fc2): Linear(in_features=4096, out_features=1024, bias=True)
            )
            (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          )
        )
      )
      (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
  )
  (visual_projection): Linear(in_features=1024, out_features=768, bias=False)
). We cannot verify whether it has the correct type
${Workspace}\repo\cuda.venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
Current device: cuda
txt2img
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\sd_cuda_safe.py", line 24, in <module>
    pipe.to(PipeDevice.device, fptype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 670, in to
    module.to(torch_device, torch_dtype)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "${Workspace}\repo\cuda.venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
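
The traceback is cut off before the exception text, but failing inside t.to(device, ...) while moving the pipeline to CUDA on a 4 GB GTX 1650 is most often an out-of-memory condition. A minimal sketch of the standard low-VRAM setup in diffusers (not XUI's actual code; the model id is illustrative):

import torch
from diffusers import StableDiffusionPipeline

# fp16 weights roughly halve the memory needed to hold the pipeline on the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_attention_slicing()  # compute attention in slices: slower, much less VRAM
pipe = pipe.to("cuda")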

Error while importing model

  1. When importing a model, the UI is confusing: there is no progress bar, the import button doesn't grey out, etc.

  2. After a while I got this output:

======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ========================
ERROR: missing-standard-symbolic-function
=========================================
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
None

Traceback (most recent call last):
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 506, in export
    _export(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\onnx\utils.py", line 1901, in _run_symbolic_function
    raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
${Workspace}\models\shark>exit
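
This usually means the converter runs under a PyTorch version whose ONNX exporter cannot yet export aten::scaled_dot_product_attention at opset 14. A hedged workaround sketch (the model path is illustrative; upgrading PyTorch is the other common fix): switch the UNet to the classic attention processor before export, so the traced graph contains plain matmul/softmax ops instead of the fused SDPA op.

from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor

pipe = StableDiffusionPipeline.from_pretrained("./models/diffusers/some_model")  # illustrative
pipe.unet.set_attn_processor(AttnProcessor())  # classic attention, exportable at opset 14
# ...then run the existing torch.onnx.export-based conversion on pipe.unet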

Thanks

ONNX

Why can't I choose ONNX or Vulkan?
I'm using a Vega 56 GPU.

(screenshot attached: Screenshot_43)

sd_onnx_safe.py: error:

After converting a model and trying to make an image, I got this error.

(screenshot attached: Screenshot 2024-01-11 150848)

I can't use the txt2img prompt. Does anyone know how to fix this?

I can't install models

How can I import the normal models that are downloaded from any site?
When I try to do it, the import is rejected.

Startup extract ckpt(D:\SD\stable-diffusion-webui\models\Stable-diffusion\abyssorangemix3AOM3_aom3a1.safetensors).....

Microsoft Windows [Version 10.0.19045.3208]
(c) Microsoft Corporation. All rights reserved.
global_step key not found in model
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\convert\convert_original_stable_diffusion_to_diffusers.py", line 103, in <module>
    pipe = download_from_original_stable_diffusion_ckpt(
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1291, in download_from_original_stable_diffusion_ckpt
    text_model = convert_ldm_clip_checkpoint(checkpoint)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 737, in convert_ldm_clip_checkpoint
    text_model.load_state_dict(text_model_dict)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
  Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".
${Workspace}\repo>"onnx.venv/Scripts/python.exe" "./diffusion_scripts/convert/convert_diffusers_to_onnx.py" --model_path="D:/SDTEST/XUI.3.3.Preview/models/diffusers/abyssorangemix3AOM3_aom3a1" --output_path="${Workspace}\models\onnx/abyssorangemix3AOM3_aom3a1"
${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\models\cross_attention.py:30: FutureWarning: Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.
deprecate(
${Workspace}\models\onnx/abyssorangemix3AOM3_aom3a1
Traceback (most recent call last):
  File "${Workspace}\repo\diffusion_scripts\convert\convert_diffusers_to_onnx.py", line 366, in <module>
    convert_models(args.model_path, args.output_path, args.opset, args.fp16)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "${Workspace}\repo\diffusion_scripts\convert\convert_diffusers_to_onnx.py", line 80, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 903, in from_pretrained
    config_dict = cls.load_config(cached_folder)
  File "${Workspace}\repo\onnx.venv\lib\site-packages\diffusers\configuration_utils.py", line 350, in load_config
    raise EnvironmentError(
OSError: Error no file named model_index.json found in directory D:/SDTEST/XUI.3.3.Preview/models/diffusers/abyssorangemix3AOM3_aom3a1.
${Workspace}\repo>exit
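
The first traceback is the real failure: checkpoints saved with newer transformers releases carry a text_model.embeddings.position_ids buffer that the CLIPTextModel in this environment rejects, so the ckpt-to-diffusers conversion aborts, no diffusers folder is written, and the ONNX step then cannot find model_index.json. A hedged sketch of the usual fix at the point of failure (text_model and text_model_dict are the names from convert_ldm_clip_checkpoint in the traceback):

def load_clip_text_model(text_model, text_model_dict):
    # Drop the stale buffer before loading; pinning an older transformers
    # version is the other common workaround.
    text_model_dict.pop("text_model.embeddings.position_ids", None)
    text_model.load_state_dict(text_model_dict)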
