
unstablefusion's People

Contributors

ahrm, codefaux, eahenle, jeffmcjunkin, snekcode, undefdev, wertzui123, xbagon, zerocool940711


unstablefusion's Issues

Make the canvas scrollable.

Hi there, I found this repo recently and after testing it a bit I fell in love with the way it works and the results you can get with it. I've tested other similar inpainting implementations, but this one is pretty solid. One thing I did notice is the lack of scrolling on the canvas, the area where the image is drawn/generated. With this tool it is possible to slowly create a huge image in chunks with anything we want on it, but not being able to scroll sideways or up and down limits the workable area and therefore the size of what we can create. Scroll bars on the side and bottom of the canvas so we can move it would be awesome, and the possibilities with it would be endless. Even better would be a shortcut, such as the middle mouse button, for panning in the mouse direction. An extra thing that would be nice to have is zooming in and out. I really hope some of these features can be added, as they would improve things a lot. Thanks for your time, and thanks for making this :)
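For what it's worth, a minimal sketch of the kind of change this would need, assuming a PyQt5 canvas widget (the names below are illustrative, not the repo's actual ones):

import sys
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication, QLabel, QScrollArea

app = QApplication(sys.argv)

# Stand-in for the real painting widget: a canvas bigger than the window.
canvas = QLabel()
pixmap = QPixmap(2048, 2048)
pixmap.fill(Qt.white)
canvas.setPixmap(pixmap)
canvas.adjustSize()

# Wrapping the canvas in a QScrollArea gives horizontal/vertical scroll bars
# for free; middle-button panning and zooming would still need custom event handling.
scroll = QScrollArea()
scroll.setWidget(canvas)
scroll.resize(800, 600)
scroll.show()
sys.exit(app.exec_())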

Use my own model?

Can you provide instructions for using this without downloading the model from Hugging Face?

I don't want to use my API key

Any prompt "Torch not compiled with CUDA enabled"

Fetching 16 files: 100%|█████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 3181.27it/s]
Traceback (most recent call last):
  File "unstablefusion.py", line 889, in handle_inpaint_button
    inpainted_image = self.get_handler().inpaint(prompt,
  File "unstablefusion.py", line 436, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "unstablefusion.py", line 318, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "unstablefusion.py", line 301, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
  File "E:\AI\SD\SDUI\UnstableFusion-main\diffusionserver.py", line 27, in __init__
    self.text2img = StableDiffusionPipeline.from_pretrained(
  File "e:\Anaconda3\envs\ldm\lib\site-packages\diffusers\pipeline_utils.py", line 179, in to
    module.to(torch_device)
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 907, in to
    return self._apply(convert)
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
    module._apply(fn)
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
    module._apply(fn)
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 601, in _apply
    param_applied = fn(param)
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 905, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "e:\Anaconda3\envs\ldm\lib\site-packages\torch\cuda\__init__.py", line 210, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
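This assertion means the installed PyTorch build is CPU-only, not that the code is wrong. A quick sanity check (the pip command in the comment is one common remedy; adjust the CUDA version to your system):

import torch

print(torch.__version__)          # CPU-only builds typically carry a "+cpu" suffix
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # must be True for local generation to work
# If this prints False/None, reinstall a CUDA-enabled build, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/cu118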

Crash when clicking blank canvas

Just as listed - start the app, click the blank window anywhere. App freezes, app explodes. Implodes? Anyway, lol

Current commit is b7ae990 (If I did that right)

  File "C:\Code\git\UnstableFusion\unstablefusion.py", line 633, in mousePressEvent
    pos = self.window_to_image_point(e.pos())
  File "C:\Code\git\UnstableFusion\unstablefusion.py", line 660, in window_to_image_point
    return QPoint(new_x, new_y)
TypeError: arguments did not match any overloaded call:
  QPoint(): too many arguments
  QPoint(int, int): argument 1 has unexpected type 'float'
  QPoint(QPoint): argument 1 has unexpected type 'float'
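This is the newer-PyQt5/Python strictness about implicit float-to-int conversion; a sketch of the likely one-line fix in window_to_image_point (simplified signature, not the repo's exact code):

from PyQt5.QtCore import QPoint

def window_to_image_point(new_x: float, new_y: float) -> QPoint:
    # QPoint(int, int) rejects floats on recent PyQt5/Python combinations,
    # so truncate the computed coordinates explicitly.
    return QPoint(int(new_x), int(new_y))

print(window_to_image_point(12.7, 3.2))  # PyQt5.QtCore.QPoint(12, 3)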

Installed requirements, but getting this...

$ python unstablefusion.py
QObject::moveToThread: Current thread (0x560370c73760) is not the object's thread (0x560374ec5b20).
Cannot move to target thread (0x560370c73760)

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/user/anaconda3/lib/python3.9/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.

Aborted (core dumped)
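Note the plugin path under .../cv2/qt/plugins: this usually means opencv-python's bundled Qt plugins are shadowing PyQt5's. Two common workarounds (a sketch, not specific to this repo):

import os
from PyQt5.QtCore import QLibraryInfo

# Point Qt back at PyQt5's own plugin directory before the app starts.
os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = QLibraryInfo.location(QLibraryInfo.PluginsPath)

# Alternatively, swap in the headless OpenCV build, which ships no Qt plugins:
#   pip uninstall opencv-python && pip install opencv-python-headless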

Add button to completely clear the canvas.

This is a minor improvement, but sometimes we might just want to start over and remove everything on the canvas. Unless I missed the option, right now we can only either manually erase everything on the canvas or restart the app, which unloads the model from memory. A button to clear the canvas and start over would be a nice feature to have (a minimal sketch follows below). A few other things that would be awesome: running the inference/generation on a different thread so the GUI doesn't become blocked and unresponsive, which could potentially let the image update as it is being generated; and a button to stop a generation, for when we started it by accident or realize it will take too long and need to adjust some options and rerun it.
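The clear button would probably only take a few lines; here is a sketch under the assumption that the canvas keeps its contents in an RGBA numpy array (the attribute names are hypothetical):

import numpy as np

def handle_clear_button(self):
    # Reset the canvas to fully transparent pixels without unloading the model.
    h, w = self.canvas_image.shape[:2]               # hypothetical RGBA array attribute
    self.canvas_image = np.zeros((h, w, 4), dtype=np.uint8)
    self.update()                                    # ask Qt to repaint the widget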

UserWarning: CUDA initialization: CUDA unknown error

Getting this error immediately after running python unstablefusion.py, gui is still loaded:

/home/user/anaconda3/lib/python3.9/site-packages/torch/cuda/__init__.py:88: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at /opt/conda/conda-bld/pytorch_1665040357079/work/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
/home/user/anaconda3/lib/python3.9/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.3)
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"

and this one after hitting "Generate":

Fetching 16 files: 100%|███████████████████████████████████| 16/16 [00:00<00:00, 79512.87it/s]
Traceback (most recent call last):
  File "/home/user/Developer/UnstableFusion/unstablefusion.py", line 856, in handle_generate_button
    image = self.get_handler().generate(prompt,
  File "/home/user/Developer/UnstableFusion/unstablefusion.py", line 436, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "/home/user/Developer/UnstableFusion/unstablefusion.py", line 318, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "/home/user/Developer/UnstableFusion/unstablefusion.py", line 301, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
  File "/home/user/Developer/UnstableFusion/diffusionserver.py", line 27, in __init__
    self.text2img = StableDiffusionPipeline.from_pretrained(
  File "/home/user/anaconda3/lib/python3.9/site-packages/diffusers/pipeline_utils.py", line 179, in to
    module.to(torch_device)
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 987, in to
    return self._apply(convert)
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 662, in _apply
    param_applied = fn(param)
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 985, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/home/user/anaconda3/lib/python3.9/site-packages/torch/cuda/__init__.py", line 227, in _lazy_init
    torch._C._cuda_init()
RuntimeError: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero.
  • RTX 3090
  • CUDA 11.8
  • Ubuntu 22.04
  • nvidia-smi output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05    Driver Version: 520.61.05    CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0  On |                  N/A |
| 44%   59C    P0   106W / 350W |   4860MiB / 24576MiB |     14%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2358      G   /usr/lib/xorg/Xorg               3038MiB |
|    0   N/A  N/A      2714      G   /usr/bin/gnome-shell              442MiB |
|    0   N/A  N/A    177728      G   ...187800556795193677,131072      924MiB |
|    0   N/A  N/A    177762      G   ...AAAAAAAAA= --shared-files      125MiB |
|    0   N/A  N/A    224062      G   ...RendererForSitePerProcess      228MiB |
+-----------------------------------------------------------------------------+
  • conda list output:
Conda env installed packages
# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                        main  
_openmp_mutex             5.1                       1_gnu  
anyio                     3.5.0           py310h06a4308_0  
argon2-cffi               21.3.0             pyhd3eb1b0_0  
argon2-cffi-bindings      21.2.0          py310h7f8727e_0  
asttokens                 2.0.5              pyhd3eb1b0_0  
attrs                     21.4.0             pyhd3eb1b0_0  
babel                     2.9.1              pyhd3eb1b0_0  
backcall                  0.2.0              pyhd3eb1b0_0  
beautifulsoup4            4.11.1          py310h06a4308_0  
blas                      1.0                         mkl  
bleach                    4.1.0              pyhd3eb1b0_0  
brotlipy                  0.7.0           py310h7f8727e_1002  
bzip2                     1.0.8                h7b6447c_0  
ca-certificates           2022.07.19           h06a4308_0  
certifi                   2022.9.24       py310h06a4308_0  
cffi                      1.15.1          py310h74dc2b5_0  
charset-normalizer        2.0.4              pyhd3eb1b0_0  
cryptography              37.0.1          py310h9ce1e76_0  
cuda                      11.7.1                        0    nvidia
cuda-cccl                 11.7.91                       0    nvidia
cuda-command-line-tools   11.7.1                        0    nvidia
cuda-compiler             11.7.1                        0    nvidia
cuda-cudart               11.7.99                       0    nvidia
cuda-cudart-dev           11.7.99                       0    nvidia
cuda-cuobjdump            11.7.91                       0    nvidia
cuda-cupti                11.7.101                      0    nvidia
cuda-cuxxfilt             11.7.91                       0    nvidia
cuda-demo-suite           11.8.86                       0    nvidia
cuda-documentation        11.8.86                       0    nvidia
cuda-driver-dev           11.7.99                       0    nvidia
cuda-gdb                  11.8.86                       0    nvidia
cuda-libraries            11.7.1                        0    nvidia
cuda-libraries-dev        11.7.1                        0    nvidia
cuda-memcheck             11.8.86                       0    nvidia
cuda-nsight               11.8.86                       0    nvidia
cuda-nsight-compute       11.8.0                        0    nvidia
cuda-nvcc                 11.7.99                       0    nvidia
cuda-nvdisasm             11.8.86                       0    nvidia
cuda-nvml-dev             11.7.91                       0    nvidia
cuda-nvprof               11.8.87                       0    nvidia
cuda-nvprune              11.7.91                       0    nvidia
cuda-nvrtc                11.7.99                       0    nvidia
cuda-nvrtc-dev            11.7.99                       0    nvidia
cuda-nvtx                 11.7.91                       0    nvidia
cuda-nvvp                 11.8.87                       0    nvidia
cuda-runtime              11.7.1                        0    nvidia
cuda-sanitizer-api        11.8.86                       0    nvidia
cuda-toolkit              11.7.1                        0    nvidia
cuda-tools                11.7.1                        0    nvidia
cuda-visual-tools         11.7.1                        0    nvidia
dbus                      1.13.18              hb2f20db_0  
debugpy                   1.5.1           py310h295c915_0  
decorator                 5.1.1              pyhd3eb1b0_0  
defusedxml                0.7.1              pyhd3eb1b0_0  
entrypoints               0.4             py310h06a4308_0  
executing                 0.8.3              pyhd3eb1b0_0  
expat                     2.4.9                h6a678d5_0  
ffmpeg                    4.2.2                h20bf706_0  
fontconfig                2.13.1               h6c09931_0  
freetype                  2.11.0               h70c0345_0  
gds-tools                 1.4.0.31                      0    nvidia
giflib                    5.2.1                h7b6447c_0  
glib                      2.69.1               h4ff587b_1  
gmp                       6.2.1                h295c915_3  
gnutls                    3.6.15               he1e5248_0  
gst-plugins-base          1.14.0               h8213a91_2  
gstreamer                 1.14.0               h28cd5cc_2  
icu                       58.2                 he6710b0_3  
idna                      3.3                pyhd3eb1b0_0  
intel-openmp              2021.4.0          h06a4308_3561  
ipykernel                 6.15.2          py310h06a4308_0  
ipython                   8.4.0           py310h06a4308_0  
ipython_genutils          0.2.0              pyhd3eb1b0_1  
ipywidgets                7.6.5              pyhd3eb1b0_1  
jedi                      0.18.1          py310h06a4308_1  
jinja2                    3.0.3              pyhd3eb1b0_0  
jpeg                      9e                   h7f8727e_0  
json5                     0.9.6              pyhd3eb1b0_0  
jsonschema                4.16.0          py310h06a4308_0  
jupyter                   1.0.0           py310h06a4308_8  
jupyter_client            7.3.5           py310h06a4308_0  
jupyter_console           6.4.3              pyhd3eb1b0_0  
jupyter_core              4.11.1          py310h06a4308_0  
jupyter_server            1.18.1          py310h06a4308_0  
jupyterlab                3.4.4           py310h06a4308_0  
jupyterlab_pygments       0.1.2                      py_0  
jupyterlab_server         2.15.2          py310h06a4308_0  
jupyterlab_widgets        1.0.0              pyhd3eb1b0_1  
krb5                      1.19.2               hac12032_0  
lame                      3.100                h7b6447c_0  
lcms2                     2.12                 h3be6417_0  
ld_impl_linux-64          2.38                 h1181459_1  
lerc                      3.0                  h295c915_0  
libclang                  10.0.1          default_hb85057a_2  
libcublas                 11.11.3.6                     0    nvidia
libcublas-dev             11.11.3.6                     0    nvidia
libcufft                  10.9.0.58                     0    nvidia
libcufft-dev              10.9.0.58                     0    nvidia
libcufile                 1.4.0.31                      0    nvidia
libcufile-dev             1.4.0.31                      0    nvidia
libcurand                 10.3.0.86                     0    nvidia
libcurand-dev             10.3.0.86                     0    nvidia
libcusolver               11.4.1.48                     0    nvidia
libcusolver-dev           11.4.1.48                     0    nvidia
libcusparse               11.7.5.86                     0    nvidia
libcusparse-dev           11.7.5.86                     0    nvidia
libdeflate                1.8                  h7f8727e_5  
libedit                   3.1.20210910         h7f8727e_0  
libevent                  2.1.12               h8f2d780_0  
libffi                    3.3                  he6710b0_2  
libgcc-ng                 11.2.0               h1234567_1  
libgomp                   11.2.0               h1234567_1  
libidn2                   2.3.2                h7f8727e_0  
libllvm10                 10.0.1               hbcb73fb_5  
libnpp                    11.8.0.86                     0    nvidia
libnpp-dev                11.8.0.86                     0    nvidia
libnvjpeg                 11.9.0.86                     0    nvidia
libnvjpeg-dev             11.9.0.86                     0    nvidia
libopus                   1.3.1                h7b6447c_0  
libpng                    1.6.37               hbc83047_0  
libpq                     12.9                 h16c4e8d_3  
libsodium                 1.0.18               h7b6447c_0  
libstdcxx-ng              11.2.0               h1234567_1  
libtasn1                  4.16.0               h27cfd23_0  
libtiff                   4.4.0                hecacb30_0  
libunistring              0.9.10               h27cfd23_0  
libuuid                   1.0.3                h7f8727e_2  
libvpx                    1.7.0                h439df22_0  
libwebp                   1.2.2                h55f646e_0  
libwebp-base              1.2.2                h7f8727e_0  
libxcb                    1.15                 h7f8727e_0  
libxkbcommon              1.0.1                hfa300c1_0  
libxml2                   2.9.14               h74e7548_0  
libxslt                   1.1.35               h4e12654_0  
lz4-c                     1.9.3                h295c915_1  
markupsafe                2.1.1           py310h7f8727e_0  
matplotlib-inline         0.1.6           py310h06a4308_0  
mistune                   0.8.4           py310h7f8727e_1000  
mkl                       2021.4.0           h06a4308_640  
mkl-service               2.4.0           py310h7f8727e_0  
mkl_fft                   1.3.1           py310hd6ae3a3_0  
mkl_random                1.2.2           py310h00e6091_0  
nbclassic                 0.3.5              pyhd3eb1b0_0  
nbclient                  0.5.13          py310h06a4308_0  
nbconvert                 6.4.4           py310h06a4308_0  
nbformat                  5.5.0           py310h06a4308_0  
ncurses                   6.3                  h5eee18b_3  
nest-asyncio              1.5.5           py310h06a4308_0  
nettle                    3.7.3                hbbd107a_1  
notebook                  6.4.12          py310h06a4308_0  
nsight-compute            2022.3.0.22                   0    nvidia
nspr                      4.33                 h295c915_0  
nss                       3.74                 h0370c37_0  
numpy                     1.23.1          py310h1794996_0  
numpy-base                1.23.1          py310hcba007f_0  
openh264                  2.1.1                h4ff587b_0  
openssl                   1.1.1q               h7f8727e_0  
packaging                 21.3               pyhd3eb1b0_0  
pandocfilters             1.5.0              pyhd3eb1b0_0  
parso                     0.8.3              pyhd3eb1b0_0  
pcre                      8.45                 h295c915_0  
pexpect                   4.8.0              pyhd3eb1b0_3  
pickleshare               0.7.5           pyhd3eb1b0_1003  
pillow                    9.2.0           py310hace64e9_1  
pip                       22.2.2          py310h06a4308_0  
ply                       3.11            py310h06a4308_0  
prometheus_client         0.14.1          py310h06a4308_0  
prompt-toolkit            3.0.20             pyhd3eb1b0_0  
prompt_toolkit            3.0.20               hd3eb1b0_0  
psutil                    5.9.0           py310h5eee18b_0  
ptyprocess                0.7.0              pyhd3eb1b0_2  
pure_eval                 0.2.2              pyhd3eb1b0_0  
pycparser                 2.21               pyhd3eb1b0_0  
pygments                  2.11.2             pyhd3eb1b0_0  
pyopenssl                 22.0.0             pyhd3eb1b0_0  
pyparsing                 3.0.9           py310h06a4308_0  
pyqt                      5.15.7          py310h6a678d5_1  
pyqt5-sip                 12.11.0                  pypi_0    pypi
pyrsistent                0.18.0          py310h7f8727e_0  
pysocks                   1.7.1           py310h06a4308_0  
python                    3.10.6               haa1d7c7_0  
python-dateutil           2.8.2              pyhd3eb1b0_0  
python-fastjsonschema     2.16.2          py310h06a4308_0  
pytorch                   1.14.0.dev20221009    py3.10_cpu_0    pytorch-nightly
pytorch-cuda              11.7                 h67b0de4_0    pytorch-nightly
pytorch-mutex             1.0                         cpu    pytorch-nightly
pytz                      2022.1          py310h06a4308_0  
pyzmq                     23.2.0          py310h6a678d5_0  
qt-main                   5.15.2               h327a75a_7  
qt-webengine              5.15.9               hd2b0992_4  
qtconsole                 5.3.2           py310h06a4308_0  
qtpy                      2.2.0           py310h06a4308_0  
qtwebkit                  5.212                h4eab89a_4  
readline                  8.1.2                h7f8727e_1  
requests                  2.28.1          py310h06a4308_0  
send2trash                1.8.0              pyhd3eb1b0_1  
setuptools                63.4.1          py310h06a4308_0  
sip                       6.6.2           py310h6a678d5_0  
six                       1.16.0             pyhd3eb1b0_1  
sniffio                   1.2.0           py310h06a4308_1  
soupsieve                 2.3.1              pyhd3eb1b0_0  
sqlite                    3.39.3               h5082296_0  
stack_data                0.2.0              pyhd3eb1b0_0  
terminado                 0.13.1          py310h06a4308_0  
testpath                  0.6.0           py310h06a4308_0  
tk                        8.6.12               h1ccaba5_0  
toml                      0.10.2             pyhd3eb1b0_0  
torchaudio                0.13.0.dev20221009       py310_cpu    pytorch-nightly
torchvision               0.15.0.dev20221009       py310_cpu    pytorch-nightly
tornado                   6.2             py310h5eee18b_0  
traitlets                 5.1.1              pyhd3eb1b0_0  
typing-extensions         4.3.0           py310h06a4308_0  
typing_extensions         4.3.0           py310h06a4308_0  
tzdata                    2022c                h04d1e81_0  
urllib3                   1.26.11         py310h06a4308_0  
wcwidth                   0.2.5              pyhd3eb1b0_0  
webencodings              0.5.1           py310h06a4308_1  
websocket-client          0.58.0          py310h06a4308_4  
wheel                     0.37.1             pyhd3eb1b0_0  
widgetsnbextension        3.5.2           py310h06a4308_0  
x264                      1!157.20191217       h7b6447c_0  
xz                        5.2.6                h5eee18b_0  
zeromq                    4.3.4                h2531618_0  
zlib                      1.2.12               h5eee18b_3  
zstd                      1.5.2                ha4553b6_0

Please advise!

P.S. Can I use the checkpoint I've got downloaded already?

Add support for the latest version of the diffusers library.

It seems like the latest version of the diffusers library has some huge performance improvements. I tried modifying the code to work with it and had partial success: I was able to get generation working, but not inpainting. With diffusers 0.5.0 I was getting about 1.6 it/s, while 0.6.0 gave me around 3-5 it/s. What I did was just change the lines with ["sample"][0] to [0][0], so for example

im = self.text2img(
                prompt=prompt,
                width=512,
                height=512,
                strength=strength,
                num_inference_steps=steps,
                guidance_scale=guidance_scale,
                callback=callback,
                negative_prompt=negative_prompt,
                generator=self.get_generator(seed)
            )["sample"][0]

I replaced with

im = self.text2img(
                prompt=prompt,
                width=512,
                height=512,
                strength=strength,
                num_inference_steps=steps,
                guidance_scale=guidance_scale,
                callback=callback,
                negative_prompt=negative_prompt,
                generator=self.get_generator(seed)
            )[0][0]

In theory this should return the correct image, but for some reason inpainting doesn't work; generation and reimagining do work, though. My guess is that the mask used for inpainting doesn't match the input image or the unet config. This is the error I get when I try to run inpainting with the modifications mentioned above:

ValueError: Incorrect configuration settings! The config of `pipeline.unet`: FrozenDict([('sample_size', 64), ('in_channels', 4), 
('out_channels', 4), ('center_input_sample', False), ('flip_sin_to_cos', True), ('freq_shift', 0),
 ('down_block_types', ['CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D']), ('up_block_types', ['UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D']), ('block_out_channels', [320, 640, 1280, 1280]), 
('layers_per_block', 2), ('downsample_padding', 1), ('mid_block_scale_factor', 1), ('act_fn', 'silu'), ('norm_num_groups', 32), ('norm_eps', 
1e-05), ('cross_attention_dim', 768), ('attention_head_dim', 8), ('_class_name', 'UNet2DConditionModel'), ('_diffusers_version', '0.6.0'), 
('_name_or_path', 
'C:\\Users\\ZeroCool\\.cache\\huggingface\\diffusers\\models--CompVis--stable-diffusion-v1-4\\snapshots\\a304b1ab1b59dd6c3ba9c40705c29c6de4144096\\unet')]) expects 4 but received `num_channels_latents`: 4 + `num_channels_mask`: 1 + `num_channels_masked_image`: 4 = 9.
 Please verify the config of `pipeline.unet` or your `mask_image` or `image` input.

Hope this helps reduce the amount of work needed to add support for the latest diffusers. Thanks for your time and have a good day.
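For reference, in the diffusers versions tested here (0.5/0.6) the pipelines return a StableDiffusionPipelineOutput, so the forward-compatible accessor is the named images field rather than positional indexing (a sketch based on the snippet above):

result = self.text2img(
    prompt=prompt,
    width=512,
    height=512,
    num_inference_steps=steps,
    guidance_scale=guidance_scale,
    negative_prompt=negative_prompt,
    generator=self.get_generator(seed),
)
im = result.images[0]  # named field survives output-format changes better than [0][0]

As for the inpainting failure: the 4-vs-9-channel mismatch in the error suggests the new StableDiffusionInpaintPipeline expects the dedicated 9-input-channel inpainting checkpoint, while the v1-4 unet has 4 input channels; with the v1-4 weights, StableDiffusionInpaintPipelineLegacy is probably the pipeline to use.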

Missing LICENSE

I see you have no LICENSE file for this project. The default is copyright.

I would suggest releasing the code under the GPL-3.0-or-later or AGPL-3.0-or-later license so that others are encouraged to contribute changes back to your project.

Connect to Automatic1111's implementation as back end?

I've been successfully running Stable Diffusion locally using Automatic1111's webui (https://github.com/AUTOMATIC1111/stable-diffusion-webui) and wanted to try your front end (which looks cool). I already have the weights downloaded and have tinkered with things like textual inversion, which generates additional embedding models. When I try to run UnstableFusion, I am asked for a huggingface token, which I imagine means you are going to download the model weights again. Is there a way to just point your code at the weights I already have? Or, better yet, is there a way to treat Automatic's implementation as a backend hosted on localhost? There are all kinds of innovations implemented there (increasing/decreasing attention, prompt switching partway through generation, etc.) which it would be great to take advantage of while still using your nice innovations on the front end.
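On the reusing-weights part: diffusers can load from a local directory instead of the Hub, so in principle the re-download could be skipped (a sketch; the path is illustrative, and a raw A1111 .ckpt would need converting to diffusers layout first):

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "/path/to/stable-diffusion-v1-4",  # local diffusers-format directory
    local_files_only=True,             # never touch the network
)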

Unexpected keyword argument when generating images

I am getting the following error when attempting to generate (most recent commit, running locally, Ubuntu 22, python 3.9)

Generating with strength 0.75, steps 30, guidance_scale 7.5, seed -1
Traceback (most recent call last):
  File "/home/adrian/UnstableFusion/unstablefusion.py", line 912, in handle_generate_button
    image = self.get_handler().generate(prompt,
  File "/home/adrian/UnstableFusion/diffusionserver.py", line 117, in generate
    im = self.text2img(
  File "/home/adrian/.local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
TypeError: __call__() got an unexpected keyword argument 'strength'

This looks a little like #3, but protobuf is already installed. Commenting out strength=strength on line 121 of diffusionserver.py removes the error and allows image generation, but obviously disables an important parameter.
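That matches the diffusers API: strength is an img2img parameter, and the plain text-to-image pipeline's __call__ does not accept it. So rather than commenting it out globally, a cleaner fix might be to pass it only on the img2img path (a sketch; the attribute names follow the repo's snippets quoted elsewhere on this page):

# Shared arguments for both pipelines.
kwargs = dict(
    prompt=prompt,
    num_inference_steps=steps,
    guidance_scale=guidance_scale,
    generator=self.get_generator(seed),
)
im = self.text2img(width=512, height=512, **kwargs).images[0]  # no strength here
# ...whereas the img2img path keeps it:
# im = self.img2img(image=init_image, strength=strength, **kwargs).images[0]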

No Generation / Inpainting

This is a very cool project, thank you for sharing!

Both locally and from the colab, I am given an error popup when trying to generate or inpaint. I am able to free-paint on the canvas with the cursor.

I'm running on Windows 10, ensured to have every dependency, and ran Conda Powershell as an admin.

An error I received when running the server:

DeprecationWarning: an integer is required (got type float). Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.
strength_slider.setValue(value)
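That warning points at a float being handed to QSlider.setValue, which takes an int; wrapping the value in int() should silence it (minimal reproduction and fix):

import sys
from PyQt5.QtWidgets import QApplication, QSlider

app = QApplication(sys.argv)
strength_slider = QSlider()
value = 0.75 * 100                    # a float here triggers the deprecation warning
strength_slider.setValue(int(value))  # explicit int() avoids the implicit __int__ conversion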

I also have an issue with the Tool interface being too tall for my screen, but that's likely because I'm using an unusual resolution -- I doubt it is the cause, but thought it worth mentioning.

Thanks!

Colab Notebook error (ImportError)

I usually run the app with Colab servers with great success. Now I'm getting an error in the notebook when running this step: from UnstableFusion.diffusionserver import run_app

ImportError                               Traceback (most recent call last)
[<ipython-input-8-38e5886b075e>](https://localhost:8080/#) in <module>
----> 1 from UnstableFusion.diffusionserver import run_app

[/content/UnstableFusion/diffusionserver.py](https://localhost:8080/#) in <module>
      4 from PIL import Image
      5 from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
----> 6 from diffusers import StableDiffusionInpaintPipelineLegacy
      7 
      8 from torch import autocast

ImportError: cannot import name 'StableDiffusionInpaintPipelineLegacy' from 'diffusers' (/usr/local/lib/python3.7/dist-packages/diffusers/__init__.py)

Crash when loading modifiers without previously saving them

If you click the "Load Modifiers" button before ever saving modifiers, the app crashes. I got the following error in the console:

Traceback (most recent call last):
  File "C:\Users\davic\Desktop\AI\UnstableFusion\unstablefusion.py", line 1129, in handle_load_modifiers
    mods = load_modifiers()
  File "C:\Users\davic\Desktop\AI\UnstableFusion\unstablefusion.py", line 51, in load_modifiers
    with open(get_modifiers_path(), 'r') as infile:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\davic\\Desktop\\AI\\UnstableFusion\\modifiers.txt'
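A guard along these lines would avoid the crash (a sketch; get_modifiers_path is the helper from unstablefusion.py, and the file-reading body is an assumption):

import os

def load_modifiers():
    path = get_modifiers_path()
    if not os.path.exists(path):
        return []  # nothing saved yet: return no modifiers instead of crashing
    with open(path, 'r') as infile:
        return [line.strip() for line in infile]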

Add zoom shortcut

This repository is fantastic. Is there a zoom in/out shortcut somewhere? I can't find it among the shortcuts. There is a key for increasing and decreasing the size, but that simply crops the image. When I open an image that is larger than my screen, there is no way of moving outside the visible area; there is no scrollbar either.

TypeError: Descriptors cannot not be created directly.

C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion>python unstablefusion.py
Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion\unstablefusion.py", line 7, in <module>
    from diffusionserver import StableDiffusionHandler
  File "C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion\diffusionserver.py", line 4, in <module>
    from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionImg2ImgPipeline
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\__init__.py", line 26, in <module>
    from .pipelines import DDIMPipeline, DDPMPipeline, KarrasVePipeline, LDMPipeline, PNDMPipeline, ScoreSdeVePipeline
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\__init__.py", line 11, in <module>
    from .latent_diffusion import LDMTextToImagePipeline
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\latent_diffusion\__init__.py", line 6, in <module>
    from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipelines\latent_diffusion\pipeline_latent_diffusion.py", line 12, in <module>
    from transformers.modeling_utils import PreTrainedModel
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\modeling_utils.py", line 75, in <module>
    from accelerate import __version__ as accelerate_version
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\__init__.py", line 7, in <module>
    from .accelerator import Accelerator
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\accelerator.py", line 33, in <module>
    from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\accelerate\tracking.py", line 34, in <module>
    import wandb
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\__init__.py", line 26, in <module>
    from wandb import sdk as wandb_sdk
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\__init__.py", line 9, in <module>
    from .wandb_init import _attach, init  # noqa: F401
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\wandb_init.py", line 30, in <module>
    from . import wandb_login, wandb_setup
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\wandb_login.py", line 25, in <module>
    from .wandb_settings import Settings, Source
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\wandb_settings.py", line 39, in <module>
    from wandb.sdk.wandb_setup import _EarlyLogger
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\wandb_setup.py", line 22, in <module>
    from . import wandb_manager, wandb_settings
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\wandb_manager.py", line 14, in <module>
    from wandb.sdk.lib.proto_util import settings_dict_from_pbmap
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\sdk\lib\proto_util.py", line 6, in <module>
    from wandb.proto import wandb_internal_pb2 as pb
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\proto\wandb_internal_pb2.py", line 15, in <module>
    from wandb.proto import wandb_base_pb2 as wandb_dot_proto_dot_wandb__base__pb2
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\wandb\proto\wandb_base_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "C:\Users\ZeroCool22\AppData\Local\Programs\Python\Python310\lib\site-packages\google\protobuf\descriptor.py", line 560, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion>

Inpaint changes parts of original image

Not sure if this is an issue that can or should be solved.
Inpaint seems to use the image mask to draw in the parts of the image that are transparent.
Despite that, it also changes already-painted (non-transparent) parts of the image.

How to reproduce:

  • fill entire image with one color
  • erase a hole that is smaller than 512x512
  • increase selection size to bigger than transparent hole
  • position selection in such a way that it covers entire hole and has all edges on solid color
  • use Inpaint

Expected result:

  • transparent hole is filled with whatever Diffusion has produced
  • parts that were already painted (solid color) are not changed

Observed result:

  • transparent hole is filled with whatever Diffusion has produced
  • solid color parts have very noticeable selection edge etched into it
  • solid color seem to be changed to slightly different hue

When used on regular images, repeated inpaints that cover the same area seem to change the contrast/saturation of previously painted parts.
How to reproduce:

  • resize selection to something small, for example 64x64
  • generate something at center of image
  • resize selection to larger size, for example 128x128
  • inpaint several times while covering entire original 64x64 square

Expected result:

  • transparent parts of image are painted
  • original 64x64 square is not changed

Observed result:

  • over several iterations of Inpaint original square becomes more and more oversharpened and over-saturated

Another easy way to get this result is to Generate anything and then Inpaint many times without changing selection. Doing Inpaint 10 times or so will turn almost any result into colorful noise.

As far as I was able to debug, both of these are manifestations of the same issue, maybe something to do with masking.
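If it really is a masking problem, one blunt client-side mitigation would be to composite the model output back over the selection so that only originally-transparent pixels can change (a sketch assuming RGBA PIL crops of the selection area; it would not fix seams baked into the generated pixels):

from PIL import Image

def merge_inpaint(original: Image.Image, inpainted: Image.Image) -> Image.Image:
    # Build a mask that is 255 exactly where the original was fully transparent.
    mask = original.getchannel("A").point(lambda a: 255 if a == 0 else 0)
    # Take inpainted pixels only inside the hole; keep original pixels elsewhere.
    return Image.composite(inpainted.convert("RGBA"), original.convert("RGBA"), mask)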

NameError: 'StableDiffusionHandler' is not defined

The full traceback is:

Traceback (most recent call last):
  File "/home/pokemon343638/UnstableFusion-main/unstablefusion.py", line 897, in handle_generate_button
    if type(self.get_handler()) == ServerStableDiffusionHandler:
  File "/home/pokemon343638/UnstableFusion-main/unstablefusion.py", line 460, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "/home/pokemon343638/UnstableFusion-main/unstablefusion.py", line 329, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "/home/pokemon343638/UnstableFusion-main/unstablefusion.py", line 312, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
NameError: name 'StableDiffusionHandler' is not defined

Executing the instructions in Google Colab returns an error

Fetching 15 files: 100%
15/15 [00:00<00:00, 545.25it/s]


ValueError Traceback (most recent call last)

in
----> 1 run_app()

2 frames

/usr/local/lib/python3.8/dist-packages/diffusers/pipeline_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
514 class_obj()
515
--> 516 raise ValueError(
517 f"The component {class_obj} of {pipeline_class} cannot be loaded as it does not seem to have"
518 f" any of the loading methods defined in {ALL_IMPORTABLE_CLASSES}."

ValueError: The component <class 'transformers.models.clip.image_processing_clip.CLIPImageProcessor'> of <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_config', 'from_config'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained']}.

'NoneType' object has no attribute 'width'

C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion>python unstablefusion.py
'NoneType' object has no attribute 'width'
'NoneType' object has no attribute 'width'
Traceback (most recent call last):
  File "C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion\unstablefusion.py", line 565, in mousePressEvent
    top_left = QPoint(e.pos().x() - self.selection_rectangle_size[0] / 2, e.pos().y() - self.selection_rectangle_size[1] / 2)
TypeError: arguments did not match any overloaded call:
  QPoint(): too many arguments
  QPoint(int, int): argument 1 has unexpected type 'float'
  QPoint(QPoint): argument 1 has unexpected type 'float'

C:\Users\ZeroCool22\Desktop\UnstableFusion\UnstableFusion>
2022-09-23.10-54-28.mp4

And just clicking on the canvas gives an error too and closes the app by itself.

2022-09-23.10-56-22.mp4

Given raise value error along with others

I am doing this with no experience, so please just help me get this working. I really want to do AI art, but this is getting annoying.

Traceback (most recent call last):
  File "C:\Users\Fuck you microsoft\Documents\UnstableFusion-main\UnstableFusion.py", line 897, in handle_generate_button
    if type(self.get_handler()) == ServerStableDiffusionHandler:
  File "C:\Users\Fuck you microsoft\Documents\UnstableFusion-main\UnstableFusion.py", line 460, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "C:\Users\Fuck you microsoft\Documents\UnstableFusion-main\UnstableFusion.py", line 329, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "C:\Users\Fuck you microsoft\Documents\UnstableFusion-main\UnstableFusion.py", line 312, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
  File "C:\Users\Fuck you microsoft\Documents\UnstableFusion-main\diffusionserver.py", line 36, in __init__
    self.text2img = StableDiffusionPipeline.from_pretrained(
  File "C:\Users\Fuck you microsoft\anaconda3\lib\site-packages\diffusers\pipeline_utils.py", line 516, in from_pretrained
    raise ValueError(
ValueError: The component <class 'transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor'> of <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_config', 'from_config'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained']}.

[Linux] CUDA error upon trying to generate

OS: Arch Linux rolling
GPU: GTX 1660 SUPER
Driver: nvidia-520.56.06
CUDA: cuda-tools installed

Whenever I go to generate, it crashes with a CUDA error, as shown below. I have installed all the listed dependencies, plus a fair few others it pointed out were missing. I'm using a local clone of v1.4 of the diffusion model renamed to v1.5, because the program only accepts a v1.5 folder even though v1.5 doesn't seem to be public yet (as far as I can tell), and the HTTPX request fails when I try the access key.

See terminal output below:

$ python unstablefusion.py 
Generating with strength 0.75, steps 30, guidance_scale 7.5, seed 889492
  0%|                                                                                                                                                                 | 0/31 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/meteo/UnstableFusion/unstablefusion.py", line 912, in handle_generate_button
    image = self.get_handler().generate(prompt,
  File "/home/meteo/UnstableFusion/diffusionserver.py", line 114, in generate
    im = self.text2img(
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 326, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 296, in forward
    sample, res_samples = downsample_block(
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/models/unet_blocks.py", line 563, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/models/attention.py", line 162, in forward
    hidden_states = block(hidden_states, context=context)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/models/attention.py", line 213, in forward
    hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/models/attention.py", line 344, in forward
    return self.net(hidden_states)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/diffusers/models/attention.py", line 362, in forward
    hidden_states, gate = self.proj(hidden_states).chunk(2, dim=-1)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/meteo/.local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasGemmEx( handle, opa, opb, m, n, k, &falpha, a, CUDA_R_16F, lda, b, CUDA_R_16F, ldb, &fbeta, c, CUDA_R_16F, ldc, CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP)`

Getting this error when I try to generate an image. I'm on Arch Linux.

Traceback (most recent call last):
File "/home/boi/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
response.raise_for_status()
File "/usr/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/fp16/model_index.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/boi/.local/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 228, in get_config_dict
config_file = hf_hub_download(
File "/home/boi/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1053, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/boi/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1359, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/boi/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 254, in hf_raise_for_status
raise HfHubHTTPError(str(HTTPError), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: <class 'requests.exceptions.HTTPError'> (Request ID: Nk1158C9LHrkTH1ybJVBC)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/boi/UnstableFusion/unstablefusion.py", line 897, in handle_generate_button
if type(self.get_handler()) == ServerStableDiffusionHandler:
File "/home/boi/UnstableFusion/unstablefusion.py", line 460, in get_handler
return self.stable_diffusion_manager.get_handler()
File "/home/boi/UnstableFusion/unstablefusion.py", line 329, in get_handler
return self.get_local_handler(self.get_huggingface_token())
File "/home/boi/UnstableFusion/unstablefusion.py", line 312, in get_local_handler
self.cached_local_handler = StableDiffusionHandler(token)
File "/home/boi/UnstableFusion/diffusionserver.py", line 33, in init
self.text2img = StableDiffusionPipeline.from_pretrained(
File "/home/boi/.local/lib/python3.10/site-packages/diffusers/pipeline_utils.py", line 431, in from_pretrained
config_dict = cls.get_config_dict(
File "/home/boi/.local/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 260, in get_config_dict
raise EnvironmentError(
OSError: There was a specific connection error when trying to load runwayml/stable-diffusion-v1-5:
<class 'requests.exceptions.HTTPError'> (Request ID: Nk1158C9LHrkTH1ybJVBC)

Reimagine stopped working

Hey there,

Unfortunately, the reimagine element stopped working:

File "D:\AI\UnstableFusion-main\unstablefusion.py", line 767, in handle_reimagine_button
    reimagined_image = self.get_handler().reimagine(prompt,
TypeError: ServerStableDiffusionHandler.reimagine() got an unexpected keyword argument 'strength'

(using the newest Colab link + package from GitHub)

Turn off Safety Check.

What do I need to change here?

def dummy_safety_checker(self):
        def check(images, *args, **kwargs):
            return images, [False] * len(images)

To this?

def dummy_safety_checker(self):
        def check(images, *args, **kwargs):
            return images, [True] * len(images)

Or is there something more I need to modify?

P.S.: I run it locally, not on Colab.
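For what it's worth, in that callback False means "not NSFW", so the [False] * len(images) version is already the one that disables the check; [True] would flag every image. The remaining step is making sure the dummy checker is actually attached to the pipeline (a sketch; safety_checker is the attribute diffusers' StableDiffusionPipeline uses):

def dummy_safety_checker():
    def check(images, *args, **kwargs):
        # (images, has_nsfw_flags): False per image means nothing gets blacked out.
        return images, [False] * len(images)
    return check

pipe.safety_checker = dummy_safety_checker()  # pipe: a loaded StableDiffusionPipeline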

No option to resize image view.

Hi,
I LOVE where this project is going, and I want to do my best to give useful feedback that moves the project forward.
I did notice that when opening really large images, there is no option to zoom out; the view is simply locked to the window size. Could you maybe add that?

Thanks!

Pipelines issue

Not too sure why this is happening. Everything installed accordingly, but on "Generate" the app fetches 15 files, the GPU spins up, and then I get the log below.
Both stable-diffusion-v1-4 and -v1-5 have been cloned from huggingface.co, and my user access token is pasted into the application.
Do I need to edit something to point towards the .ckpt model of Stable Diffusion 1.4?

Fetching 15 files: 100%|█████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 7505.91it/s]
The config attributes {'clip_sample': False} were passed to PNDMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Traceback (most recent call last):
File "C:\AI\UnstableFusion\unstablefusion.py", line 897, in handle_generate_button
if type(self.get_handler()) == ServerStableDiffusionHandler:
File "C:\AI\UnstableFusion\unstablefusion.py", line 460, in get_handler
return self.stable_diffusion_manager.get_handler()
File "C:\AI\UnstableFusion\unstablefusion.py", line 329, in get_handler
return self.get_local_handler(self.get_huggingface_token())
File "C:\AI\UnstableFusion\unstablefusion.py", line 312, in get_local_handler
self.cached_local_handler = StableDiffusionHandler(token)
File "C:\AI\UnstableFusion\diffusionserver.py", line 36, in init
self.text2img = StableDiffusionPipeline.from_pretrained(
File "C:\Users\Jeff\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipeline_utils.py", line 516, in from_pretrained
raise ValueError(
ValueError: The component <class 'transformers.models.clip.image_processing_clip.CLIPImageProcessor'> of <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_config', 'from_config'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained']}.

error with the generation of an image

Hello, I have this error and can't find out how to fix it:

Traceback (most recent call last):
  File "/home/roza/Documents/UnstableFusion/unstablefusion.py", line 897, in handle_generate_button
    if type(self.get_handler()) == ServerStableDiffusionHandler:
  File "/home/roza/Documents/UnstableFusion/unstablefusion.py", line 460, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "/home/roza/Documents/UnstableFusion/unstablefusion.py", line 329, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "/home/roza/Documents/UnstableFusion/unstablefusion.py", line 312, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
  File "/home/roza/Documents/UnstableFusion/diffusionserver.py", line 36, in __init__
    self.text2img = StableDiffusionPipeline.from_pretrained(
  File "/home/roza/Documents/UnstableFusion/Unstablefusion/UF/lib/python3.10/site-packages/diffusers/pipeline_utils.py", line 516, in from_pretrained
    raise ValueError(
ValueError: The component <class 'transformers.models.clip.feature_extraction_clip.CLIPFeatureExtractor'> of <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_config', 'from_config'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained']}.

Thanks for your help.

AssertionError: Torch not compiled with CUDA enabled

Hello,

Installed all dependencies using pip install -r requirements.txt, but I receive this error: AssertionError: Torch not compiled with CUDA enabled

I assume line 8, torch>=1.12.1, in requirements.txt is incorrect and should be something like torch>=1.12.1=py38_cu113. I haven't tested this yet, nor am I confident those are the correct versions for a CUDA-compiled build of torch. Just posting in case you have a quicker answer!

unexpected keyword argument 'serialized_options'

Traceback (most recent call last):
  File "unstablefusion.py", line 7, in <module>
    from diffusionserver import StableDiffusionHandler
  File "E:\AI\SD\SDUI\UnstableFusion-main\diffusionserver.py", line 4, in <module>
    from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionImg2ImgPipeline
  File "e:\ai\koi-main\diffusers\src\diffusers\__init__.py", line 18, in <module>
    from .pipelines import DDIMPipeline, DDPMPipeline, KarrasVePipeline, LDMPipeline, PNDMPipeline, ScoreSdeVePipeline
  File "e:\ai\koi-main\diffusers\src\diffusers\pipelines\__init__.py", line 11, in <module>
    from .latent_diffusion import LDMTextToImagePipeline
  File "e:\ai\koi-main\diffusers\src\diffusers\pipelines\latent_diffusion\__init__.py", line 6, in <module>
    from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline
  File "e:\ai\koi-main\diffusers\src\diffusers\pipelines\latent_diffusion\pipeline_latent_diffusion.py", line 12, in <module>
    from transformers.modeling_utils import PreTrainedModel
  File "E:\Anaconda3\envs\ldm\lib\site-packages\transformers\modeling_utils.py", line 79, in <module>
    from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
  File "E:\Anaconda3\envs\ldm\lib\site-packages\accelerate\__init__.py", line 7, in <module>
    from .accelerator import Accelerator
  File "E:\Anaconda3\envs\ldm\lib\site-packages\accelerate\accelerator.py", line 33, in <module>
    from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
  File "E:\Anaconda3\envs\ldm\lib\site-packages\accelerate\tracking.py", line 29, in <module>
    from torch.utils import tensorboard
  File "E:\Anaconda3\envs\ldm\lib\site-packages\torch\utils\tensorboard\__init__.py", line 10, in <module>
    from .writer import FileWriter, SummaryWriter  # noqa: F401
  File "E:\Anaconda3\envs\ldm\lib\site-packages\torch\utils\tensorboard\writer.py", line 9, in <module>
    from tensorboard.compat.proto.event_pb2 import SessionLog
  File "E:\Anaconda3\envs\ldm\lib\site-packages\tensorboard\compat\proto\event_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
  File "E:\Anaconda3\envs\ldm\lib\site-packages\tensorboard\compat\proto\summary_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
  File "E:\Anaconda3\envs\ldm\lib\site-packages\tensorboard\compat\proto\tensor_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
  File "E:\Anaconda3\envs\ldm\lib\site-packages\tensorboard\compat\proto\resource_handle_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
  File "E:\Anaconda3\envs\ldm\lib\site-packages\tensorboard\compat\proto\tensor_shape_pb2.py", line 18, in <module>
    DESCRIPTOR = _descriptor.FileDescriptor(
TypeError: __init__() got an unexpected keyword argument 'serialized_options'

Any plans to tag a release? asking for Flathub

Hi, I'm the packager of Sioyek for Flathub and noticed this other super cool project of yours. It would be nice to have it on Flathub too, but Flathub has a policy of only building from a tag (not from a commit or branch tip). So, two questions:

Are you OK with publishing UnstableFusion on www.flathub.org?
Are you planning to tag a release for UnstableFusion?

Thanks :-) ,

512x512 limit?

I think I saw in several repos that there is some initial restriction on size, and in some repos it is removed. Is there such a limit, and if so, why is it there? Is it possible to easily remove it?

Overall, I want to be able to increase the width and height.
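As far as I know, 512x512 is not a hard limit in the code but the resolution the SD v1 models were trained at: diffusers accepts other sizes as long as width and height are divisible by 8 (multiples of 64 are the safe choice), though quality degrades and VRAM use grows as you move away from 512. A sketch:

# Passing a non-square size through diffusers. The model was trained at 512x512,
# so large deviations tend to duplicate subjects and use far more VRAM.
image = pipe(prompt, width=768, height=512).images[0]  # pipe: a loaded StableDiffusionPipeline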

KeyError: 'image_size'

I'm receiving an error when I try to generate, reimagine, or inpaint. This is with the web UI up and pointed at the server address.
When I click any of those options, I receive this error in the console:

size = resp_data['image_size']
KeyError: 'image_size'
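On the client side, a defensive lookup would at least turn the bare KeyError into a readable message when the server replies with an error payload instead of an image (a sketch; resp_data is the decoded JSON response):

size = resp_data.get('image_size')
if size is None:
    raise RuntimeError(f"server response missing 'image_size'; got keys {list(resp_data)}")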
