
easydiffusion / easydiffusion

Easiest 1-click way to create beautiful artwork on your PC using AI, with no tech knowledge. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image.

Home Page: https://easydiffusion.github.io/

License: Other

HTML 4.59% Python 13.81% Shell 0.93% Batchfile 1.08% PowerShell 0.01% CSS 7.07% JavaScript 71.56% NSIS 0.95%
art diffusion generative-art gui stable

easydiffusion's Introduction

Easy Diffusion 3.0

The easiest way to install and use Stable Diffusion on your computer.

Does not require technical knowledge, does not require pre-installed software. 1-click install, powerful features, friendly community.

🔥🎉 New! Support for SDXL, ControlNet, multiple LoRA files, embeddings (and a lot more) has been added!

Installation guide | Troubleshooting guide | User guide | Discord Server (for support queries, and development discussions)



Installation

Click the download button for your operating system:

Hardware requirements:

  • Windows: NVIDIA graphics card¹ (minimum 2 GB of VRAM), or run on your CPU.
  • Linux: NVIDIA¹ or AMD² graphics card (minimum 2 GB of VRAM), or run on your CPU.
  • Mac: M1 or M2, or run on your CPU.
  • Minimum 8 GB of system RAM.
  • At least 25 GB of space on the hard disk.

¹) CUDA Compute capability level of 3.7 or higher required.

²) ROCm 5.2 support required.

The installer will take care of whatever is needed. If you face any problems, you can join the friendly Discord community and ask for assistance.

On Windows:

  1. Run the downloaded Easy-Diffusion-Windows.exe file.
  2. Run Easy Diffusion once the installation finishes. You can also start from your Start Menu, or from your desktop (if you created a shortcut).

If Windows SmartScreen prevents you from running the program, click More info and then Run anyway.

Tip: On Windows 10, please install at the top level in your drive, e.g. C:\EasyDiffusion or D:\EasyDiffusion. This will avoid a common problem with Windows 10 (file path length limits).

On Linux/Mac:

  1. Unzip/extract the folder easy-diffusion which should be in your downloads folder, unless you changed your default downloads destination.
  2. Open a terminal window, and navigate to the easy-diffusion directory.
  3. Run ./start.sh (or bash start.sh) in a terminal.

To remove/uninstall:

Just delete the EasyDiffusion folder to uninstall all the downloaded packages.


Easy for new users, powerful features for advanced users

Features:

User experience

  • Hassle-free installation: Does not require technical knowledge, does not require pre-installed software. Just download and run!
  • Clutter-free UI: A friendly and simple UI, while providing a lot of powerful features.
  • Task Queue: Queue up all your ideas, without waiting for the current task to finish.
  • Intelligent Model Detection: Automatically figures out the YAML config file to use for the chosen model (via a models database).
  • Live Preview: See the image as the AI is drawing it.
  • Image Modifiers: A library of modifier tags like "Realistic", "Pencil Sketch", "ArtStation" etc. Experiment with various styles quickly.
  • Multiple Prompts File: Queue multiple prompts by entering one prompt per line, or by running a text file.
  • Save generated images to disk: Save your images to your PC!
  • UI Themes: Customize the program to your liking.
  • Searchable models dropdown: Organize your models into sub-folders, and search through them in the UI.
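The Task Queue behavior described above can be sketched with a single worker thread draining a standard queue: jobs are accepted immediately, without waiting for the current render to finish. This is a minimal illustration of the pattern, not the project's actual code; `render` is a hypothetical stand-in for the real image-generation call.

```python
import queue
import threading

def run_queue(prompts, render):
    """Queue up prompts and process them in order on a worker thread."""
    q = queue.Queue()
    results = []

    def worker():
        while True:
            prompt = q.get()
            if prompt is None:  # sentinel: no more work
                break
            results.append(render(prompt))
            q.task_done()

    t = threading.Thread(target=worker)
    t.start()
    for p in prompts:       # enqueueing returns immediately
        q.put(p)
    q.put(None)             # signal shutdown
    t.join()
    return results
```

In the real UI the queue outlives any single submission, so new tasks can be added while earlier ones are still rendering.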

Powerful image generation

  • Supports: "Text to Image", "Image to Image" and "InPainting"
  • ControlNet: For advanced control over the image, e.g. by setting the pose or drawing the outline for the AI to fill in.
  • 16 Samplers: PLMS, DDIM, DEIS, Heun, Euler, Euler Ancestral, DPM2, DPM2 Ancestral, LMS, DPM Solver, DPM++ 2s Ancestral, DPM++ 2m, DPM++ 2m SDE, DPM++ SDE, DDPM, UniPC.
  • Stable Diffusion XL and 2.1: Generate higher-quality images using the latest Stable Diffusion XL models.
  • Textual Inversion Embeddings: For guiding the AI strongly towards a particular concept.
  • Simple Drawing Tool: Draw basic images to guide the AI, without needing an external drawing program.
  • Face Correction (GFPGAN)
  • Upscaling (RealESRGAN)
  • Loopback: Use the output image as the input image for the next image task.
  • Negative Prompt: Specify aspects of the image to remove.
  • Attention/Emphasis: + in the prompt increases the model's attention to enclosed words, and - decreases it. E.g. apple++ falling from a tree.
  • Weighted Prompts: Use weights for specific words in your prompt to change their importance, e.g. (red)2.4 (dragon)1.2.
  • Prompt Matrix: Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut riding a horse | illustration | cinematic lighting.
  • Prompt Set: Quickly create multiple variations of your prompt, e.g. a photograph of an astronaut on the {moon,earth}
  • 1-click Upscale/Face Correction: Upscale or correct an image after it has been generated.
  • Make Similar Images: Click to generate multiple variations of a generated image.
  • NSFW Setting: A setting in the UI to control NSFW content.
  • JPEG/PNG/WEBP output: Multiple file formats.
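As an illustration of the emphasis and weight syntaxes above, the tokens could be mapped to per-word weights roughly like this. This is a hypothetical sketch of how such syntax might be parsed, not the project's implementation, and the `step` size per `+`/`-` mark is an assumption.

```python
import re

# "(word)2.4" sets an explicit weight; trailing "+"/"-" marks nudge the
# default weight of 1.0 up or down by `step` per mark (assumed value).
TOKEN_RE = re.compile(r"\(([^)]+)\)([0-9.]+)|(\S+)")

def parse_prompt_weights(prompt, step=0.1):
    tokens = []
    for m in TOKEN_RE.finditer(prompt):
        if m.group(1):  # explicit "(word)weight" form
            tokens.append((m.group(1), float(m.group(2))))
        else:           # plain word, possibly with trailing +/- emphasis
            word = m.group(3)
            stripped = word.rstrip("+-")
            marks = word[len(stripped):]
            weight = 1.0 + step * (marks.count("+") - marks.count("-"))
            tokens.append((stripped, round(weight, 2)))
    return tokens
```

For example, `apple++ falling` would yield `apple` with a raised weight and `falling` at the default 1.0.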

Advanced features

  • Custom Models: Use your own .ckpt or .safetensors file, by placing it inside the models/stable-diffusion folder!
  • Stable Diffusion XL and 2.1 support
  • Merge Models
  • Use custom VAE models
  • Textual Inversion Embeddings
  • ControlNet
  • Use custom GFPGAN models
  • UI Plugins: Choose from a growing list of community-generated UI plugins, or write your own plugin to add features to the project!

Performance and security

  • Fast: Creates a 512x512 image with euler_a in 5 seconds, on an NVIDIA 3060 12GB.
  • Low Memory Usage: Create 512x512 images with less than 2 GB of GPU RAM, and 768x768 images with less than 3 GB of GPU RAM!
  • Use CPU setting: If you don't have a compatible graphics card, but still want to run it on your CPU.
  • Multi-GPU support: Automatically spreads your tasks across multiple GPUs (if available), for faster performance!
  • Auto scan for malicious models: Uses picklescan to prevent malicious models.
  • Safetensors support: Supports loading models in the safetensors format, for improved safety.
  • Auto-updater: Gets you the latest improvements and bug-fixes to a rapidly evolving project.
  • Developer Console: A developer-mode for those who want to modify their Stable Diffusion code, modify packages, and edit the conda environment.

(and a lot more)


Easy for new users, powerful features for advanced users:


Task Queue

Screenshot of task queue


How to use?

Please refer to our guide to understand how to use the features in this UI.

Bugs reports and code contributions welcome

If there are any problems or suggestions, please feel free to ask on the discord server or file an issue.

If you have any code contributions in mind, please feel free to say Hi to us on the discord server. We use the Discord server for development-related discussions, and for helping users.

Credits

Disclaimer

The authors of this project are not responsible for any content generated using this interface.

The license of this software forbids you from sharing any content that:

  • Violates any laws.
  • Causes harm to a person or persons.
  • Disseminates personal information intended to cause harm.
  • Spreads misinformation.
  • Targets vulnerable groups.

For the full list of restrictions please read the License. You agree to these terms by using this software.


easydiffusion's Issues

Support string interpolation and looping to run many different prompts

Imagine having the ability to write

A {village | city | town | space station} in the background of a lush forest

And then the following prompts would be used:

A village in the background of a lush forest
A city in the background of a lush forest
A town in the background of a lush forest
A space station in the background of a lush forest

This would be incredibly powerful, especially with multiple spots, like:

A {village | city | town | space station} in the background of a lush forest and a {monster | kitten} attacking
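The requested expansion can be prototyped in a few lines with `itertools.product` — a sketch of the idea, not the project's implementation:

```python
import itertools
import re

def expand_braces(template):
    """Expand "{a | b | c}" groups into every combination of options."""
    # Split on {...} groups; odd-indexed pieces are the captured option lists.
    parts = re.split(r"\{([^}]*)\}", template)
    pools = [[opt.strip() for opt in part.split("|")] if i % 2 else [part]
             for i, part in enumerate(parts)]
    # The Cartesian product over all groups yields every prompt variation.
    return ["".join(combo) for combo in itertools.product(*pools)]
```

With two groups of four and two options respectively, the second example above would expand into eight prompts.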

Compose file '/docker-compose' invalid because 'devices' was unexpected

I'm fumbling through my WSL2/docker setup.

I think I've got everything right with the GPU, as it detects, but when I get to docker-compose up, it is throwing this:

ERROR: The Compose file './docker-compose.yml' is invalid because:
services.stability-ai.deploy.resources.reservations value Additional properties are not allowed ('devices' was unexpected)

Thanks for your work and help!

ModuleNotFoundError: No module named 'cv2'

python is installed and updated and so is opencv

The following is the output:

"Ready to rock!"

started in  C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
INFO:     Started server process [16544]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
Traceback (most recent call last):
  File "C:\Users\adama\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 56, in ping
    from sd_internal import runtime
  File "C:\Users\atomica\Documents\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
    import cv2
ModuleNotFoundError: No module named 'cv2'

INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:52859 - "GET /ping HTTP/1.1" 200 OK
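Errors like this mean the `cv2` package is missing from the environment the server actually runs in (the installer normally sets this up; a system-wide opencv install does not help a separate conda environment). A small guard — a hypothetical helper, not part of the project — can turn the bare `ModuleNotFoundError` into an actionable message:

```python
import importlib

def require_module(name, pip_name=None):
    """Import a module, or fail with a hint about how to install it."""
    try:
        return importlib.import_module(name)
    except ImportError as err:
        hint = pip_name or name
        raise ImportError(
            f"Missing module '{name}'. Try: python -m pip install {hint}"
        ) from err

# e.g. cv2 ships in the 'opencv-python' distribution:
# cv2 = require_module("cv2", "opencv-python")
```

The key point is that the install hint can differ from the import name, as it does for `cv2`/`opencv-python`.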

Website doesn't show

I've had some trouble executing server.sh in WSL; it said something about no permission even with sudo, but with some chmod magic I eventually got it working. After executing it, docker showed that it is using port 5000 instead of 8000 as shown in the tutorial.

When opening localhost:5000 in a browser, all the site contains is

{"docs_url":"/docs","openapi_url":"/openapi.json"}

start_server() in ./server not working

I am not familiar with shell scripts, but I think line 10 in server did not run. I can run docker-compose up stability-ai stable-diffusion-ui in the console manually.

Exception in ASGI application

First time run of this... I have a laptop Nvidia 3060 GPU, running Ubuntu in WSL on Windows 10. I tried my first prompt from the web page but got this error below. I didn't install the Nvidia driver within Ubuntu because a) it didn't recognise my GPU and b) I had all sorts of other problems. Do I need to install the Nvidia driver within WSL, or does it use the host driver?

sd     | ERROR:    Exception in ASGI application
sd     | Traceback (most recent call last):
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
sd     |     result = await app(self.scope, self.receive, self.send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
sd     |     return await self.app(scope, receive, send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
sd     |     await super().__call__(scope, receive, send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
sd     |     await self.middleware_stack(scope, receive, send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
sd     |     raise exc
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
sd     |     await self.app(scope, receive, _send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
sd     |     raise exc
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
sd     |     await self.app(scope, receive, sender)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
sd     |     raise e
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
sd     |     await self.app(scope, receive, send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
sd     |     await route.handle(scope, receive, send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
sd     |     await self.app(scope, receive, send)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
sd     |     response = await func(request)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
sd-ui  | INFO:     172.18.0.1:59682 - "POST /image HTTP/1.1" 500 Internal Server Error
sd     |     raw_response = await run_endpoint_function(
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
sd     |     return await run_in_threadpool(dependant.call, **values)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
sd     |     return await anyio.to_thread.run_sync(func, *args)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
sd     |     return await get_asynclib().run_sync_in_worker_thread(
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
sd     |     return await future
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
sd     |     result = context.run(func, *args)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
sd     |     output = predictor.predict(**request.input.dict())
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
sd     |     return func(*args, **kwargs)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
sd     |     return func(*args, **kwargs)
sd     |   File "/src/predict.py", line 88, in predict
sd     |     output = self.pipe(
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
sd     |     return func(*args, **kwargs)
sd     |   File "/src/image_to_image.py", line 156, in __call__
sd     |     noise_pred = self.unet(
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
sd     |     return forward_call(*input, **kwargs)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 168, in forward
sd     |     sample = upsample_block(
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
sd     |     return forward_call(*input, **kwargs)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/unet_blocks.py", line 1037, in forward
sd     |     hidden_states = attn(hidden_states, context=encoder_hidden_states)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
sd     |     return forward_call(*input, **kwargs)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 168, in forward
sd     |     x = block(x, context=context)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
sd     |     return forward_call(*input, **kwargs)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 196, in forward
sd     |     x = self.attn1(self.norm1(x)) + x
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
sd     |     return forward_call(*input, **kwargs)
sd     |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 254, in forward
sd     |     attn = sim.softmax(dim=-1)
sd     | RuntimeError: CUDA error: unknown error
sd     | CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
sd     | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

ModuleNotFoundError: No module named 'torch'

I installed and ran v2 on Windows using Start Stable Diffusion UI.cmd, and encountered an error running the server:

started in  C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion
INFO:     Started server process [12336]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
INFO:     127.0.0.1:51205 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:51205 - "GET /modifiers.json HTTP/1.1" 200 OK
INFO:     127.0.0.1:51205 - "GET /output_dir HTTP/1.1" 200 OK
Traceback (most recent call last):
  File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 63, in ping
    from sd_internal import runtime
  File "C:\Users\myuser\stable-diffusion-ui\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 2, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK
INFO:     127.0.0.1:51205 - "GET /ping HTTP/1.1" 200 OK

Is anyone else getting this?

[Suggestion] Queue stacks

Instead of replacing the result image with the new one, shift the old image to the left, put the new image on the right, and add a display-settings button below or on each image.
If an image was used as a source, put that image and the resulting images on a new row (or column).

The first row (or column) is for images without a source image; the following rows (or columns) use the source image as a header and the resulting images as children.

Possibly also allow pushing future image settings to the stack, to be fetched once the current one is ready. The workaround right now is to have multiple browser tabs.

ERR_EMPTY_RESPONSE on port 9000

I can't reach the UI after the update. Port 8000 works fine and displays the redirect notice, but port 9000 returns nothing at all.
I'm running Windows, so I was not (easily) able to execute the server file. But I opened it and ran the code below (start_server()) as a troubleshooting step, without luck.

docker-compose up -d stable-diffusion-old-port-redirect
docker-compose up stability-ai stable-diffusion-ui

All files in the root of stable-diffusion-ui\stable-diffusion deleted

Steps to reproduce.

  1. cd C:\stable-diffusion-ui\
  2. run: scripts\Start Stable Diffusion UI.cmd
  3. wait till it is running.
  4. press ctrl-c in the console
  5. click N
  6. prompt "press any key to continue" (Files still there at this stage)
  7. Press any key.
  8. All files in stable-diffusion-ui\stable-diffusion gone.

Choosing Y in step 5 will not delete the files.

Getting some errors starting up the container

Creating network "stable-diffusion-ui_default" with the default driver
Creating sd ... error
Creating sd-old-port-redirect ...

ERROR: for sd Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error Creating sd-old-port-redirect ... done

ERROR: for stability-ai Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
ERROR: Encountered errors while bringing up the project.

Installing Stable Diffusion on Linux Mint Error

I tried installing Stable Diffusion V2 on Linux Mint again and again, and here's the error I got. I'm a BIG noob, so explain like I'm five!


Stable Diffusion UI

Stable Diffusion UI's git repository was already installed. Updating..
HEAD is now at 051ef56 Merge pull request #79 from iJacqu3s/patch-1
Already up to date.
Stable Diffusion's git repository was already installed. Updating..
HEAD is now at c56b493 Merge pull request #117 from neonsecret/basujindal_attn
Already up to date.

Downloading packages necessary for Stable Diffusion..

***** This will take some time (depending on the speed of the Internet connection) and may appear to be stuck, but please be patient ***** ..

WARNING: A space was detected in your requested environment path
'/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env'
Spaces in paths can sometimes be problematic.
Collecting package metadata (repodata.json): done
Solving environment: done
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
ERROR conda.core.link:_execute(730): An error occurred while installing package 'defaults::cudatoolkit-11.3.1-h2bc3f7f_2'.
Rolling back transaction: done

LinkError: post-link script failed for package defaults::cudatoolkit-11.3.1-h2bc3f7f_2
location of failed script: /home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh
==> script messages <==

==> script output <==
stdout:
stderr: Traceback (most recent call last):
File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
Traceback (most recent call last):
File "/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/installer/bin/conda", line 12, in
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
/home/sharky/Desktop/Stable Diffusion/stable-diffusion-ui/stable-diffusion/env/bin/.cudatoolkit-post-link.sh: line 3: $PREFIX/.messages.txt: ambiguous redirect

return code: 1

()

Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues


Hope this helps someone!

"Potential NSFW content" on the default prompt.

Configuration:

Windows 11
CPU: AMD Ryzen 5 5600X
Memory: 64GB
WSL2 + ubuntu 22.04.1
GPU: GeForce GTX 1660 SUPER
GPU Memory: 6GB

docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "Turing" with compute capability 7.5

> Compute 7.5 CUDA device: [NVIDIA GeForce GTX 1660 SUPER]
22528 bodies, total time for 10 iterations: 32.767 ms
= 154.884 billion interactions per second
= 3097.676 single-precision GFLOP/s at 20 flops per interaction

Error message:

sd                                    | Using seed: 922
50it [00:32,  1.56it/s]               |
sd                                    | Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed.
sd                                    | INFO:     172.18.0.4:36142 - "POST /predictions HTTP/1.1" 500 Internal Server Error
sd                                    | ERROR:    Exception in ASGI application
sd                                    | Traceback (most recent call last):
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
sd                                    |     result = await app(self.scope, self.receive, self.send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
sd                                    |     return await self.app(scope, receive, send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__
sd                                    |     await super().__call__(scope, receive, send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__
sd                                    |     await self.middleware_stack(scope, receive, send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__
sd                                    |     raise exc
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__
sd                                    |     await self.app(scope, receive, _send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__
sd                                    |     raise exc
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__
sd                                    |     await self.app(scope, receive, sender)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
sd                                    |     raise e
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
sd                                    |     await self.app(scope, receive, send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__
sd                                    |     await route.handle(scope, receive, send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle
sd                                    |     await self.app(scope, receive, send)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/routing.py", line 65, in app
sd                                    |     response = await func(request)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 227, in app
sd                                    |     raw_response = await run_endpoint_function(
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/fastapi/routing.py", line 162, in run_endpoint_function
sd                                    |     return await run_in_threadpool(dependant.call, **values)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
sd                                    |     return await anyio.to_thread.run_sync(func, *args)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
sd                                    |     return await get_asynclib().run_sync_in_worker_thread(
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
sd                                    |     return await future
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
sd                                    |     result = context.run(func, *args)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/http.py", line 79, in predict
sd                                    |     output = predictor.predict(**request.input.dict())
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
sd                                    |     return func(*args, **kwargs)
sd                                    |   File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
sd                                    |     return func(*args, **kwargs)
sd                                    |   File "/src/predict.py", line 113, in predict
sd                                    |     raise Exception("NSFW content detected, please try a different prompt")
sd                                    | Exception: NSFW content detected, please try a different prompt
sd-ui                                 | INFO:     172.18.0.1:34184 - "POST /image HTTP/1.1" 500 Internal Server Error

I get the error with the default prompt: "a photograph of an astronaut riding a horse"
I tried with 256x256 image size, and I get the same error.

Python errors

[1] 82
root@DESKTOP-J6VL1VP:/home/AA2# Traceback (most recent call last):
  File "urllib3/connectionpool.py", line 677, in urlopen
  File "urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1277, in request
  File "http/client.py", line 1323, in _send_request
  File "http/client.py", line 1272, in endheaders
  File "http/client.py", line 1032, in _send_output
  File "http/client.py", line 972, in send
  File "docker/transport/unixconn.py", line 43, in connect
FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests/adapters.py", line 449, in send
  File "urllib3/connectionpool.py", line 727, in urlopen
  File "urllib3/util/retry.py", line 410, in increment
  File "urllib3/packages/six.py", line 734, in reraise
  File "urllib3/connectionpool.py", line 677, in urlopen
  File "urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1277, in request
  File "http/client.py", line 1323, in _send_request
  File "http/client.py", line 1272, in endheaders
  File "http/client.py", line 1032, in _send_output
  File "http/client.py", line 972, in send
  File "docker/transport/unixconn.py", line 43, in connect
urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "docker/api/client.py", line 214, in _retrieve_server_version
  File "docker/api/daemon.py", line 181, in version
  File "docker/utils/decorators.py", line 46, in inner
  File "docker/api/client.py", line 237, in _get
  File "requests/sessions.py", line 543, in get
  File "requests/sessions.py", line 530, in request
  File "requests/sessions.py", line 643, in send
  File "requests/adapters.py", line 498, in send
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "docker-compose", line 3, in <module>
  File "compose/cli/main.py", line 81, in main
  File "compose/cli/main.py", line 200, in perform_command
  File "compose/cli/command.py", line 70, in project_from_options
  File "compose/cli/command.py", line 153, in get_project
  File "compose/cli/docker_client.py", line 43, in get_client
  File "compose/cli/docker_client.py", line 170, in docker_client
  File "docker/api/client.py", line 197, in __init__
  File "docker/api/client.py", line 222, in _retrieve_server_version
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
[83] Failed to execute script docker-compose

Windows 10, latest update, using WSL Ubuntu, I have installed everything needed.
(screenshot attached)

The same thing happens even when I'm not in Sudo

using gradio for the UI

Hi, thanks for making this. There are some nice examples of using Gradio for the UI; a web demo is hosted here: https://huggingface.co/CompVis/stable-diffusion

a Colab notebook with img2img: https://colab.research.google.com/drive/1NfgqublyT_MWtR5CsmrgmdnkWiijF3P3?usp=sharing

and a web UI for Stable Diffusion that runs locally (it includes GFPGAN/Real-ESRGAN and a lot of other features): https://github.com/hlky/stable-diffusion-webui

It would be great if this repo could also support a Gradio UI, for an easy-to-use experience.
Gradio on GitHub: https://github.com/gradio-app/gradio

doesn't work at all on win 10

hello

The last release doesn't work at all. If I double-click on Start Stable Diffusion UI.cmd, the terminal opens for a fraction of a second and then nothing happens. If I run installer\scripts\activate.bat from cmd.exe without administrator privileges, I get "\Common was unexpected" followed by a list of folders on my C:\ drive. The same happens if I run cmd.exe as an administrator, except that the message before the folder list is "\Nmap" was unexpected.

The stable-diffusion-ui folder is in C:\ and/or in D:\.

Error downloading Stable Diffusion UI. Please try re-running this installer.

My machine is a MacBook Pro with an Intel chip. After downloading, moving to the directory, and running:

./start.sh

I see this:

/Users/myName/stable-diffusion-ui/installer/bin/python: /Users/myName/stable-diffusion-ui/installer/bin/python: cannot execute binary file


Stable Diffusion UI



Downloading Stable Diffusion UI..

scripts/on_env_start.sh: line 15: /Users/myName/stable-diffusion-ui/installer/bin/git: cannot execute binary file


Error downloading Stable Diffusion UI. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues

Anyone having the same issue?

Permission to push branch to make PR

Hi! I've made a branch where I've enabled the user to specify the number of batches to create for an input prompt, and I've also added an option to disable sound. I'd like to make a PR for this but don't have permission to push my local branch. :) Are you interested in this?

Progress bar and Stop button

Ideas from the Discord group https://discord.com/invite/u9yhsFmEkB

johnpccd —
Also, a visible progress bar in the UI would be nice, or at least an option to see the console output inside the UI instead of having to switch between the browser and the console all the time.

johnpccd —
Maybe a stop option as well?

stability-ai: could not select device driver "" with capabilities [[gpu]]

root@wolf-run:/home/wolf/Downloads/stable-diffusion-ui# sudo docker-compose up &
[1] 6146
Starting sd ... error

ERROR: for sd Cannot start service stability-ai: could not select device driver "" with capabilities: [[gpu]]

ERROR: for stability-ai Cannot start service stability-ai: could not select device driver "" with capabilities: [[gpu]]
ERROR: Encountered errors while bringing up the project.

Apologies if this is simply the GPU issue, however I did try to follow that only for this to persist. Thanks!

'OSError: [Errno 22] Invalid argument' Avoid reserved characters in Windows file names

When using the 'Automatically save to' option with Windows-reserved file name characters in the prompt (a double quote, in my case: "), it doesn't save the .png and .txt files and throws this error:

OSError: [Errno 22] Invalid argument: 'C:\\R\\cmdr2\\1f6d5759\\"Javier_Bardem"_Gta_vice_city_gta_5_cover_art_bord_25617819.png'

The following characters are reserved:

< (less than)
> (greater than)
: (colon)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)

https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file#naming-conventions
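A hedged sketch of a fix: replace the reserved characters before building the save path. `sanitize_filename` is a hypothetical helper, not the project's actual code; the set of characters matches the Microsoft naming conventions linked above:

```python
import re

def sanitize_filename(name: str, replacement: str = "_") -> str:
    """Replace Windows-reserved characters (< > : " / \ | ? *) and
    control characters so the name is safe to save on any platform."""
    return re.sub(r'[<>:"/\\|?*\x00-\x1f]', replacement, name)

# e.g. sanitize_filename('"Javier_Bardem"_Gta_vice_city')
# -> '_Javier_Bardem__Gta_vice_city'
```

Applying this to the prompt-derived part of the path before calling `open()` would avoid the `OSError` above.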

Error when using docker-compose up &

I keep getting this error and I can't fix it.
[it's an iMac 27" intel]

///////////////////////

(base) imac27 ~ % cd /Users/imac27/Desktop/stable-diffusion-ui-main
(base) imac27 stable-diffusion-ui-main % docker-compose up &
[1] 22725
(base) imac27 stable-diffusion-ui-main % WARNING: Found orphan containers (sd-old-port-redirect) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Starting sd ... error

ERROR: for sd Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown

ERROR: for stability-ai Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown
ERROR: Encountered errors while bringing up the project.

[1] + exit 1 docker-compose up
(base) imac27 stable-diffusion-ui-main % --remove-orphan
zsh: command not found: --remove-orphan
(base) imac27 stable-diffusion-ui-main %

////////////////////////

Image Browser / Gallery

I like the idea of an image browser for the images generated using the tool, which displays their metadata, with maybe a button to re-generate a particular old image (by setting the UI with that image's config).

#29 (comment)

Linux fail to start uvicorn

I think I found an error in on_sd_start.sh that makes the uvicorn init fail.
I tried it on my Debian 11 server; the installation goes fine until the script is called. On line 78 the uvicorn server is started, and it raises a traceback because the module fastapi is not found.
The path in the traceback turns out to be:

"$INSTALL_DIR/stable-diffusion-ui/stable-diffusion/../ui"

In fact, on line 16 the script moves inside stable-diffusion; then on line 76 a directory variable is defined:

export SD_UI_PATH=`pwd`/../ui

but that saves:

"$CURRENT_DIR_PATH/../ui"

instead of the intended:

"$CURRENT_DIR_PATH/ui"

which is where server.py is located.

A simple solution would be to add a cd command before exporting the variable:

cd ..

export SD_UI_PATH=`pwd`/ui

I tried to fix it locally, but on_sd_start.sh is pulled from the repository at init, and launching it alone makes other things fail. I hope this is helpful; congrats on the beautiful work.

RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

Installation was successful, UI works, but when I try to generate an image I get the error

RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

Here is the output:

Traceback (most recent call last):
  File "C:\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 85, in image
    res: Response = runtime.mk_img(r)
  File "C:\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 121, in mk_img
    x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc)
  File "C:\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 147, in _txt2img
    samples_ddim, _ = sampler.sample(S=opt_ddim_steps,
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 97, in sample
    samples, intermediates = self.plms_sampling(conditioning, size,
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 152, in plms_sampling
    outs = self.p_sample_plms(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 218, in p_sample_plms
    e_t = get_model_output(x, t)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\plms.py", line 185, in get_model_output
    e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\models\diffusion\ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "c:\stable-diffusion-ui\stable-diffusion\ldm\modules\diffusionmodules\openaimodel.py", line 87, in forward
    x = layer(x)
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\conv.py", line 447, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\stable-diffusion-ui\stable-diffusion\env\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Unable to find a valid cuDNN algorithm to run convolution

Better scheme for saving images to the disk

There's a good PR for this (#29); the current approach saves by using the prompt as the folder name. The scheme I've implemented isn't very good, because prompts change a lot during a single session, and it's annoying to have one session's images split across many folders.

So #29 has a good suggestion, and there are some suggestions on Discord. Starting this issue to track the suggestions and arrive at a design that's better than the current one.
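One possible scheme, as a sketch only (not the project's actual design — `make_session_dir` and `next_image_path` are hypothetical names): one folder per UI session, named by the session's start time, with sequentially numbered images inside, so a session's output stays together regardless of how often the prompt changes:

```python
import os
import time

def make_session_dir(base, session_start=None):
    """One folder per UI session, named by the session's start time,
    instead of one folder per prompt."""
    stamp = time.strftime("%Y-%m-%d_%H-%M-%S", time.localtime(session_start or time.time()))
    path = os.path.join(base, stamp)
    os.makedirs(path, exist_ok=True)
    return path

def next_image_path(session_dir, ext=".png"):
    """Number images sequentially inside the session folder: 0001.png, 0002.png, ..."""
    count = len([f for f in os.listdir(session_dir) if f.endswith(ext)])
    return os.path.join(session_dir, f"{count + 1:04d}{ext}")
```

The prompt and settings could then go into a sidecar .txt file (as today) or into the file name itself, without affecting where the image lands.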

ModuleNotFoundError: No module named 'ldm'

I installed stable-diffusion-ui (v2) yesterday and it worked first time, no problems. I generated loads of images.

I've just run it again today and I get this exception which stops images from being generated:

conda 4.14.0
git version 2.34.1.windows.1

(installer) C:\stable-diffusion-ui\installer\etc\conda\activate.d>cd C:\stable-diffusion-ui\installer\..\scripts

(installer) C:\stable-diffusion-ui\scripts>on_env_start.bat

"Stable Diffusion UI"

"Stable Diffusion UI's git repository was already installed. Updating.."
HEAD is now at 90c4361 Update README.md
Already up to date.
sd-ui-files\ui\index.html
sd-ui-files\ui\modifiers.json
sd-ui-files\ui\server.py
sd-ui-files\ui\media\ding.mp3
sd-ui-files\ui\sd_internal\runtime.py
sd-ui-files\ui\sd_internal\__init__.py
6 File(s) copied
sd-ui-files\scripts\on_env_start.bat
sd-ui-files\scripts\on_env_start.sh
sd-ui-files\scripts\on_sd_start.bat
sd-ui-files\scripts\on_sd_start.sh
sd-ui-files\scripts\post_activate.bat
sd-ui-files\scripts\post_activate.sh
sd-ui-files\scripts\Start Stable Diffusion UI.cmd
sd-ui-files\scripts\start.sh
sd-ui-files\scripts\win_enable_long_filepaths.ps1
9 File(s) copied
"Stable Diffusion's git repository was already installed. Updating.."
HEAD is now at 1857272 Create FUNDING.yml
Already up to date.
"Packages necessary for Stable Diffusion were already installed"
"Packages necessary for Stable Diffusion UI were already installed"
"Data files (weights) necessary for Stable Diffusion were already downloaded"

"Stable Diffusion is ready!"

started in  C:\stable-diffusion-ui\stable-diffusion
←[32mINFO←[0m:     Started server process [←[36m3928←[0m]
←[32mINFO←[0m:     Waiting for application startup.
←[32mINFO←[0m:     Application startup complete.
←[32mINFO←[0m:     Uvicorn running on ←[1mhttp://[0.0.0.0:9000](http://0.0.0.0:9000/)←[0m (Press CTRL+C to quit)
←[32mINFO←[0m:     [127.0.0.1:64582](http://127.0.0.1:64582/) - "←[1mGET / HTTP/1.1←[0m" ←[32m200 OK←[0m
←[32mINFO←[0m:     [127.0.0.1:64582](http://127.0.0.1:64582/) - "←[1mGET /output_dir HTTP/1.1←[0m" ←[32m200 OK←[0m
Traceback (most recent call last):
  File "C:\stable-diffusion-ui\stable-diffusion\..\ui\server.py", line 63, in ping
    from sd_internal import runtime
  File "C:\stable-diffusion-ui\stable-diffusion\..\ui\sd_internal\runtime.py", line 14, in <module>
    from ldm.util import instantiate_from_config
ModuleNotFoundError: No module named 'ldm'

The web UI works fine, just not the image generation.

Outpainting doesn't work

I tried to generate an up-scaled image by placing the right piece of a pre-generated image on the left, hoping the model would fill in the rest.

I used this image:
(image attached)

I tried putting a mask with black on the left, on the right, and no mask at all, but all of the options only generate a colorful image on the left side, keeping the right side fully black, like this:

(image attached)

cannot start up docker container

build was successful
using windows 10, docker-compose version 1.29.2, build 5becea4c

after running docker-compose up

Starting sd ... error

ERROR: for sd  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown

ERROR: for stability-ai  Cannot start service stability-ai: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: signal: segmentation fault, stdout: , stderr:: unknown
ERROR: Encountered errors while bringing up the project.

Docker-compose is deprecated

Minor issue with installation instructions: docker-compose is (supposedly) deprecated in favor of docker compose ("v2").

On my system (ubuntu 22.04), docker-compose installed from official ppa is of insufficient version 1.25.4, while docker compose returns v2.6.0. This means that docker-compose up& doesn't work, while docker compose up& does.

Error installing on Ubuntu Linux under a Mac environment via VirtualBox

InvalidArchiveError("Error with archive /home/jimq/stable-diffusion-ui/installer/pkgs/cudatoolkit-11.3.1-h2bc3f7f_2/.cph_tmphordd_nn/pkg-cudatoolkit-11.3.1-h2bc3f7f_2.tar.zst. You probably need to delete and re-download or re-create this file. Message from libarchive was:\n\nFailed to create dir 'bin'")

Error installing the packages necessary for Stable Diffusion. Please try re-running this installer. If it doesn't work, please copy the messages in this window, and ask the community at https://discord.com/invite/u9yhsFmEkB or file an issue at https://github.com/cmdr2/stable-diffusion-ui/issues

Thank you

Variation of Image but in style of another image

I have not yet loaded this tool (waiting for mac version). Based on the UI screenshot I see that it's possible to specify an artist as a style influence, but what about providing an image as the style influence?

Ideally this would be possible for both prompt mode and variation (of provided image) mode.

Version 2 - Development

A development version of v2 is available for Windows 10/11 and Linux. Experimental support for Mac will be added soon.

The instructions for installing are at: https://github.com/cmdr2/stable-diffusion-ui/blob/v2/README.md#installation

It is not a binary, and the source code used for building this is open at https://github.com/cmdr2/stable-diffusion-ui/tree/v2

What is this?

This version is a 1-click installer. You don't need WSL or Docker or Python or anything beyond a working NVIDIA GPU with an updated driver. You don't need to use the command-line at all.

It'll download the necessary files from the original Stable Diffusion git repository, and set it up. It'll then start the browser-based interface like before.

An NSFW option is present in the interface, for those people who are unable to run their prompts without hitting the NSFW filter incorrectly.

Is it stable?

It has run successfully for a number of users, but I would love to know if it works on more computers. Please let me know if it works or fails in this thread, it'll be really helpful! Thanks :)

PS: There's a new Discord server for support and development discussions: https://discord.com/invite/u9yhsFmEkB . Please join in for faster discussion and feedback on v2.

[feature request] Save button for images and set a random name to it

Randomize the newly created image's name, so that the filename is unique after you right-click and press 'Save'. Right now it always shows 'index.png' or some other default.

Probably need to set this inside the base64 value.

Additionally, a save button would be good, obviously.

Make automatic update an option

I think that currently, when you run the start script, it automatically pulls. That feels like a risky proposition: in the event of a bad commit, you run the risk of mangling everyone's install with a bad update and saddling yourself with a pile of "mine no longer works!" mails.

If it's an option, like a switch on the start script, it also gives folks the choice of not updating if they have a working setup they're happy with and don't want the changes.
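A minimal sketch of how such a switch could look. `SD_UI_SKIP_UPDATE` is a hypothetical environment variable, and the real start scripts are batch/shell, so this Python version is only illustrative:

```python
import os
import subprocess

def maybe_update(repo_dir):
    """Pull updates only when the user hasn't opted out.

    SD_UI_SKIP_UPDATE is a hypothetical switch; the real scripts would
    check a flag or config entry of their own choosing."""
    if os.environ.get("SD_UI_SKIP_UPDATE", "0") == "1":
        print("Skipping auto-update (SD_UI_SKIP_UPDATE=1)")
        return False
    subprocess.run(["git", "-C", repo_dir, "pull"], check=True)
    return True
```

Defaulting to "update" keeps the current behavior for everyone who doesn't set the flag.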

File names don't have prompt in them.

UUID file names make experimenting cumbersome: they mean you need to do a lot of manual note-taking and organization to keep things straight. Baking the prompt and relevant values into the file name would allow this organization to happen after the fact, and a lot more rapidly, leading to easier sharing and experimentation.
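A sketch of what baking the prompt and settings into the name could look like. `image_filename` is a hypothetical helper, and the exact fields (seed, steps) are just examples of "relevant values":

```python
import re

def image_filename(prompt, seed, steps, ext=".png", max_len=80):
    """Bake a sanitized prompt plus the settings needed to reproduce
    the image into the file name itself."""
    slug = re.sub(r"[^a-zA-Z0-9]+", "_", prompt).strip("_")[:max_len]
    return f"{slug}_seed{seed}_steps{steps}{ext}"

# e.g. image_filename("a photograph of an astronaut riding a horse", 42, 50)
# -> "a_photograph_of_an_astronaut_riding_a_horse_seed42_steps50.png"
```

Truncating the slug keeps very long prompts within Windows path limits; a short hash suffix could be added if two truncated prompts collide.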

Troubleshooting Guide in README.md

There are some common issues I see in the support channel on Discord. I think an addendum to the README listing common issues (e.g. the Windows path length limit) and how to solve them would be a great addition, maybe in an aptly named, dedicated section called "FAQ" or "Common Issues".

conda: command not found

Hi, I cloned the repo on a GCP VM and followed the installation steps.

I'm getting the following error when running build.sh (start.sh doesn't exist on the main branch anymore):

logs
sudo bash ./build.sh

Downloading components for the installer..
./build.sh: line 7: /root/miniconda3/etc/profile.d/conda.sh: No such file or directory
./build.sh: line 9: conda: command not found
./build.sh: line 11: conda: command not found
./build.sh: line 12: conda: command not found
Creating a distributable package..
./build.sh: line 16: conda: command not found
mkdir: cannot create directory ‘installer’: File exists
tar: ../../installer.tar: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
mkdir: cannot create directory ‘scripts’: File exists
Build ready. Zip the 'dist/stable-diffusion-ui' folder.
Cleaning up..
rm: cannot remove 'installer.tar': No such file or directory

**~/stable-diffusion-ui$ uname -a**
Linux instance-2 4.19.0-21-cloud-amd64 #1 SMP Debian 4.19.249-2 (2022-06-30) x86_64 GNU/Linux

specs: (screenshot attached)

[Suggestion] Add elapsed time counter

A time counter that counts upwards while waiting for the generated image.
Optionally, also use the last (or mean) generation time to count downwards while waiting. Maybe not 100% reliable, but still an idea. (Personally, I think an upwards-counting clock is enough.)
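A sketch of the countdown idea, using the mean of previous generation times; `estimate_remaining` is a hypothetical helper, not existing project code:

```python
def estimate_remaining(past_durations, elapsed):
    """Countdown estimate from the mean of previous generation times (seconds).

    Returns None when there's no history yet, so the UI can fall back
    to the plain upwards-counting clock."""
    if not past_durations:
        return None
    mean = sum(past_durations) / len(past_durations)
    return max(mean - elapsed, 0.0)
```

Clamping at zero avoids showing a negative countdown when a generation runs longer than the historical mean.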
