
comfyui's Introduction


AI-Dock + ComfyUI Docker Image

Run ComfyUI in a cloud-first AI-Dock container.

Note

These images do not bundle models or third-party configurations. You should use a provisioning script to automatically configure your container. You can find examples in config/provisioning.

Documentation

All AI-Dock containers share a common base which is designed to make running on cloud services such as vast.ai and runpod.io as straightforward and user-friendly as possible.

Common features and options are documented in the base wiki, but any features unique to this image are detailed below.

Version Tags

The :latest tag points to :latest-cuda and corresponds to a stable, tested version. More recent builds may be available.

Tags follow these patterns:

CUDA
  • :cuda-[x.x.x-base|runtime]-[ubuntu-version]
ROCm
  • :rocm-[x.x.x-runtime]-[ubuntu-version]
CPU
  • :cpu-[ubuntu-version]

Browse here for an image suitable for your target environment.

Supported Platforms: NVIDIA CUDA, AMD ROCm, CPU

Additional Environment Variables

Variable            Description
AUTO_UPDATE         Update ComfyUI on startup (default false)
COMFYUI_BRANCH      ComfyUI branch/commit hash for auto update (default master)
COMFYUI_FLAGS       Startup flags, e.g. --gpu-only --highvram
COMFYUI_PORT_HOST   ComfyUI interface port (default 8188)
COMFYUI_URL         Override $DIRECT_ADDRESS:port with a URL for ComfyUI

See the base environment variables here for more configuration options.

Additional Micromamba Environments

Environment   Packages
comfyui       ComfyUI and dependencies

This micromamba environment will be activated on shell login.

See the base micromamba environments here.

Additional Services

The following services will be launched alongside the default services provided by the base image.

ComfyUI

The service will launch on port 8188 unless you have specified an override with COMFYUI_PORT_HOST.

ComfyUI will be updated to the latest version on container start. You can pin the version to a branch or commit hash by setting the COMFYUI_BRANCH variable.

You can set startup flags using the COMFYUI_FLAGS variable.

To manage this service you can use supervisorctl [start|stop|restart] comfyui.

ComfyUI RP API

This service is available on port 8188 and is used to test the RunPod serverless API.

You can access the API directly at /rp-api/runsync or you can use the Swagger/OpenAPI playground at /rp-api.

There are several example payloads included in this repository.
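As a rough illustration (not an official client), a payload could be posted to the synchronous endpoint with Python; the payload file name below is a placeholder, and bear in mind that services are password protected by default:

    import json
    import requests

    # Illustrative sketch only: post one of the repository's example payloads to
    # the synchronous endpoint on the ComfyUI port. "example_payload.json" is a
    # placeholder for any of the example payload files mentioned above.
    with open("example_payload.json") as f:
        payload = json.load(f)

    response = requests.post(
        "http://localhost:8188/rp-api/runsync",  # port 8188 unless COMFYUI_PORT_HOST overrides it
        json=payload,
        timeout=600,
    )
    print(response.status_code)
    print(response.json())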

This API is available on all platforms, but the container can only run in serverless mode on RunPod infrastructure.

To learn more about the serverless API, see the serverless section below.

API Playground

Note

All services are password protected by default. See the security and environment variables documentation for more information.

Pre-Configured Templates

Vast.ai


Runpod.io


RunPod Serverless

The container can be used as a RunPod serverless worker. To enable serverless mode you must run the container with environment variables SERVERLESS=true and WORKSPACE=/runpod-volume.

The handlers will accept a job, process it, and upload your images to S3-compatible storage.

You may either set your S3 credentials as environment variables or pass them to the worker in the payload.

You should set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_ENDPOINT_URL and AWS_BUCKET_NAME.

Serverless template example

If passed in the payload, these variables should be in lowercase.
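For example, the payload-level credentials might look like the snippet below; the key names are assumed to be the lowercase counterparts of the environment variables above, so verify them against the serverless schema before relying on them:

    # Hypothetical illustration: lowercase S3 fields carried inside a job payload.
    s3_payload_fields = {
        "aws_access_key_id": "AKIA...",                               # assumed counterpart of AWS_ACCESS_KEY_ID
        "aws_secret_access_key": "...",                               # assumed counterpart of AWS_SECRET_ACCESS_KEY
        "aws_endpoint_url": "https://s3.eu-central-1.amazonaws.com",  # counterpart of AWS_ENDPOINT_URL
        "aws_bucket_name": "my-bucket",                               # assumed counterpart of AWS_BUCKET_NAME
    }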

Incorrect or unset S3 credentials will not result in job failure. You can still retrieve your images from the network volume.

When used in serverless mode, the container will skip provisioning and will not update ComfyUI or the nodes on start, so you must either ensure everything you need is built into the image (see Building the Image) or first run the container with a network volume in GPU Cloud to get everything set up before launching your workers.

After launching a serverless worker, any instances of the container launched on the network volume in GPU cloud will also skip auto-updating. All updates must be done manually.

The API is documented in OpenAPI format. You can test it in a running container on the ComfyUI port at /rp-api/docs - see ComfyUI RP API for more information.


The API can use multiple handlers, which you may define in the payload. Three handlers have been included for your convenience.

Handler: RawWorkflow

This handler should be passed a full ComfyUI workflow in the payload. It will detect any URLs, download the files into the input directory, and replace each URL value with the local path of the resource. This is very useful when working with image-to-image and ControlNets.

This is the most flexible of all handlers.
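As a rough sketch of what such a request might contain (the RawWorkflow schema and example payload linked below are authoritative; the key names here are assumptions):

    # Hypothetical shape of a RawWorkflow request. Any URL values inside the
    # workflow (for example a source image for image-to-image or a ControlNet
    # input) would be downloaded to the input directory and rewritten to local
    # paths by the handler, as described above.
    raw_workflow_request = {
        "input": {
            "handler": "RawWorkflow",
            "workflow_json": {
                # full ComfyUI workflow graph goes here
            },
        }
    }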

RawWorkflow schema

Example payload

Handler: Text2Image

This is a basic handler that is bound to a static workflow file (/opt/serverless/workflows/text2image.json).

You can define several overrides to modify the workflow before processing.

Text2Image schema

Example payload

Handler: Image2Image

This is a basic handler that is bound to a static workflow file (/opt/serverless/workflows/image2image.json).

You can define several overrides to modify the workflow before processing.

Image2Image schema

Example payload

These handlers demonstrate how you can create a simple endpoint which will require very little frontend work to implement.

You can find example payloads for these handlers here.


The author (@robballantyne) may be compensated if you sign up to services linked in this document. Testing multiple variants of GPU images in many different environments is both costly and time-consuming; this helps to offset those costs.

comfyui's People

Contributors

goroshevsky, robballantyne

comfyui's Issues

AWS S3 FS with Fuse

Hi,

I've managed to run this on an AWS EC2 instance with a T4 GPU.
I'd like to launch a more powerful instance for quicker video generation; however, I'd like to maintain a single setup at the lowest possible cost.
My first idea was to keep all the workspace data in S3, mounted into the instances using a FUSE S3 filesystem. I began to implement it:

  1. mounted the S3 bucket
  2. rsynced the workspace to the bucket
  3. removed the original workspace
  4. symlinked the S3 mount point to where the original workspace had been

ComfyUI kept working, and I was very happy.

The problems arose when I re-created the instance, mounted the S3 bucket, and then ran docker run with the workspace pointed at the S3 bucket where the workspace had previously been created. First I got ACL issues; then, after switching that off via an environment variable, removing the container, and re-running docker run, I realised that a new directory called ComfyUI-link had been created - that might have been due to disabling WORKSPACE_SYNC. At this point I decided to turn to the GitHub issues page.

The question is: is it possible to somehow use an S3 bucket for the ComfyUI configuration, including all the models? What needs to be taken into account? And why is there a ComfyUI-link directory?

Thank you!

Video2Video on serverless API (Runpod)

I'm trying to achieve a face swap on the serverless API. The API has payload documentation for Image2Image, but is Video2Video not supported, or do I have to define the workflow myself?

[Bug] ComfyUI restarting / not working correctly

Using the shipped docker-compose file, the container does not seem to start correctly.
Container logs spam the following lines over and over:

...
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:49:38,998 INFO spawned: 'comfyui' with pid 514
comfyui-supervisor-1  | 2023-11-13 13:49:39,021 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:50:40,388 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:50:40,388 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 13:50:41,004 INFO spawned: 'comfyui' with pid 525
comfyui-supervisor-1  | 2023-11-13 13:50:41,036 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:50:41,004 INFO spawned: 'comfyui' with pid 525
comfyui-supervisor-1  | 2023-11-13 13:50:41,036 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
...

When trying to access DOCKERMACHINE:8188, Caddy seems to work, but the container then spits out the following error:

comfyui-supervisor-1  | {"level":"error","ts":1699883724.371419,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"ACCESSMACHINE","remote_port":"48944","client_ip":"ACCESSMACHINE","proto":"HTTP/1.1","method":"GET","host":"DOCKERMACHINE:8188","uri":"/","headers":{"Authorization":[],"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate"],"Dnt":["1"],"Connection":["keep-alive"]}},"duration":1.203366855,"status":502,"err_id":"admr5db0z","err_trace":"reverseproxy.statusError (reverseproxy.go:1265)"}

A full log can be found underneath:

Full log
comfyui-supervisor-1  | You have no configured rclone remotes to be mounted
comfyui-supervisor-1  | Looking for config.sh...
comfyui-supervisor-1  | Not found
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/caddy.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/cloudflared.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/comfyui.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/logtail.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/logviewer.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/quicktunnel.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/rclone_mount.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serverless.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serviceportal.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/sshd.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Set uid to user 0 succeeded
comfyui-supervisor-1  | 2023-11-13 13:55:00,642 INFO RPC interface 'supervisor' initialized
comfyui-supervisor-1  | 2023-11-13 13:55:00,642 INFO supervisord started with pid 107
comfyui-supervisor-1  | 2023-11-13 13:55:01,644 INFO spawned: 'logtail' with pid 109
comfyui-supervisor-1  | 2023-11-13 13:55:01,645 INFO spawned: 'serverless' with pid 110
comfyui-supervisor-1  | 2023-11-13 13:55:01,645 INFO spawned: 'serviceportal' with pid 111
comfyui-supervisor-1  | 2023-11-13 13:55:01,646 INFO spawned: 'sshd' with pid 112
comfyui-supervisor-1  | 2023-11-13 13:55:01,646 INFO spawned: 'caddy' with pid 114
comfyui-supervisor-1  | 2023-11-13 13:55:01,647 INFO spawned: 'logviewer' with pid 115
comfyui-supervisor-1  | 2023-11-13 13:55:01,647 INFO spawned: 'comfyui' with pid 118
comfyui-supervisor-1  | Starting logtail service...
comfyui-supervisor-1  | 2023-11-13 13:55:01,647 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:03,647 INFO success: serverless entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
comfyui-supervisor-1  | Gathering logs...==> /var/log/config.log <==
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/sync.log <==
comfyui-supervisor-1  | Mamba environments already present at /workspace/
comfyui-supervisor-1  | Linking mamba environments to /opt...
comfyui-supervisor-1  | Creating symlink to /workspace/ComfyUI at /opt/ComfyUI
comfyui-supervisor-1  | Creating symlink to /workspace/serverless at /opt/serverless
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/preflight.log <==
comfyui-supervisor-1  | Looking for preflight.sh...
comfyui-supervisor-1  | Updating ComfyUI (master)...
comfyui-supervisor-1  | Already on 'master'
comfyui-supervisor-1  | Your branch is up to date with 'origin/master'.
comfyui-supervisor-1  | Already up to date.
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/debug.log <==
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/provisioning.log <==
comfyui-supervisor-1  | Looking for provisioning.sh...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ##############################################
comfyui-supervisor-1  | #                                            #
comfyui-supervisor-1  | #          Provisioning container            #
comfyui-supervisor-1  | #                                            #
comfyui-supervisor-1  | #         This will take some time           #
comfyui-supervisor-1  | #                                            #
comfyui-supervisor-1  | # Your container will be ready on completion #
comfyui-supervisor-1  | #                                            #
comfyui-supervisor-1  | ##############################################
comfyui-supervisor-1  |
comfyui-supervisor-1  | Updating node: https://github.com/ltdrdata/ComfyUI-Manager...
comfyui-supervisor-1  | Already up to date.
comfyui-supervisor-1  | Success
comfyui-supervisor-1  | Downloading 3 model(s) to /opt/ComfyUI/models/checkpoints...
comfyui-supervisor-1  | Downloading: https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading 4 model(s) to /opt/ComfyUI/models/controlnet...
comfyui-supervisor-1  | Downloading: https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_canny-fp16.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/control_openpose-fp16.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/t2iadapter_canny-fp16.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/caddy.log <==
comfyui-supervisor-1  | {"level":"info","ts":1699883703.7733493,"msg":"using provided configuration","config_file":"/opt/caddy/etc/Caddyfile","config_adapter":""}
comfyui-supervisor-1  | {"level":"warn","ts":1699883703.774483,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
comfyui-supervisor-1  | {"level":"warn","ts":1699883703.7744896,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
comfyui-supervisor-1  | {"level":"warn","ts":1699883703.7744915,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv2"}
comfyui-supervisor-1  | {"level":"warn","ts":1699883703.7744927,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv3"}
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Waiting for workspace sync...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/serverless.log <==
comfyui-supervisor-1  | Refusing to start serverless worker without $SERVERLESS=true
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/serviceportal.log <==
comfyui-supervisor-1  | Starting Service Portal...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/sshd.log <==
comfyui-supervisor-1  | /root/.ssh/authorized_keys is not a public key file.
comfyui-supervisor-1  | Skipping SSH server: No public key
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/caddy.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/cloudflared.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/comfyui.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/logtail.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/logviewer.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/quicktunnel.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/rclone_mount.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serverless.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/serviceportal.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Included extra file "/etc/supervisor/supervisord/conf.d/sshd.conf" during parsing
comfyui-supervisor-1  | 2023-11-13 13:55:00,641 INFO Set uid to user 0 succeeded
comfyui-supervisor-1  | 2023-11-13 13:55:00,642 INFO RPC interface 'supervisor' initialized
comfyui-supervisor-1  | 2023-11-13 13:55:00,642 INFO supervisord started with pid 107
comfyui-supervisor-1  | 2023-11-13 13:55:01,644 INFO spawned: 'logtail' with pid 109
comfyui-supervisor-1  | 2023-11-13 13:55:01,645 INFO spawned: 'serverless' with pid 110
comfyui-supervisor-1  | 2023-11-13 13:55:01,645 INFO spawned: 'serviceportal' with pid 111
comfyui-supervisor-1  | 2023-11-13 13:55:01,646 INFO spawned: 'sshd' with pid 112
comfyui-supervisor-1  | 2023-11-13 13:55:01,646 INFO spawned: 'caddy' with pid 114
comfyui-supervisor-1  | 2023-11-13 13:55:01,647 INFO spawned: 'logviewer' with pid 115
comfyui-supervisor-1  | 2023-11-13 13:55:01,647 INFO spawned: 'comfyui' with pid 118
comfyui-supervisor-1  | 2023-11-13 13:55:01,647 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:03,647 INFO success: serverless entered RUNNING state, process has stayed up for > than 2 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/provisioning.log <==
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/webui/ControlNet-modules-safetensors/resolve/main/t2iadapter_openpose-fp16.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading 3 model(s) to /opt/ComfyUI/models/vae...
comfyui-supervisor-1  | Downloading: https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading 3 model(s) to /opt/ComfyUI/models/upscale_models...
comfyui-supervisor-1  | Downloading: https://huggingface.co/ai-forever/Real-ESRGAN/resolve/main/RealESRGAN_x4.pth
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri/resolve/main/4x_foolhardy_Remacri.pth
comfyui-supervisor-1  |
comfyui-supervisor-1  | Downloading: https://huggingface.co/Akumetsu971/SD_Anime_Futuristic_Armor/resolve/main/4x_NMKD-Siax_200k.pth
comfyui-supervisor-1  |
comfyui-supervisor-1  |
comfyui-supervisor-1  | Provisioning complete:  Web UI will start now
comfyui-supervisor-1  |
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: logtail entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: serviceportal entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: sshd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: caddy entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: logviewer entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: logtail entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: serviceportal entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: sshd entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: caddy entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:06,649 INFO success: logviewer entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:55:11,647 INFO exited: serverless (exit status 0; expected)
comfyui-supervisor-1  | 2023-11-13 13:55:11,649 INFO exited: sshd (exit status 0; expected)
comfyui-supervisor-1  | 2023-11-13 13:55:11,647 INFO exited: serverless (exit status 0; expected)
comfyui-supervisor-1  | 2023-11-13 13:55:11,649 INFO exited: sshd (exit status 0; expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/caddy.log <==
comfyui-supervisor-1  | {"level":"error","ts":1699883724.371419,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"ACCESSMACHINE","remote_port":"48944","client_ip":"ACCESSMACHINE","proto":"HTTP/1.1","method":"GET","host":"DOCKERMACHINE:8188","uri":"/","headers":{"Authorization":[],"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate"],"Dnt":["1"],"Connection":["keep-alive"]}},"duration":1.203366855,"status":502,"err_id":"admr5db0z","err_trace":"reverseproxy.statusError (reverseproxy.go:1265)"}
comfyui-supervisor-1  | 2023-11-13 13:56:06,635 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 13:56:06,636 INFO spawned: 'comfyui' with pid 395
comfyui-supervisor-1  | 2023-11-13 13:56:06,650 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/serviceportal.log <==
comfyui-supervisor-1  | INFO:     Started server process [111]
comfyui-supervisor-1  | INFO:     Waiting for application startup.
comfyui-supervisor-1  | INFO:     Application startup complete.
comfyui-supervisor-1  | INFO:     Uvicorn running on http://127.0.0.1:11111 (Press CTRL+C to quit)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:56:06,635 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 13:56:06,636 INFO spawned: 'comfyui' with pid 395
comfyui-supervisor-1  | 2023-11-13 13:56:06,650 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:57:08,255 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:57:08,255 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 13:57:08,665 INFO spawned: 'comfyui' with pid 406
comfyui-supervisor-1  | 2023-11-13 13:57:08,685 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:57:08,665 INFO spawned: 'comfyui' with pid 406
comfyui-supervisor-1  | 2023-11-13 13:57:08,685 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:58:10,027 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:58:10,027 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 13:58:10,672 INFO spawned: 'comfyui' with pid 417
comfyui-supervisor-1  | 2023-11-13 13:58:10,690 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:58:10,672 INFO spawned: 'comfyui' with pid 417
comfyui-supervisor-1  | 2023-11-13 13:58:10,690 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 13:59:12,000 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:59:12,000 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 13:59:12,679 INFO spawned: 'comfyui' with pid 428
comfyui-supervisor-1  | 2023-11-13 13:59:12,700 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 13:59:12,679 INFO spawned: 'comfyui' with pid 428
comfyui-supervisor-1  | 2023-11-13 13:59:12,700 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 14:00:13,911 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 14:00:13,911 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 14:00:14,685 INFO spawned: 'comfyui' with pid 439
comfyui-supervisor-1  | 2023-11-13 14:00:14,707 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 14:00:14,685 INFO spawned: 'comfyui' with pid 439
comfyui-supervisor-1  | 2023-11-13 14:00:14,707 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 14:01:15,701 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 14:01:15,701 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 14:01:16,692 INFO spawned: 'comfyui' with pid 450
comfyui-supervisor-1  | 2023-11-13 14:01:16,718 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 14:01:16,692 INFO spawned: 'comfyui' with pid 450
comfyui-supervisor-1  | 2023-11-13 14:01:16,718 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  | 2023-11-13 14:02:18,180 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  | Success
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 14:02:18,180 INFO exited: comfyui (exit status 1; not expected)
comfyui-supervisor-1  | 2023-11-13 14:02:18,698 INFO spawned: 'comfyui' with pid 461
comfyui-supervisor-1  | 2023-11-13 14:02:18,720 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1  | Starting ComfyUI...
comfyui-supervisor-1  |
comfyui-supervisor-1  | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1  | 2023-11-13 14:02:18,698 INFO spawned: 'comfyui' with pid 461
comfyui-supervisor-1  | 2023-11-13 14:02:18,720 INFO success: comfyui entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)

Any way to further diagnose the problem?

Not all paths are mapped in comfyui:pytorch-2.2.0-py3.10-cuda-12.1.0-runtime-22.04

I have pulled and tested comfyui:pytorch-2.2.0-py3.10-cuda-12.1.0-runtime-22.04 today.
I found that not all paths are mapped in mappings.sh.
Every time the container is restarted, some paths will randomly become unmapped.
For example, this time it is the checkpoints directory, and the next time it is the controlnet directory.

But everything works fine in comfyui:pytorch-2.1.1-py3.10-cuda-12.1.0-base-22.04.

Error while following basic setup > "mounted on / but it is not a shared mount"

I am getting the following error while running docker compose up, even without setting any .env values or modifying anything in docker-compose.yml.

$ docker compose up -d
[+] Running 2/2
 ✔ Network comfyui_default         Created                                                                                                                                   0.1s
 ✔ Container comfyui-supervisor-1  Created                                                                                                                                   0.1s
Error response from daemon: path /home/ubuntu/comfyui/workspace is mounted on / but it is not a shared mount

I tried running some commands like the following, without any success:
sudo mount --make-shared /volume1/

This is an EC2 g4dn.xlarge instance running Ubuntu 22.04.
Let me know if any more info is needed. Need help!

Thanks

Runpod Not Saving workspace on restart

With other containers I've used, everything in the workspace persists between machine uptime and downtime. But that does not seem to be happening here: all custom nodes and models get wiped out, so maybe I've misconfigured something. I am not using a network volume in this case, so I turned off workspace sync in the environment.

`fatal: destination path 'ComfyUI' already exists and is not an empty directory` when running `set -eo pipefail && /opt/ai-dock/bin/build/layer0/init.sh`

AMD R4750G APU × 16GB UMA
Artix Linux
Latest ComfyUI dock Git trees
system/linux 6.7.4.artix1-1 [installed]
system/python 3.11.7-1 [installed]
world/comgr 6.0.0-1 [installed]
world/hsa-rocr 6.0.0-2 [installed]
world/rocm-device-libs 6.0.0-1 [installed]
extra/comgr 6.0.0-1 [installed]
extra/hsa-rocr 6.0.0-2 [installed]
extra/rocm-core 6.0.0-2 [installed]
extra/rocm-device-libs 6.0.0-1 [installed]
extra/rocm-language-runtime 6.0.0-1 [installed]
extra/rocm-opencl-runtime 6.0.0-1 [installed]
extra/rocm-opencl-sdk 6.0.0-1 [installed]

Caddy not starting

Hey,

I tried with the latest changes, but Caddy doesn't seem to be starting anymore:

comfyui-supervisor-1 | ==> /var/log/supervisor/comfyui.log <==
comfyui-supervisor-1 | xformers version: 0.0.22
comfyui-supervisor-1 | Set vram state to: HIGH_VRAM
comfyui-supervisor-1 | Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
comfyui-supervisor-1 | VAE dtype: torch.bfloat16
comfyui-supervisor-1 | Using xformers cross attention
comfyui-supervisor-1 | ### Loading: ComfyUI-Manager (V1.6.4)
comfyui-supervisor-1 | ### ComfyUI Revision: 1779 [9b655d4f] | Released on '2023-12-04'
comfyui-supervisor-1 |
comfyui-supervisor-1 | Import times for custom nodes:
comfyui-supervisor-1 |    0.1 seconds: /opt/ComfyUI/custom_nodes/ComfyUI-Manager
comfyui-supervisor-1 |
comfyui-supervisor-1 | Starting server
comfyui-supervisor-1 |
comfyui-supervisor-1 | To see the GUI go to: http://127.0.0.1:18188
comfyui-supervisor-1 | 2023-12-05 17:23:37,376 INFO exited: caddy (exit status 1; not expected)
comfyui-supervisor-1 |
comfyui-supervisor-1 | ==> /var/log/supervisor/caddy.log <==
comfyui-supervisor-1 | {"level":"info","ts":1701797017.3712878,"msg":"using provided configuration","config_file":"/opt/caddy/etc/Caddyfile","config_adapter":""}
comfyui-supervisor-1 | {"level":"warn","ts":1701797017.3735251,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
comfyui-supervisor-1 | {"level":"warn","ts":1701797017.3736062,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
comfyui-supervisor-1 | {"level":"warn","ts":1701797017.3736272,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv2"}
comfyui-supervisor-1 | {"level":"warn","ts":1701797017.373632,"logger":"http.auto_https","msg":"automatic HTTPS is completely disabled for server","server_name":"srv3"}
comfyui-supervisor-1 | Error: loading initial config: loading new config: http app module: start: listening on :11111: listen tcp :11111: bind: address already in use
comfyui-supervisor-1 |
comfyui-supervisor-1 | ==> /var/log/supervisor/supervisor.log <==
comfyui-supervisor-1 | 2023-12-05 17:23:37,376 INFO exited: caddy (exit status 1; not expected)
comfyui-supervisor-1 | 2023-12-05 17:23:38,362 INFO gave up: caddy entered FATAL state, too many start retries too quickly
comfyui-supervisor-1 | 2023-12-05 17:23:38,362 INFO gave up: caddy entered FATAL state, too many start retries too quickly

None of the ports ([1111:1111][2222:22][53682:53682][8188:8188][8888:8888]) respond.

Considering the port 11111 "bind: address already in use" error, I guess it's a conflict within the container itself and not the host system?

Building the docker image fails

Hi, I've cloned your repo and I'm trying to build the Docker image as-is. I'm using Ubuntu in WSL2, and I'm getting this error:

28.95 Processing triggers for man-db (2.10.2-1) ...
29.42 warning libmamba Cache file "/opt/micromamba/pkgs/cache/ee0ed9e9.json" was modified by another program
29.42 warning libmamba Cache file "/opt/micromamba/pkgs/cache/edb1952f.json" was modified by another program
29.42 warning libmamba Cache file "/opt/micromamba/pkgs/cache/c9ddbd6b.json" was modified by another program
29.42 warning libmamba Cache file "/opt/micromamba/pkgs/cache/b121c3e7.json" was modified by another program
29.42 warning libmamba Cache file "/opt/micromamba/pkgs/cache/497deca9.json" was modified by another program
29.42 warning libmamba Cache file "/opt/micromamba/pkgs/cache/09cdf8bf.json" was modified by another program
46.80 error libmamba Could not solve for environment specs
46.80 The following package could not be installed
46.80 └─ libglib 2.78.4 h4648e47_1 does not exist (perhaps a typo or a missing channel).
46.81 critical libmamba Could not solve for environment specs

Dockerfile:20

18 |
19 | ARG IMAGE_BASE
20 | >>> RUN set -eo pipefail && /opt/ai-dock/bin/build/layer0/init.sh | tee /var/log/build.log
21 |
22 | # Must be set after layer0

ERROR: failed to solve: process "/bin/bash -c set -eo pipefail && /opt/ai-dock/bin/build/layer0/init.sh | tee /var/log/build.log" did not complete successfully: exit code: 1
ERROR: Service 'supervisor' failed to build : Build failed

The issue comes from /build/COPY_ROOT/opt/ai-dock/bin/build/layer0/common.sh, specifically this line: $MAMBA_CREATE -n comfyui --file "${exported_env}". I can see in the error above that the library and version are present in the list of dependencies exported to create the comfyui env. I'm not that familiar with micromamba. If this is an easy fix it'd be great to get it fixed, thanks.

FFmpeg should be upgraded to the latest version

When I use the "Video Combine" node from Kosinkadink/ComfyUI-VideoHelperSuite, there is an error:

...
subprocess.CalledProcessError: Command '['/opt/micromamba/envs/comfyui/bin/ffmpeg'
...
Exception: An error occured in the ffmpeg subprocess:
Unrecognized option 'crf'
...

After I manually upgraded FFmpeg (ffmpeg version 4.4.2-0ubuntu0.22.04.1), it was fine.

How to set Python Package Index mirror?

I have changed the environment variable PIP_INSTALL="pip install --no-cache-dir -i https://mirror-site/pypi/simple".
When I manually run "micromamba -n comfyui run ${PIP_INSTALL} xxx", the index works.

But on "docker compose up" , the mirror index doesn't seem to work, the download speed is very slow

supervisor-1  | Downloading diffusers-0.26.3-py3-none-any.whl (1.9 MB)
supervisor-1  |    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9/1.9 MB 16.4 kB/s eta 0:00:00

Request for help with docker image with gfx803 support

Hello!

Pre-history:
I'm a newbie to Docker and AI stuff, so I haven't managed to do this by myself after three days of trying.
I have a Radeon RX 580, which is gfx803, and as far as I know ROCm dropped its official support in v4.0. Yet, according to information online, it's still possible to use at least ROCm 5.4.3. I used the Auto1111 webUI on Linux Mint and it worked. But now I'm on Arch and can't install either Auto1111 or ComfyUI. I found a webUI Docker image for gfx803, and it works, but it's built from the first release of the webUI and lacks most of the functionality. I also liked the idea of ComfyUI when I found it.

Request for help:
Can anyone help make a Docker image with the requirements needed for gfx803 support? It needs torch built with that flag enabled, while the pre-built packages are only available as .deb files, and I wasn't able to build it successfully. Raw ComfyUI without Docker starts on my PC, but prompts can't be run: with the current ROCm and torch it ends with a segmentation fault (the same happens with the Auto webUI without Docker).
With my still very limited knowledge I am unable to do it without help.

Will be very grateful if someone is able to help!

Runpod deployment is getting stuck and pods staying in throttled state

After following the guidance in the issue below, I changed IMAGE_BASE to ghcr.io/ai-dock/jupyter-pytorch:2.1.1-py3.10-cuda-11.8.0-base-22.04 after forking the repository, added my own models/custom nodes to COPY_ROOT_EXTRA, and triggered the GitHub pipeline to build the Docker images. I used the https://github.com/berkorg/comfy-docker/pkgs/container/comfy-docker/156936256?tag=pytorch-2.0.1-py3.10-cuda-11.8.0-base-22.04 image and created a new template in RunPod.

But in RunPod the endpoint cannot initialize itself and does not log anything either. It stays in the state below:

Screenshot 2023-12-12 at 03 32 05

Can you please help me out?

How to Sync with RunPod Network Volume

Hi, I'm deploying the RunPod image on RunPod's GPU Cloud with network storage.
But when I terminate the pod, the data doesn't persist on the network volume.
I don't understand how I can achieve this. I'm looking to use this same network volume, with all my models and custom_nodes pre-configured via ComfyUI-Manager, on the serverless cloud.

Maybe this is what WORKSPACE_SYNC=true does? Because I set it to false, but RunPod gets stuck on the sync.

Outputting base64 gives `can only concatenate str (not \"bytes\") to str` error

I have edited the build/COPY_ROOT/opt/serverless/handlers/basehandler.py file and used the existing image_to_base64 function to output base64 instead of uploading the generated image to an S3 bucket, but I am getting a can only concatenate str (not \"bytes\") to str error. Below is my code for the get_result function. Is there a problem with the base64 conversion?

    def get_result(self, job_id):
        result = requests.get(self.ENDPOINT_HISTORY).json()[self.comfyui_job_id]

        prompt = result["prompt"]
        outputs = result["outputs"]

        self.result = {
            "images": [],
            "timings": {}
        }
        
        custom_output_dir = f"{self.OUTPUT_DIR}{self.request_id}"
        os.makedirs(custom_output_dir, exist_ok = True)
        for item in outputs:
            if "images" in outputs[item]:
                for image in outputs[item]["images"]:
                    original_path = f"{self.OUTPUT_DIR}{image['subfolder']}/{image['filename']}"
                    new_path = f"{custom_output_dir}/{image['filename']}"
                    # Handle duplicated request where output file in not re-generated
                    if os.path.islink(original_path):
                        shutil.copyfile(os.path.realpath(original_path), new_path)
                    else:
                        os.rename(original_path, new_path)
                        os.symlink(new_path, original_path)
                    key = f"{self.request_id}/{image['filename']}"
                    self.result["images"].append({
                        "local_path": new_path,
                        "base64": self.image_to_base64(new_path),
                        # make this work first, then threads
                        # "url": self.s3utils.file_upload(new_path, key)
                    })
        
        self.job_time_completed = datetime.datetime.now()
        self.result["timings"] = {
            "job_time_received": self.job_time_received.ctime(),
            "job_time_queued": self.job_time_queued.ctime(),
            "job_time_processed": self.job_time_processed.ctime(),
            "job_time_completed": self.job_time_completed.ctime(),
            "job_time_total": (self.job_time_completed - self.job_time_received).seconds
        }

        return self.result

and this is the existing image_to_base64 function:

    def image_to_base64(self, path):
        with open(path, "rb") as f:
            b64 = (base64.b64encode(f.read()))
        return "data:image/png;charset=utf-8;base64, " + b64

ComfyUI commit SHA versioning

Would it be possible to include some way to pin the ComfyUI commit SHA in the build? Currently it seems the container clones the latest version of ComfyUI every time it starts, which may not be good for all use cases.

Happy to help contribute a PR after some discussion!

Swarm-compatible build

Hi there! I'm trying to build the container, tagged for my private registry, but I'm running into build issues. I'm still adding some echo blocks to the scripts to see where it's stopping, but it seems it's breaking in the nvidia.sh script.

During the build, there's no output past my echo statements:

34.45
34.45 Transaction finished
34.45
34.45 To activate this environment, use:
34.45
34.45     micromamba activate comfyui
34.45
34.45 Or to execute a single command in this environment, use:
34.45
34.45     micromamba run -n comfyui mycommand
34.45
34.48 Installing jupyter kernels...
34.48 Cloning ComfyUI...
34.48 Cloning into 'ComfyUI'...
35.32 Completed common.sh processes.
35.32 INIT::Running nvidia script...
35.32 NVIDIA::Installing ComfyUI...
35.32 PIP_INSTALL = pip install --no-cache-dir
35.32 PYTORCH_VERSION = 2.1.0
35.33 Success
XXXXX <--- I should see an echo for "Running update script..." here if it got through the micromamba command
------
Dockerfile:13
--------------------
  11 |
  12 |     ARG IMAGE_BASE
  13 | >>> RUN /opt/ai-dock/bin/build/layer0/init.sh
  14 |
  15 |     ENV OPT_SYNC=ComfyUI:serverless:$OPT_SYNC
--------------------
ERROR: failed to solve: process "/bin/bash -c /opt/ai-dock/bin/build/layer0/init.sh" did not complete successfully: exit code: 1

Broken SSH access

2024-02-29T07:51:19.551713395Z cp: not writing through dangling symlink '/workspace/home/user/.ssh'
2024-02-29T07:51:19.559099834Z chown: changing ownership of '/workspace/home/user/.ssh': Operation not permitted
2024-02-29T07:51:19.566694095Z chmod: cannot access '/workspace/home/user/.ssh/authorized_keys': No such file or directory

I have a pubkey configured in RunPod, but it won't let me in via SSH, giving the response above.
The public key was working a day ago; there was no need to change or affect it in any way (after startup, there is a correct authorized_keys file inside the container).

image
I don't know what log to look at for this situation.

I tested it on
comfyui:latest
comfyui:latest-cuda
webui:latest

I get the same error and problems with SSH access on each.
Only web SSH from RunPod works, and only once; after restarting the container I get Connection Closed.

PreviewImage nodes result in copy errors

When I post a workflow JSON that contains a PreviewImage node to the API (via /rp-api/runsync), BaseHandler is unable to copy a file:

DEBUG  | test-c12c72b8-f825-4c0c-916e-68d62d92296d | run_job return: {'error': "[Errno 2] No such file or directory: '/workspace//ComfyUI/output//ComfyUI_temp_mqplo_00001_.png' -> '/workspace//ComfyUI/output/test-c12c72b8-f825-4c0c-916e-68d62d92296d/ComfyUI_temp_mqplo_00001_.png'"}

This is because PreviewImage saves the image to the temp directory.

It is questionable whether temp images need to be returned at all. I guess it would be great if this could be configured.

For now I propose not iterating over images with type="temp", since otherwise the job fails; a sketch of this is shown below.
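A minimal sketch of that idea, assuming each image entry carries the "type" field mentioned above (the field name follows this issue, not a confirmed schema):

    def filter_saved_images(outputs):
        """Collect only images written to the real output directory, skipping
        entries ComfyUI marks as temporary (e.g. PreviewImage results)."""
        saved = []
        for node_id, node_output in outputs.items():
            for image in node_output.get("images", []):
                if image.get("type") == "temp":
                    continue  # temp images live in ComfyUI's temp dir and may be missing later
                saved.append(image)
        return saved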

How to use /rp-api/runsync in pod mode.

I deployed the image on RunPod by adding a GPU Pod.
I can access ComfyUI directly in the browser, but I was confused about how to access it in API mode.
The documentation contains the following statement. Could anyone kindly explain it, or provide some sample code for me?

ComfyUI RP API
This service is available on port 8188 and is used to test the [RunPod serverless](https://link.ai-dock.org/runpod-serverless) API.

You can access the API directly at /rp-api/runsync or you can use the Swagger/OpenAPI playground at /rp-api.

There are several [example payloads](https://github.com/ai-dock/comfyui/tree/main/build/COPY_ROOT/opt/serverless/docs/example_payloads) included in this repository.

Your help is most appreciated.

runpod serverless API result getting volume error

Hi :) First, thank you for a really great open-source project.
I am almost at the last step of setting up RunPod serverless, but I'm getting this error when I run the RunPod serverless API (see attached screenshot):
[Errno 2] No such file or directory: '/runpod-volume//ComfyUI/output//ComfyUI_temp_cprvf_00001_.png' -> '/runpod-volume//ComfyUI/output/5a59a89c-2f46-4df3-b387-98b19ff88777-e1/ComfyUI_temp_cprvf_00001_.png'

  • I've built a custom image based on this repository, followed this guide, and deployed it on RunPod serverless
  • this is my environment setting (AWS values hidden; see attached screenshot)
  • I used this format as a request

-----(edited)
I've tried editing basehandler.py and rebuilding the image:

    # INPUT_DIR=f"{os.environ.get('WORKSPACE')}/ComfyUI/input/"
    # OUTPUT_DIR=f"{os.environ.get('WORKSPACE')}/ComfyUI/output/"
    INPUT_DIR=f"{os.environ.get('WORKSPACE')}ComfyUI/input"
    OUTPUT_DIR=f"{os.environ.get('WORKSPACE')}ComfyUI/output"

Then I got this error: [Errno 2] No such file or directory: '/runpod-volume/ComfyUI/output/ComfyUI_temp_ottah_00001_.png' -> '/runpod-volume/ComfyUI/outpute6f68e40-83a3-472e-b576-b96645e2b852-e1/ComfyUI_temp_ottah_00001_.png'. Do I need initialization of

Hope I can get help 🙏

How to become a tester myself for AMD/pacman platform?

I've already stated my system info in a reply to an old issue (I'm at work now) ☺️

Main system  : ASRock DeskMini X300 AMD
APU          : AMD R4750G Zen2 with 16GB UMA (graphical memories)
AMDGPU driver: 23.0.0-1 (X11)
ROCm         : 5.7.1
Python       : 3.11.6-1
Gross RAM's  : 48GB DDR4
OS           : Artix Linux amd64 with Linux kernel 6.1.70
Docker       : 24.0.7-1 / Compose 2.24.3-1
UI browser   : Brave 1.63.131-1

And Rob doesn’t have time to work around for AMD/pacman and my every mornings, every afternoons on every Sundays, and every entire Saturdays are free to follow such kind of stuff 🌞
So? How to become a tester and even an AMD/pacman contributre myself? 😎😉🤠😇

Serverless worker doesn't respect aws s3 region

In the request I put "aws_endpoint_url": "https://s3.eu-central-1.amazonaws.com", but the output image link gives an error:

<Error>
  <Code>AuthorizationQueryParametersError</Code>
  <Message>Error parsing the X-Amz-Credential parameter; the region 'us-east-1' is wrong; expecting 'eu-central-1'</Message>
  <Region>eu-central-1</Region>
  <RequestId>9G9706QZWPMWC3ZY</RequestId>
  <HostId>XWklCWgRIqcY8Xl7S7aSwZtXi97/KZfWUW6bJLfafVU3K9/tjmtVidH3SktNlz6ZmHxcVjIBQAQ=</HostId>
</Error>

Did I set up the endpoint URL incorrectly?
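As a point of comparison (not the project's actual code), a boto3 client pinned to the intended region and endpoint produces presigned URLs signed for that region; the bucket and key names below are placeholders:

    import boto3

    # Illustrative only: credentials come from the environment; the endpoint and
    # region are both set to eu-central-1 so the generated signature matches that region.
    s3 = boto3.client(
        "s3",
        region_name="eu-central-1",
        endpoint_url="https://s3.eu-central-1.amazonaws.com",
    )
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "output/image.png"},
        ExpiresIn=3600,
    )
    print(url)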

New Error

Hi! Thanks for reviewing these errors. I got a new error today that delays the loading of the pod and of images in ComfyUI. I'm unsure what caused this.

==> /var/log/supervisor/caddy.error.log <==
{"level":"error","ts":1698128485.8749824,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","upstream":"localhost:18188","duration":0.001717137,"request":{"remote_ip":"100.64.0.21","remote_port":"40104","client_ip":"100.64.0.21","proto":"HTTP/1.1","method":"GET","host":"100.65.13.44:60326","uri":"/lib/litegraph.core.js","headers":{"Remote-User":["{http.reverse_proxy.header.Remote-User}"],"Cdn-Loop":["cloudflare"],"Sec-Fetch-Site":["same-origin"],"Cf-Ray":["81b01d9ebbe82a8f-LAX"],"Referer":["https://dnxk8p80nrvs08-8188.proxy.runpod.net/"],"Cookie":[],"Accept-Encoding":["gzip, br"],"X-Forwarded-For":["100.64.0.21"],"Cf-Connecting-Ip":["2600:1700:6bfc:4750:509a:3876:80bb:bc3f"],"Remote-Groups":["{http.reverse_proxy.header.Remote-Groups}"],"Authorization":[],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36"],"Remote-Name":["{http.reverse_proxy.header.Remote-Name}"],"Accept":["/"],"Sec-Ch-Ua":[""Chromium";v="118", "Google Chrome";v="118", "Not=A?Brand";v="99""],"Sec-Ch-Ua-Mobile":["?0"],"Remote-Email":["{http.reverse_proxy.header.Remote-Email}"],"Cf-Visitor":["{"scheme":"https"}"],"Cf-Ipcountry":["US"],"Accept-Language":["en-US,en;q=0.9,ja;q=0.8"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["100.65.13.44:60326"],"Sec-Ch-Ua-Platform":[""macOS""],"Sec-Fetch-Dest":["script"],"Sec-Fetch-Mode":["no-cors"]}},"error":"writing: write tcp 172.24.0.2:8188->100.64.0.21:40104: write: broken pipe"}

Provisioning permission denied

When running provisioning I get permission errors:

ln: failed to create symbolic link '/opt/ComfyUI/models/upscale_models/RealESRGAN_x2.pth': Permission denied
ln: failed to create symbolic link '/opt/ComfyUI/models/checkpoints/DreamShaper_8_INPAINTING.inpainting.safetensors': Permission denied

This is the case for all model types (ckpt, esrgan, vae).

I run the image on RunPod GPU Cloud. I forked the repo and built the image using docker compose build. I didn't change the image since it last worked.

I had no problems until today when I tried to redeploy my pod.
Could this be caused by the recent commits to this repo? I created a separate fork so that shouldn't be the case.

Animate: Save Video?

Hi, I've been playing around with the animate provisioning script and it all works well, apart from one thing: when I try to set the output format to any type of video rather than GIF, I get the following error:

2023-10-27 22:08:21 /opt/micromamba/envs/comfyui/bin/ffmpeg: error while loading shared libraries: libopenh264.so.5: cannot open shared object file: No such file or directory
2023-10-27 22:08:21 ERROR:root:!!! Exception during processing !!!
2023-10-27 22:08:21 ERROR:root:Traceback (most recent call last):
2023-10-27 22:08:21   File "/workspace/ComfyUI/execution.py", line 153, in recursive_execute
2023-10-27 22:08:21     output_data, output_ui = get_output_data(obj, input_data_all)
2023-10-27 22:08:21   File "/workspace/ComfyUI/execution.py", line 83, in get_output_data
2023-10-27 22:08:21     return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
2023-10-27 22:08:21   File "/workspace/ComfyUI/execution.py", line 76, in map_node_over_list
2023-10-27 22:08:21     results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
2023-10-27 22:08:21   File "/workspace/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/nodes.py", line 195, in combine_video
2023-10-27 22:08:21     proc.stdin.write(frame.tobytes())
2023-10-27 22:08:21 BrokenPipeError: [Errno 32] Broken pipe

Hoping there is an easy way to add the missing library? Unfortunately I'm not able to figure out how to do it myself :(
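
The error says the ffmpeg in the comfyui micromamba environment cannot find libopenh264.so.5, so one hedged fix is to install that codec into the same environment and restart the service (a sketch; the conda-forge package name openh264 is an assumption worth verifying):

micromamba install -n comfyui -c conda-forge openh264
supervisorctl restart comfyui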

micromamba: command not found

I cloned the repo on my local machine, changed the environment variables and ran docker compose up.

But when the server is launched I get this error message:

[screenshot: "micromamba: command not found" error message]
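
A hedged check that might narrow this down is to open a shell in the running container and see whether micromamba is installed and on PATH (a sketch, assuming the compose service is named supervisor as in this repo's docker-compose.yml):

docker compose exec supervisor bash -c 'which micromamba; micromamba env list'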

micromamba environment and ComfyUI both 20+ GB

I've built the latest image, pushed it to Docker Hub, and run it on RunPod with all the models commented out and just 3 nodes. I'm using a RunPod network volume for all the models I need; I have a 50GB volume and it's 95% full. Both ComfyUI and micromamba are coming in at around 20-25GB. My models are pulled down from the network volume, so it makes sense that ComfyUI is that big, but does micromamba have to be that big?

[screenshot: disk usage breakdown showing the ComfyUI and micromamba sizes]
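
For reference, the actual on-disk split can be checked from a shell in the worker (a quick sketch; the paths are assumptions based on the image's defaults):

du -sh /opt/micromamba/envs/* /workspace/ComfyUI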

Is comfyui:pytorch-2.1.0-py3.11-cuda-12.1.0-base-22.04 available❓

Thank you for your great work!

I am using ghcr.io/ai-dock/comfyui:pytorch-2.0.1-py3.10-cuda-11.8.0-base-22.04 now.

It saved me a lot of time ⌚️

I note that pytorch:2.1.0-py3.11-cuda-12.1.0-base-22.04 is available,
so I tried to pull ghcr.io/ai-dock/comfyui:pytorch-2.1.0-py3.11-cuda-12.1.0-base-22.04
but got this error: Error response from daemon: manifest unknown
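
One hedged way to confirm which tags actually exist before pulling is to list them straight from the registry (a sketch, assuming skopeo is installed):

skopeo list-tags docker://ghcr.io/ai-dock/comfyui | grep 'pytorch-2.1.0-py3.11-cuda-12.1.0'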

ComfyUI starts at port 18188 in a container

I'm not sure why, but when I run the container ComfyUI starts on port 18188 instead of 8188. I have not edited any environment variables; I simply downloaded the image and ran it.

==> /var/log/supervisor/provisioning.log <==
Looking for provisioning.sh...
Not found

==> /var/log/supervisor/redirector.error.log <==
INFO: Started server process [187]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:11111 (Press CTRL+C to quit)

==> /var/log/supervisor/redirector.log <==
Starting redirector server...

==> /var/log/supervisor/sshd.error.log <==
/root/.ssh/authorized_keys is not a public key file.
Skipping SSH server: No public key

==> /var/log/supervisor/sshd.log <==

==> /var/log/supervisor/supervisor.log <==

==> /var/log/supervisor/sync.log <==
No mount: Mamba environments remain in /opt
Creating symlink from /opt/ComfyUI to /workspace/ComfyUI

==> /var/log/supervisor/comfyui.log <==
Total VRAM 14928 MB, total RAM 15000 MB
xformers version: 0.0.22
Set vram state to: NORMAL_VRAM
Device: cuda:0 Tesla T4 : cudaMallocAsync
VAE dtype: torch.float32
Using xformers cross attention
Starting server

To see the GUI go to: http://127.0.0.1:18188
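
For what it's worth, the startup message above is printed by the application itself and reflects the internal port: ComfyUI binds 18188 and Caddy proxies it on the public 8188 (the reverse-proxy logs elsewhere on this page show the same upstream localhost:18188). A hedged way to confirm from a shell in the container:

ss -tlnp | grep -E ':8188|:18188'   # or netstat -tlnp; expect the proxy on 8188 and ComfyUI on 18188
curl -sI http://localhost:8188/ | head -n 1   # the UI should answer on 8188 through the proxy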

ComfyUI keeps restarting on prompt

I just did a fresh install of ComfyUI with only a CPU on an Ubuntu Server VM within Proxmox. I had to fiddle around to get it to start CPU-only, but I can't generate any images. If I queue a prompt, the server uses all of its RAM and CPU cores and does not respond for about 15 minutes.
The docker-compose file I use:

version: "3.8"
# Compose file build variables set in .env
services:
  supervisor:
    restart: always
    platform: linux/amd64
    build:
      context: ./build
      args:
        IMAGE_BASE: ${IMAGE_BASE:-ghcr.io/ai-dock/jupyter-pytorch:2.2.0-py3.10-cpu-22.04}
      #tags:
        #- "ghcr.io/ai-dock/comfyui:${IMAGE_TAG:-jupyter-pytorch-2.2.0-py3.10-cpu-22.04}"
        #- "IMAGE_TAG:-jupyter-pytorch-2.2.0-py3.10-cpu-22.04"
    image: ghcr.io/ai-dock/comfyui:${IMAGE_TAG:-jupyter-pytorch-2.2.0-py3.10-cpu-22.04}
    ## For Nvidia GPU's - You probably want to uncomment this
    #deploy:
    #  resources:
    #    reservations:
    #      devices:
    #        - driver: nvidia
    #          count: all
    #          capabilities: [gpu]
    devices:
      - "/dev/dri:/dev/dri"
      ## For AMD GPU
      #- "/dev/kfd:/dev/kfd"
    volumes:
      ## Workspace
      - ./workspace:${WORKSPACE:-/workspace/}:rshared
      # You can share /workspace/storage with other non-ComfyUI containers. See README
      #- /path/to/common_storage:${WORKSPACE:-/workspace/}storage/:rshared
      # Will echo to root-owned authorized_keys file;
      # Avoids changing local file owner
      - ./config/authorized_keys:/root/.ssh/authorized_keys_mount
      - ./config/provisioning/default.sh:/opt/ai-dock/bin/provisioning.sh
    ports:
        # SSH available on host machine port 2222 to avoid conflict. Change to suit
        - ${SSH_PORT_HOST:-2222}:22
        # Caddy port for service portal
        - ${SERVICEPORTAL_PORT_HOST:-1111}:${SERVICEPORTAL_PORT_HOST:-1111}
        # ComfyUI web interface
        - ${COMFYUI_PORT_HOST:-8188}:${COMFYUI_PORT_HOST:-8188}
        # Jupyter server
        - ${JUPYTER_PORT_HOST:-8888}:${JUPYTER_PORT_HOST:-8888}
        # Rclone webserver for interactive configuration
        - ${RCLONE_PORT_HOST:-53682}:${RCLONE_PORT_HOST:-53682}
    environment:
        # Don't enclose values in quotes
        - DIRECT_ADDRESS=${DIRECT_ADDRESS:-192.168.1.101}
        - DIRECT_ADDRESS_GET_WAN=${DIRECT_ADDRESS_GET_WAN:-false}
        - WORKSPACE=${WORKSPACE:-/workspace}
        - WORKSPACE_SYNC=${WORKSPACE_SYNC:-false}
        #- CF_TUNNEL_TOKEN=${CF_TUNNEL_TOKEN:-}
        - CF_QUICK_TUNNELS=${CF_QUICK_TUNNELS:-false}
        - WEB_ENABLE_AUTH=${WEB_ENABLE_AUTH:-true}
        - WEB_USER=${WEB_USER:-user}
        - WEB_PASSWORD=${WEB_PASSWORD:-password}
        - SSH_PORT_HOST=${SSH_PORT_HOST:-2222}
        - SERVICEPORTAL_PORT_HOST=${SERVICEPORTAL_PORT_HOST:-1111}
        - SERVICEPORTAL_METRICS_PORT=${SERVICEPORTAL_METRICS_PORT:-21111}
        - COMFYUI_FLAGS=${COMFYUI_FLAGS:-}
        - COMFYUI_PORT_HOST=${COMFYUI_PORT_HOST:-8188}
        - COMFYUI_METRICS_PORT=${COMFYUI_METRICS_PORT:-28188}
        - JUPYTER_PORT_HOST=${JUPYTER_PORT_HOST:-8888}
        - JUPYTER_METRICS_PORT=${JUPYTER_METRICS_PORT:-28888}
        - SERVERLESS=${SERVERLESS:-false}

The Dockerfile in the build folder:

# For build automation - Allows building from any ai-dock base image
# Use a *cuda*base* image as default because pytorch brings the libs
ARG IMAGE_BASE="ghcr.io/ai-dock/jupyter-pytorch:2.2.0-py3.10-cpu-22.04"
FROM ${IMAGE_BASE}

LABEL org.opencontainers.image.source https://github.com/ai-dock/comfyui
LABEL org.opencontainers.image.description "ComfyUI Stable Diffusion backend and GUI"
LABEL maintainer="Rob Ballantyne <[email protected]>"

ENV IMAGE_SLUG="comfyui"
ENV OPT_SYNC=ComfyUI:serverless

# Copy early so we can use scripts in the build - Changes to these files will invalidate the cache and cause a rebuild.
COPY --chown=0:1111 ./COPY_ROOT/ /

# Use build scripts to ensure we can build all targets from one Dockerfile in a single layer.
# Don't put anything heavy in here - We can use multi-stage building above if necessary.

ARG IMAGE_BASE
RUN set -eo pipefail && /opt/ai-dock/bin/build/layer0/init.sh | tee /var/log/build.log

# Must be set after layer0
ENV MAMBA_DEFAULT_ENV=comfyui
ENV MAMBA_DEFAULT_RUN="micromamba run -n $MAMBA_DEFAULT_ENV"

# Copy overrides and models into later layers for fast rebuilds
COPY --chown=0:1111 ./COPY_ROOT_EXTRA/ /
RUN set -eo pipefail && /opt/ai-dock/bin/build/layer1/init.sh | tee -a /var/log/build.log

# Keep init.sh as-is and place additional logic in /opt/ai-dock/bin/preflight.sh
CMD ["init.sh"]

Other than that I just cloned the GitHub repo yesterday.
The error I am getting:

supervisor_1  | got prompt
supervisor_1  | model_type EPS
supervisor_1  | Using split attention in VAE
supervisor_1  | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
supervisor_1  | Using split attention in VAE
supervisor_1  | 2024-03-10 17:02:44,181 INFO exited: comfyui (exit status 137; not expected)
supervisor_1  | 2024-03-10 17:02:44,632 INFO spawned: 'comfyui' with pid 768
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/caddy.log <==
supervisor_1  | {"level":"error","ts":1710090163.5710387,"logger":"http.log.error","msg":"dial tcp: lookup localhost: i/o timeout","request":{"remote_ip":"192.168.2.101","remote_port":"53571","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"POST","host":"192.168.1.101:1111","uri":"/ajax/logs","headers":{"Dnt":["1"],"Accept-Encoding":["gzip, deflate"],"Hx-Target":["page"],"Referer":["http://192.168.1.101:1111/"],"Hx-Current-Url":["http://192.168.1.101:1111/"],"Content-Length":["0"],"Content-Type":["application/x-www-form-urlencoded"],"Cookie":[],"Sec-Gpc":["1"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Hx-Request":["true"],"Origin":["http://192.168.1.101:1111"],"Connection":["keep-alive"],"Accept":["*/*"],"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"]}},"duration":81.496884408,"status":502,"err_id":"y5ew3hics","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  | {"level":"error","ts":1710090163.9898791,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53602","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/queue","headers":{"Dnt":["1"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Referer":["http://192.168.1.101:8188/"],"Connection":["keep-alive"],"Cookie":[],"Sec-Gpc":["1"],"Accept":["*/*"],"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"],"Accept-Encoding":["gzip, deflate"],"Comfy-User":["undefined"]}},"duration":0.024025992,"status":502,"err_id":"47kp5rvq8","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  | {"level":"error","ts":1710090163.991298,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53603","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/history?max_items=200","headers":{"Accept-Encoding":["gzip, deflate"],"Comfy-User":["undefined"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Referer":["http://192.168.1.101:8188/"],"Connection":["keep-alive"],"Cookie":[],"Dnt":["1"],"Sec-Gpc":["1"],"Accept":["*/*"],"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"]}},"duration":0.000925198,"status":502,"err_id":"pdq6nmh3q","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  | {"level":"error","ts":1710090164.604226,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53604","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/ws?clientId=5015f3e3d12b455f90d1aa312ca028ac","headers":{"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"],"Sec-Websocket-Key":["dwWr+fR6Q71khkaZvhXXSg=="],"Cookie":[],"Cache-Control":["no-cache"],"Upgrade":["websocket"],"Accept":["*/*"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Pragma":["no-cache"],"Sec-Websocket-Version":["13"],"Origin":["http://192.168.1.101:8188"],"Sec-Websocket-Extensions":["permessage-deflate"],"Connection":["keep-alive, Upgrade"],"Accept-Encoding":["gzip, deflate"]}},"duration":0.00243078,"status":502,"err_id":"4ezxns4f4","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/comfyui.log <==
supervisor_1  | Starting ComfyUI...
supervisor_1  | Starting ComfyUI...
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/supervisor.log <==
supervisor_1  | 2024-03-10 17:02:44,181 INFO exited: comfyui (exit status 137; not expected)
supervisor_1  | 2024-03-10 17:02:44,632 INFO spawned: 'comfyui' with pid 768
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/caddy.log <==
supervisor_1  | {"level":"error","ts":1710090165.2220185,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53607","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/ws?clientId=b083dc43d61044c6ab65b32f1cdde643","headers":{"Pragma":["no-cache"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Accept":["*/*"],"Origin":["http://192.168.1.101:8188"],"Cookie":[],"Upgrade":["websocket"],"Sec-Websocket-Version":["13"],"Sec-Websocket-Extensions":["permessage-deflate"],"Accept-Encoding":["gzip, deflate"],"Cache-Control":["no-cache"],"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"],"Sec-Websocket-Key":["kNwEmADTHsl/vJ/0d/P9Fg=="],"Connection":["keep-alive, Upgrade"]}},"duration":0.001117775,"status":502,"err_id":"sm5g3mwcy","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/comfyui.log <==
supervisor_1  | ** ComfyUI startup time: 2024-03-10 17:02:45.025688
supervisor_1  | ** Platform: Linux
supervisor_1  | ** Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
supervisor_1  | ** Python executable: /opt/micromamba/envs/comfyui/bin/python
supervisor_1  | ** Log path: /workspace/ComfyUI/comfyui.log
supervisor_1  |
supervisor_1  | Prestartup times for custom nodes:
supervisor_1  |    0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager
supervisor_1  |
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/caddy.log <==
supervisor_1  | {"level":"error","ts":1710090166.064027,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53608","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/ws?clientId=5015f3e3d12b455f90d1aa312ca028ac","headers":{"Sec-Websocket-Version":["13"],"Sec-Websocket-Extensions":["permessage-deflate"],"Sec-Websocket-Key":["R2uKGn4klgocfuTu+fSF+A=="],"Connection":["keep-alive, Upgrade"],"Accept":["*/*"],"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"],"Upgrade":["websocket"],"Cookie":[],"Pragma":["no-cache"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Accept-Encoding":["gzip, deflate"],"Cache-Control":["no-cache"],"Origin":["http://192.168.1.101:8188"]}},"duration":0.000765085,"status":502,"err_id":"fkc66rk5b","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  | {"level":"error","ts":1710090167.3226793,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53609","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/ws?clientId=b083dc43d61044c6ab65b32f1cdde643","headers":{"Sec-Websocket-Key":["/7oTZHXZkskiKv9SMGrrew=="],"Accept":["*/*"],"Accept-Encoding":["gzip, deflate"],"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"],"Sec-Websocket-Version":["13"],"Pragma":["no-cache"],"Sec-Websocket-Extensions":["permessage-deflate"],"Connection":["keep-alive, Upgrade"],"Cache-Control":["no-cache"],"Upgrade":["websocket"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Origin":["http://192.168.1.101:8188"],"Cookie":[]}},"duration":0.001707567,"status":502,"err_id":"i6zh30sn6","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/comfyui.log <==
supervisor_1  | Total VRAM 7937 MB, total RAM 7937 MB
supervisor_1  | Set vram state to: DISABLED
supervisor_1  | Device: cpu
supervisor_1  | VAE dtype: torch.float32
supervisor_1  | Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
supervisor_1  | 2024-03-10 17:02:49,846 INFO success: comfyui entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/caddy.log <==
supervisor_1  | {"level":"error","ts":1710090169.2041223,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:18188: connect: connection refused","request":{"remote_ip":"192.168.2.101","remote_port":"53610","client_ip":"192.168.2.101","proto":"HTTP/1.1","method":"GET","host":"192.168.1.101:8188","uri":"/ws?clientId=5015f3e3d12b455f90d1aa312ca028ac","headers":{"Accept-Language":["en-US,en;q=0.7,de-AT;q=0.3"],"Connection":["keep-alive, Upgrade"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:123.0) Gecko/20100101 Firefox/123.0"],"Accept-Encoding":["gzip, deflate"],"Sec-Websocket-Extensions":["permessage-deflate"],"Accept":["*/*"],"Pragma":["no-cache"],"Cache-Control":["no-cache"],"Upgrade":["websocket"],"Sec-Websocket-Version":["13"],"Origin":["http://192.168.1.101:8188"],"Sec-Websocket-Key":["k72s8QRipsT6yoKJQwv9MA=="],"Cookie":[]}},"duration":0.001642951,"status":502,"err_id":"tp6ne2h6p","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/comfyui.log <==
supervisor_1  | ### Loading: ComfyUI-Manager (V2.9)
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/supervisor.log <==
supervisor_1  | 2024-03-10 17:02:49,846 INFO success: comfyui entered RUNNING state, process has stayed up for > than 5 seconds (startsecs)
supervisor_1  |
supervisor_1  | ==> /var/log/supervisor/comfyui.log <==
supervisor_1  | ### ComfyUI Revision: 2057 [65397ce6] | Released on '2024-03-10'
supervisor_1  |
supervisor_1  | Import times for custom nodes:
supervisor_1  |    0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager
supervisor_1  |
supervisor_1  | Starting server
supervisor_1  |
supervisor_1  | To see the GUI go to: http://127.0.0.1:18188
supervisor_1  | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
supervisor_1  | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
supervisor_1  | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
supervisor_1  | [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json

I probably messed something up with the CPU-only config, but I don't know where. Could you please help me, and maybe provide better documentation for CPU-only mode?
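
For what it's worth, exit status 137 in the supervisor log means the ComfyUI process was killed, and on an 8 GB CPU-only VM that is most likely the kernel OOM killer during sampling. A hedged starting point, following the hint ComfyUI itself prints above, is to give the VM more RAM (or swap) and pass memory-friendly flags through the existing COMFYUI_FLAGS variable, e.g. in .env (a sketch):

COMFYUI_FLAGS=--cpu --use-split-cross-attention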

FileNotFoundError: [Errno 2] No such file or directory: '/usr/share/fonts/truetype'

[ERROR] An error occurred while retrieving information for the 'CR Select Font' node.
supervisor-1  | Traceback (most recent call last):
supervisor-1  |   File "/opt/ComfyUI/server.py", line 420, in get_object_info
supervisor-1  |     out[x] = node_info(x)
supervisor-1  |   File "/opt/ComfyUI/server.py", line 398, in node_info
supervisor-1  |     info['input'] = obj_class.INPUT_TYPES()
supervisor-1  |   File "/opt/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes/nodes/nodes_graphics_text.py", line 467, in INPUT_TYPES
supervisor-1  |     file_list = [f for f in os.listdir(font_dir) if os.path.isfile(os.path.join(font_dir, f)) and f.lower().endswith(".ttf")]
supervisor-1  | FileNotFoundError: [Errno 2] No such file or directory: '/usr/share/fonts/truetype'

For now I work around it by manually copying the fonts into the container:

docker cp /usr/share/fonts/truetype/dejavu containerID:/usr/share/fonts/truetype
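
A hedged alternative to copying the fonts in by hand is to install a DejaVu font package inside the container (for example from a provisioning step running as root), so that /usr/share/fonts/truetype exists before the node is loaded (a sketch; fonts-dejavu-core is the usual Ubuntu package name):

apt-get update && apt-get install -y fonts-dejavu-core && rm -rf /var/lib/apt/lists/*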

"/dev/dri": no such file or directory`

Sorry the issue here may be user error, but I'm getting

Error response from daemon: error gathering device information while adding custom device "/dev/dri": no such file or directory

Trying to run on a 2019 i9 MacBook Pro.

Thanks
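
On a host without /dev/dri (macOS, or any machine with no Intel/AMD GPU node) a hedged fix is to remove that device mapping from docker-compose.yml before starting, e.g. by commenting it out:

    #devices:
    #  - "/dev/dri:/dev/dri"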

Found no NVIDIA driver on your system.

I'm getting the error " Found no NVIDIA driver on your system." when spinning up this image on my Mac M1, any ideas on how to fix it?

Traceback (most recent call last):
  File "/opt/ComfyUI/main.py", line 72, in <module>
    import execution
  File "/opt/ComfyUI/execution.py", line 12, in <module>
    import nodes
  File "/opt/ComfyUI/nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "/opt/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
    import comfy.sd
  File "/opt/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/opt/ComfyUI/comfy/model_management.py", line 114, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/opt/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
    _lazy_init()
  File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
Starting ComfyUI...
** ComfyUI start up time: 2023-10-31 10:39:32.162144

Prestartup times for custom nodes:
   0.0 seconds: /opt/ComfyUI/custom_nodes/ComfyUI-Manager

Traceback (most recent call last):
  File "/opt/ComfyUI/main.py", line 72, in <module>
    import execution
  File "/opt/ComfyUI/execution.py", line 12, in <module>
    import nodes
  File "/opt/ComfyUI/nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "/opt/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
    import comfy.sd
  File "/opt/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/opt/ComfyUI/comfy/model_management.py", line 114, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/opt/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
    _lazy_init()
  File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
Starting ComfyUI...
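
There is no NVIDIA driver to find on an Apple Silicon Mac, so a CUDA build of this image will always fail at torch initialisation. A hedged workaround is to run a CPU build instead (the images are linux/amd64, so on an M1 they also run under emulation); with a compose setup like the one quoted earlier on this page that would look roughly like this in .env (check the registry for the exact tags):

IMAGE_BASE=ghcr.io/ai-dock/jupyter-pytorch:2.2.0-py3.10-cpu-22.04
IMAGE_TAG=jupyter-pytorch-2.2.0-py3.10-cpu-22.04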

RuntimeError: Unexpected error from cudaGetDeviceCount()

I'm getting a CUDA error when running the default Jupyter template on Vast.ai.

Starting ComfyUI...
WARNING: No ICDs were found. Either,
- Install a conda package providing a OpenCL implementation (pocl, oclgrind, intel-compute-runtime, beignet) or
- Make your system-wide implementation visible by installing ocl-icd-system conda package.
** ComfyUI startup time: 2024-03-02 21:15:23.498445
** Platform: Linux
** Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
** Python executable: /opt/micromamba/envs/comfyui/bin/python
** Log path: /opt/ComfyUI/comfyui.log
Prestartup times for custom nodes:
   0.0 seconds: /opt/ComfyUI/custom_nodes/ComfyUI-Manager
Traceback (most recent call last):
  File "/opt/ComfyUI/main.py", line 76, in
    import execution
  File "/opt/ComfyUI/execution.py", line 11, in
    import nodes
  File "/opt/ComfyUI/nodes.py", line 20, in
    import comfy.diffusers_load
  File "/opt/ComfyUI/comfy/diffusers_load.py", line 3, in
    import comfy.sd
  File "/opt/ComfyUI/comfy/sd.py", line 4, in
    from comfy import model_management
  File "/opt/ComfyUI/comfy/model_management.py", line 118, in
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/opt/ComfyUI/comfy/model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/cuda/__init__.py", line 787, in current_device
    _lazy_init()
  File "/opt/micromamba/envs/comfyui/lib/python3.10/site-packages/torch/cuda/__init__.py", line 302, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW
/opt/ai-dock/bin/supervisor-comfyui.sh: line 17:   831 Killed                  /usr/bin/python3 /opt/ai-dock/fastapi/logviewer/main.py -p $LISTEN_PORT -r 5 -s "${SERVICE_NAME}" -t "Preparing ${SERVICE_NAME}"  (wd: /root)
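
Error 804 ("forward compatibility was attempted on non supported HW") usually indicates that the host's NVIDIA driver is older than the CUDA runtime baked into the image, which on Vast.ai depends entirely on the machine you rented. A hedged way to confirm the mismatch from a terminal in the instance:

nvidia-smi --query-gpu=driver_version --format=csv,noheader
micromamba run -n comfyui python -c "import torch; print(torch.version.cuda)"
# if the driver predates the CUDA version torch reports, rent a host with a newer driver or use an image built against an older CUDA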

How to use WORKSPACE_SYNC

My steps:

Use the RunPod template

ghcr.io/ai-dock/comfyui:latest
without a PROVISIONING_SCRIPT
WORKSPACE_SYNC set to true the first time; afterwards, whether true or false, the result is below

Result:

  • system logs:
2024-02-29T07:57:13Z create pod network
2024-02-29T07:57:13Z create container ghcr.io/ai-dock/comfyui:latest
2024-02-29T07:57:14Z latest Pulling from ai-dock/comfyui
2024-02-29T07:57:14Z Digest: sha256:47f033dddfba20771da60cfc3bc403812f88cb5b3b03ce6e17de7fa9272e3a50
2024-02-29T07:57:14Z Status: Image is up to date for ghcr.io/ai-dock/comfyui:latest
2024-02-29T07:57:14Z start container
  • container logs:
...
2024-02-29T07:51:18.335204123Z 
2024-02-29T07:51:18.626160221Z chown: changing ownership of '/workspace/': Operation not permitted
2024-02-29T07:51:18.653602734Z chown: changing ownership of '/workspace//.ai-dock-permissions-test': Operation not permitted
2024-02-29T07:51:19.416355191Z chown: changing ownership of '/workspace/home/user': Operation not permitted
2024-02-29T07:51:19.451593327Z useradd: warning: the home directory /workspace/home/user already exists.
2024-02-29T07:51:19.451656302Z useradd: Not copying any file from skel directory into it.
2024-02-29T07:51:19.529256483Z usermod: group 'render' does not exist
2024-02-29T07:51:19.534188717Z usermod: group 'sgx' does not exist
2024-02-29T07:51:19.544032612Z mkdir: cannot create directory ‘/workspace/home/user/.ssh’: File exists
2024-02-29T07:51:19.551713395Z cp: not writing through dangling symlink '/workspace/home/user/.ssh'
2024-02-29T07:51:19.559099834Z chown: changing ownership of '/workspace/home/user/.ssh': Operation not permitted
2024-02-29T07:51:19.566694095Z chmod: cannot access '/workspace/home/user/.ssh/authorized_keys': No such file or directory
2024-02-29T07:51:19.589076075Z chmod: cannot access '/home/user-linux/.ssh/authorized_keys': Too many levels of symbolic links

My goal: install the ComfyUI dependencies once and, on later runs, only have to extract the Docker image, keeping the dependencies I installed for custom_nodes.

For the first run, do I need WORKSPACE_SYNC set to true or false so that my installed dependencies are saved every time?

PS: is it important to use a PROVISIONING_SCRIPT, or could leaving it out break the links?
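
For reference, a hedged sketch of the template environment being described (the comments reflect a reading of the variable names and the sync log above, not authoritative documentation):

WORKSPACE=/workspace
WORKSPACE_SYNC=true   # assumption: sync the installed software into the volume so custom_nodes dependencies persist
# PROVISIONING_SCRIPT intentionally left unset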

How to generate an image programmatically?

Given the system is running and I'm able to generate images with ComfyUI,
I'd like to generate an image programmatically:

  • with a simple python script
  • using an existing workflow
  • any web API capabilities?

I'd like to avoid mixing things up inside the container. What needs to be installed, or what is required, to at least generate an image with Python using the current setup?
Many thanks!
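
ComfyUI exposes an HTTP API, so probably nothing extra needs to be installed: a workflow exported from the UI in API format can be queued with a plain HTTP POST, and a Python script would simply make the same calls with any HTTP library. A hedged sketch with curl and jq from the host (the "prompt" wrapper key, the basic-auth credentials and the 8188 port are assumptions to verify against your own setup):

# workflow_api.json = the graph exported from ComfyUI as API format (dev mode "Save (API Format)")
jq -n --slurpfile g workflow_api.json '{prompt: $g[0]}' > payload.json
curl -u "$WEB_USER:$WEB_PASSWORD" \
     -H 'Content-Type: application/json' \
     -d @payload.json \
     http://127.0.0.1:8188/prompt
# the response contains a prompt_id; outputs can then be fetched via /history/<prompt_id> and /view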

comfy template on VastAI outdated

I am quite a layman using Vast.ai, and since an update to this Docker image about 5 days ago I am no longer able to use the ComfyUI template through Vast.ai; when making requests it always returns a login page at the URL /login. Could you help me?
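
If the recent change is that authentication is now enabled by default, the /login page is expected rather than broken, and the credentials come from the template's environment variables. A hedged sketch, mirroring the compose defaults quoted earlier on this page (adjust to your template):

WEB_ENABLE_AUTH=true
WEB_USER=user
WEB_PASSWORD=password
# setting WEB_ENABLE_AUTH=false should disable the login entirely (assumption)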

[Feature request] small docker image for workspace_sync=true

My clients complain about long loading times, and I'm a bit surprised myself. Why do I have to download a full image including all dependencies and software every time I run it, if I have 90% of what I need already in my workspace (the biggest packages - torch, models, etc.)?

If you are unlucky with the hardware, the host may already be busy with other users, and disk and network speeds may not be the most comfortable (and we are talking only about official RunPod partners, not the community hosts).

Maybe it makes sense to make a stripped-down version of the image for WORKSPACE_SYNC=true mode.
What do you think about that?

runpod serverless permission error

Hi. I've followed the guide and added some models, but I am getting these errors.
It was okay previously; I am not sure if it's related to RunPod or something else.

2024-02-09T13:31:26.016742103Z ==> /var/log/supervisor/storagemonitor.log <==
2024-02-09T13:31:26.016748663Z error: could not lock config file /home/user/.gitconfig: Permission denied
2024-02-09T13:31:26.016753303Z Starting storage monitor..
2024-02-09T13:31:26.016757913Z ln: failed to create symbolic link '/runpod-volume/storage/README': Permission denied
2024-02-09T13:31:26.016763463Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016777443Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/controlnet/control_v11p_sd15_openpose.pth': No such file or directory
2024-02-09T13:31:26.016781692Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016785932Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/embeddings/bad_prompt_version2-neg.pt': No such file or directory
2024-02-09T13:31:26.016790592Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016795122Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/embeddings/easynegative.safetensors': No such file or directory
2024-02-09T13:31:26.016798732Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016802422Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/embeddings/ng_deepnegative_v1_75t.pt': No such file or directory
2024-02-09T13:31:26.016805872Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016811022Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/ultralytics/bbox/face_yolov8m.pt': No such file or directory
2024-02-09T13:31:26.016816342Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016821142Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/ultralytics/segm/sam_vit_b_01ec64.pth': No such file or directory
2024-02-09T13:31:26.016825712Z mkdir: cannot create directory ‘/runpod-volume/storage/stable_diffusion’: Permission denied
2024-02-09T13:31:26.016830732Z ln: failed to create symbolic link '/runpod-volume/storage/stable_diffusion/models/vae/vae-ft-mse-840000-ema-pruned.safetensors': No such file or directory
2024-02-09T13:31:26.016841892Z Setting up watches.  Beware: since -r was given, this may take a while!
2024-02-09T13:31:26.016846572Z Watches established.
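
Since every failure above is a permission error under /runpod-volume/storage, a hedged first check is to compare the ownership of the volume with the identity the worker runs as (a sketch):

ls -ldn /runpod-volume /runpod-volume/storage
id
# if the directories are root-owned and not writable for the uid/gid shown, the storage monitor cannot create its links there (assumption)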
