
cog-comfyui's People

Contributors

davut, digitaljohn, fofr, zsxkib


cog-comfyui's Issues

Making it suitable for my own workflow

Hi there,

I want to give this template a try to run my workflow on Replicate. However, there are quite a few custom nodes installed that my workflow does not use. I was wondering if there are any benefits to forking this repository and stripping out the checkpoints and nodes that I do not need, in terms of compute time (Replicate costs).

If I do fork the repository, I would like to add a checkpoint myself, but I can't figure out how you actually add those. I see the list of checkpoints in weights.json, but I don't see any download URLs (e.g. from CivitAI), so are these safetensors files hosted somewhere else for download?

Ideal situation for me would be:

  • Fork the repository, remove the custom nodes, checkpoints and LoRAs that my workflow doesn’t use. Not sure if this has any benefits in terms of performance.
  • Add the checkpoint, LoRA and upscaler models myself from a download URL (CivitAI or Hugging Face).

It would also be great (for my use case) to ‘bake’ my checkpoint into the image to avoid any downloading time when the generation starts and avoid extra Replicate costs. But I'm not quite sure how to do this either, since I saw that most weights are downloaded at runtime (when a prediction starts).
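
A minimal sketch of one way to pre-fetch a checkpoint onto disk ahead of time (for example from a build step or a setup hook) is below, assuming a placeholder Hugging Face URL and the standard ComfyUI checkpoints folder; whether this actually removes the per-prediction download depends on how the repo's weight handling is wired up:

import os
import urllib.request

# Hypothetical pre-fetch: download a checkpoint into ComfyUI's checkpoints folder
# ahead of time so it is already on disk before any prediction runs.
# The URL below is a placeholder, not a real hosted file.
CHECKPOINT_URL = "https://huggingface.co/<user>/<repo>/resolve/main/my_checkpoint.safetensors"
TARGET_PATH = "ComfyUI/models/checkpoints/my_checkpoint.safetensors"

def prefetch_checkpoint(url: str = CHECKPOINT_URL, target: str = TARGET_PATH) -> None:
    os.makedirs(os.path.dirname(target), exist_ok=True)
    if not os.path.exists(target):
        urllib.request.urlretrieve(url, target)  # simple blocking download

if __name__ == "__main__":
    prefetch_checkpoint()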

Video Support?

Hey,

In this article, it states that we can use a video as input; however, when I run the fofr/any-comfyui-workflow API, I receive the following error message.

Prediction failed.
Unsupported file type: .mp4

Could you please let me know which video formats you support?

Replicate Run error

Hi, do you know why I got this error?

File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 500: Internal Server Error

Best

Custom nodes request: KJNodes for rgb masks

It would be great to have KJNodes included, as it has mask tools like RGB-to-mask that are used with AnimateDiff videos.
https://github.com/kijai/ComfyUI-KJNodes

Nice to have additions would be:
Rgthree-comfy mute and bypass node utilities
https://github.com/rgthree/rgthree-comfy/tree/main

The cg-use-everywhere "Seed Everywhere" node is damn useful when there are upscalers in the workflow, unless there already is a node that does the same.
https://github.com/chrisgoringe/cg-use-everywhere

Example LoRAs not available

Awesome cog, thanks!

When reproducing the example below on fofr/any-comfyui-workflow:34e27814 I get the following error, even though I see you added PhotoMaker support. Any ideas why?

Prediction failed.

photomaker-v1.bin unavailable. View the list of available weights: https://github.com/fofr/cog-comfyui/blob/main/supported_weights.md
{
  "3": {
    "inputs": {
      "seed": 816687600925376,
      "steps": 20,
      "cfg": 5,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1,
      "model": [
        "53",
        0
      ],
      "positive": [
        "54",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "4": {
    "inputs": {
      "ckpt_name": "RealVisXL_V3.0.safetensors"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {
      "title": "Load Checkpoint"
    }
  },
  "5": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Empty Latent Image"
    }
  },
  "6": {
    "inputs": {
      "text": [
        "29",
        0
      ],
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  },
  "7": {
    "inputs": {
      "text": [
        "70",
        1
      ],
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "CLIP Text Encode (Prompt)"
    }
  },
  "8": {
    "inputs": {
      "samples": [
        "3",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "20": {
    "inputs": {
      "add_noise": "enable",
      "noise_seed": 816687600925376,
      "steps": 20,
      "cfg": 5,
      "sampler_name": "euler",
      "scheduler": "normal",
      "start_at_step": 0,
      "end_at_step": 10,
      "return_with_leftover_noise": "enable",
      "model": [
        "53",
        0
      ],
      "positive": [
        "6",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSamplerAdvanced",
    "_meta": {
      "title": "KSampler (Advanced)"
    }
  },
  "21": {
    "inputs": {
      "add_noise": "disable",
      "noise_seed": 816687600925376,
      "steps": 20,
      "cfg": 5,
      "sampler_name": "euler",
      "scheduler": "normal",
      "start_at_step": 10,
      "end_at_step": 10000,
      "return_with_leftover_noise": "disable",
      "model": [
        "53",
        0
      ],
      "positive": [
        "54",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "20",
        0
      ]
    },
    "class_type": "KSamplerAdvanced",
    "_meta": {
      "title": "KSampler (Advanced)"
    }
  },
  "26": {
    "inputs": {
      "samples": [
        "21",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "29": {
    "inputs": {
      "action": "replace",
      "tidy_tags": "no",
      "text_a": [
        "70",
        0
      ],
      "text_b": "/ img/",
      "text_c": ""
    },
    "class_type": "StringFunction|pysssss",
    "_meta": {
      "title": "String Function 🐍"
    }
  },
  "31": {
    "inputs": {
      "image1": [
        "32",
        0
      ],
      "image2": [
        "33",
        0
      ]
    },
    "class_type": "ImageBatch",
    "_meta": {
      "title": "Batch Images"
    }
  },
  "32": {
    "inputs": {
      "image": "https://replicate.delivery/pbxt/KFNauNDOaImWQhgoxtVSVCvBg5nk32c2x6CdclmSOGm8m5e0/Screenshot%202024-01-18%20at%2009.44.44.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Load Image"
    }
  },
  "33": {
    "inputs": {
      "image": "https://replicate.delivery/pbxt/KFNaudqUV40O5ua2X6F52W7uw7dCkAn8bPTczl7mzZWBbgKY/Screenshot%202024-01-18%20at%2009.44.50.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Load Image"
    }
  },
  "34": {
    "inputs": {
      "images": [
        "47",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview Image"
    }
  },
  "38": {
    "inputs": {
      "interpolation": "LANCZOS",
      "crop_position": "top",
      "sharpening": 0,
      "image": [
        "32",
        0
      ]
    },
    "class_type": "PrepImageForClipVision",
    "_meta": {
      "title": "Prepare Image For Clip Vision"
    }
  },
  "39": {
    "inputs": {
      "interpolation": "LANCZOS",
      "crop_position": "top",
      "sharpening": 0,
      "image": [
        "33",
        0
      ]
    },
    "class_type": "PrepImageForClipVision",
    "_meta": {
      "title": "Prepare Image For Clip Vision"
    }
  },
  "42": {
    "inputs": {
      "image1": [
        "38",
        0
      ],
      "image2": [
        "39",
        0
      ]
    },
    "class_type": "ImageBatch",
    "_meta": {
      "title": "Batch Images"
    }
  },
  "45": {
    "inputs": {
      "image": "https://replicate.delivery/pbxt/KFNauL1WWGAsKYtrxKwCP4wvv66ecsYIepNikgMRe7vVR58q/Screenshot%202024-01-18%20at%2009.45.06.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Load Image"
    }
  },
  "46": {
    "inputs": {
      "interpolation": "LANCZOS",
      "crop_position": "top",
      "sharpening": 0,
      "image": [
        "45",
        0
      ]
    },
    "class_type": "PrepImageForClipVision",
    "_meta": {
      "title": "Prepare Image For Clip Vision"
    }
  },
  "47": {
    "inputs": {
      "image1": [
        "42",
        0
      ],
      "image2": [
        "46",
        0
      ]
    },
    "class_type": "ImageBatch",
    "_meta": {
      "title": "Batch Images"
    }
  },
  "48": {
    "inputs": {
      "image1": [
        "31",
        0
      ],
      "image2": [
        "45",
        0
      ]
    },
    "class_type": "ImageBatch",
    "_meta": {
      "title": "Batch Images"
    }
  },
  "53": {
    "inputs": {
      "lora_name": "photomaker-v1.bin",
      "strength_model": 1,
      "model": [
        "4",
        0
      ]
    },
    "class_type": "LoraLoaderModelOnly",
    "_meta": {
      "title": "LoraLoaderModelOnly"
    }
  },
  "54": {
    "inputs": {
      "trigger_word": "img",
      "text": [
        "70",
        0
      ],
      "clip": [
        "4",
        1
      ],
      "photomaker": [
        "56",
        0
      ],
      "image": [
        "47",
        0
      ]
    },
    "class_type": "PhotoMakerEncode",
    "_meta": {
      "title": "PhotoMaker Encode"
    }
  },
  "56": {
    "inputs": {
      "name": "photomaker-v1.bin"
    },
    "class_type": "PhotoMakerLoader",
    "_meta": {
      "title": "Load PhotoMaker"
    }
  },
  "57": {
    "inputs": {
      "path": "",
      "interpolation": "lanczos",
      "crop_position": "center"
    },
    "class_type": "PrepImagesForClipVisionFromPath",
    "_meta": {
      "title": "Prepare Images For ClipVision From Path"
    }
  },
  "59": {
    "inputs": {
      "conditioning_1": [
        "60",
        0
      ],
      "conditioning_2": [
        "62",
        0
      ]
    },
    "class_type": "ConditioningCombine",
    "_meta": {
      "title": "Conditioning (Combine)"
    }
  },
  "60": {
    "inputs": {
      "start": 0,
      "end": 0.5,
      "conditioning": [
        "6",
        0
      ]
    },
    "class_type": "ConditioningSetTimestepRange",
    "_meta": {
      "title": "ConditioningSetTimestepRange"
    }
  },
  "62": {
    "inputs": {
      "start": 0.5,
      "end": 1,
      "conditioning": [
        "54",
        0
      ]
    },
    "class_type": "ConditioningSetTimestepRange",
    "_meta": {
      "title": "ConditioningSetTimestepRange"
    }
  },
  "63": {
    "inputs": {
      "seed": 816687600925376,
      "steps": 20,
      "cfg": 5,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1,
      "model": [
        "53",
        0
      ],
      "positive": [
        "59",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "5",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "65": {
    "inputs": {
      "samples": [
        "63",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "67": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "8",
        0
      ]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Save Image"
    }
  },
  "68": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "26",
        0
      ]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Save Image"
    }
  },
  "69": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "65",
        0
      ]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Save Image"
    }
  },
  "70": {
    "inputs": {
      "style_name": "Photographic (Default)",
      "positive": "photo of a man img, high quality, film grain",
      "negative": "asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth"
    },
    "class_type": "PhotoMakerStyles",
    "_meta": {
      "title": "Apply PhotoMaker Style"
    }
  },
  "72": {
    "inputs": {
      "value": ""
    },
    "class_type": "String",
    "_meta": {
      "title": "String"
    }
  }
}

Request for additional weights

Hello,
Thanks a lot for creating the project. Really appreciate it - good job! :)

Could you please add some extra elements?

Upscaler:

  • ESRGAN/4x-UltraMix_Smooth.pth

ADetailer models

  • hand_yolov8s.pt
  • face_yolov8m.pt
  • person_yolov8m-seg.pt

Segmentation

  • sam_vit_b_01ec64.pth

I'm trying to run this workflow:

{ "73": { "inputs": { "ckpt_name": "epicrealism_naturalSinRC1VAE.safetensors", "vae_name": "Baked VAE", "clip_skip": -1, "lora_name": "None", "lora_model_strength": 0.03, "lora_clip_strength": 2, "positive": "cat", "negative": "dog", "token_normalization": "none", "weight_interpretation": "comfy", "empty_latent_width": 832, "empty_latent_height": 960, "batch_size": 1 }, "class_type": "Efficient Loader", "_meta": { "title": "Efficient Loader" } }, "74": { "inputs": { "seed": 584918518232653, "steps": 30, "cfg": 7, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 1, "preview_method": "auto", "vae_decode": "true", "model": [ "73", 0 ], "positive": [ "73", 1 ], "negative": [ "73", 2 ], "latent_image": [ "73", 3 ], "optional_vae": [ "73", 4 ] }, "class_type": "KSampler (Efficient)", "_meta": { "title": "KSampler (Efficient)" } }, "75": { "inputs": { "samples": [ "74", 3 ], "vae": [ "74", 4 ] }, "class_type": "VAEDecode", "_meta": { "title": "VAE Decode" } }, "81": { "inputs": { "upscale_by": 2, "seed": 838508927827370, "steps": 40, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 0.2, "mode_type": "Linear", "tile_width": 512, "tile_height": 512, "mask_blur": 8, "tile_padding": 32, "seam_fix_mode": "None", "seam_fix_denoise": 1, "seam_fix_width": 64, "seam_fix_mask_blur": 8, "seam_fix_padding": 16, "force_uniform_tiles": true, "tiled_decode": false, "image": [ "143", 0 ], "model": [ "74", 0 ], "positive": [ "74", 1 ], "negative": [ "74", 2 ], "vae": [ "74", 4 ], "upscale_model": [ "82", 0 ] }, "class_type": "UltimateSDUpscale", "_meta": { "title": "Ultimate SD Upscale" } }, "82": { "inputs": { "model_name": "ESRGAN/4x-UltraMix_Smooth.pth" }, "class_type": "UpscaleModelLoader", "_meta": { "title": "Load Upscale Model" } }, "83": { "inputs": { "filename_prefix": "ComfyUI", "images": [ "81", 0 ] }, "class_type": "SaveImage", "_meta": { "title": "Save Image" } }, "127": { "inputs": { "wildcard": "", "Select to add LoRA": "Select the LoRA to add to the text", "Select to add Wildcard": "Select the Wildcard to add to the text", "model": [ "74", 0 ], "clip": [ "73", 5 ], "vae": [ "73", 4 ], "positive": [ "73", 1 ], "negative": [ "73", 2 ], "bbox_detector": [ "128", 0 ], "sam_model_opt": [ "129", 0 ], "segm_detector_opt": [ "130", 1 ] }, "class_type": "ToDetailerPipe", "_meta": { "title": "ToDetailerPipe" } }, "128": { "inputs": { "model_name": "bbox/face_yolov8m.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } }, "129": { "inputs": { "model_name": "sam_vit_b_01ec64.pth", "device_mode": "AUTO" }, "class_type": "SAMLoader", "_meta": { "title": "SAMLoader (Impact)" } }, "130": { "inputs": { "model_name": "segm/person_yolov8m-seg.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } }, "131": { "inputs": { "guide_size": 768, "guide_size_for": true, "max_size": 1024, "seed": 838508927827370, "steps": 30, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 0.5, "feather": 5, "noise_mask": true, "force_inpaint": false, "bbox_threshold": 0.5, "bbox_dilation": 10, "bbox_crop_factor": 3, "sam_detection_hint": "center-1", "sam_dilation": 0, "sam_threshold": 0.93, "sam_bbox_expansion": 0, "sam_mask_hint_threshold": 0.7, "sam_mask_hint_use_negative": "False", "drop_size": 10, "refiner_ratio": 0.2, "cycle": 1, "inpaint_model": false, "noise_mask_feather": 10, "image": [ "75", 0 ], "detailer_pipe": [ "127", 0 ] }, "class_type": 
"FaceDetailerPipe", "_meta": { "title": "FaceDetailer (pipe)" } }, "133": { "inputs": { "images": [ "131", 1 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } }, "137": { "inputs": { "masks": [ "131", 3 ] }, "class_type": "Convert Masks to Images", "_meta": { "title": "Convert Masks to Images" } }, "138": { "inputs": { "images": [ "137", 0 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } }, "139": { "inputs": { "wildcard": "perfect hands ", "Select to add LoRA": "Select the LoRA to add to the text", "Select to add Wildcard": "Select the Wildcard to add to the text", "model": [ "74", 0 ], "clip": [ "73", 5 ], "vae": [ "73", 4 ], "positive": [ "73", 1 ], "negative": [ "73", 2 ], "bbox_detector": [ "140", 0 ], "sam_model_opt": [ "141", 0 ], "segm_detector_opt": [ "142", 1 ] }, "class_type": "ToDetailerPipe", "_meta": { "title": "ToDetailerPipe" } }, "140": { "inputs": { "model_name": "bbox/hand_yolov8s.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } }, "141": { "inputs": { "model_name": "sam_vit_b_01ec64.pth", "device_mode": "AUTO" }, "class_type": "SAMLoader", "_meta": { "title": "SAMLoader (Impact)" } }, "142": { "inputs": { "model_name": "bbox/hand_yolov8s.pt" }, "class_type": "UltralyticsDetectorProvider", "_meta": { "title": "UltralyticsDetectorProvider" } }, "143": { "inputs": { "guide_size": 768, "guide_size_for": true, "max_size": 1024, "seed": 730958200346548, "steps": 30, "cfg": 8, "sampler_name": "euler_ancestral", "scheduler": "karras", "denoise": 0.5, "feather": 5, "noise_mask": true, "force_inpaint": false, "bbox_threshold": 0.5, "bbox_dilation": 10, "bbox_crop_factor": 3, "sam_detection_hint": "center-1", "sam_dilation": 0, "sam_threshold": 0.93, "sam_bbox_expansion": 0, "sam_mask_hint_threshold": 0.7, "sam_mask_hint_use_negative": "False", "drop_size": 10, "refiner_ratio": 0.2, "cycle": 1, "inpaint_model": true, "noise_mask_feather": 10, "image": [ "131", 0 ], "detailer_pipe": [ "139", 0 ] }, "class_type": "FaceDetailerPipe", "_meta": { "title": "FaceDetailer (pipe)" } }, "144": { "inputs": { "images": [ "143", 1 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } }, "149": { "inputs": { "masks": [ "143", 3 ] }, "class_type": "Convert Masks to Images", "_meta": { "title": "Convert Masks to Images" } }, "150": { "inputs": { "images": [ "149", 0 ] }, "class_type": "PreviewImage", "_meta": { "title": "Preview Image" } } }

Thanks a lot,
W

How can I add more weights and custom nodes myself?

First of all, I would like to say that this project is wonderful.
But I think it's unsustainable to expect the repo maintainer to keep adding more weights and custom nodes as requested by users.

Is there a document describing how to add custom weights and nodes?
If there isn't, it'd be great if you could write one.

Before writing the document, I'd also appreciate it if you could briefly explain here how to add custom weights and nodes. Thanks!
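
Until such a document exists, a hedged sketch of the generic ComfyUI approach is shown below: clone a custom node repository into ComfyUI/custom_nodes and install its requirements. The helper name and the pinned-commit handling are illustrative only; this repo's own clone script and weights handling may expect a different layout.

import subprocess
from pathlib import Path

# Illustrative helper (not part of this repo): install a ComfyUI custom node by
# cloning it into ComfyUI/custom_nodes and installing its requirements, if any.
def install_custom_node(repo_url: str, commit: str | None = None) -> None:
    nodes_dir = Path("ComfyUI/custom_nodes")
    target = nodes_dir / repo_url.rstrip("/").split("/")[-1].removesuffix(".git")
    if not target.exists():
        subprocess.run(["git", "clone", repo_url, str(target)], check=True)
    if commit:
        subprocess.run(["git", "-C", str(target), "checkout", commit], check=True)
    requirements = target / "requirements.txt"
    if requirements.exists():
        subprocess.run(["pip", "install", "-r", str(requirements)], check=True)

install_custom_node("https://github.com/kijai/ComfyUI-KJNodes")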

Useful Components to Add.

Hi, Thank you for the wonderful project!
It would be great if you could add the following open-source components.

Nodes (available in ComfyUI Manager)
Comfyroll Studio: https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
Safety Checker: https://github.com/42lux/ComfyUI-safety-checker

Model
AbsoluteReality: https://civitai.com/models/81458/absolutereality

Embedding Picker
FastNegativeV2: https://civitai.com/models/71961/fast-negative-embedding-fastnegativev2

LoRA
Detail Enhancer: https://civitai.com/models/82098/add-more-details-detail-enhancer-tweaker-lora

Thanks a lot!

faceid LoRAs

I do not think these are usable? They need to reside in the loras folder, not the ipadapters folder.

Request for additional weights and custom nodes

Hey!
I really appreciate the job you are doing with enabling ComfyUI workflows on Replicate. Thank you for making my life easier :)

I would like to kindly ask if you could add one extra node and a few models.

I'm intending to run this workflow:

{
"4": {
"inputs": {
"ckpt_name": "v1/epicrealism_naturalSinRC1VAE.safetensors"
},
"class_type": "CheckpointLoaderSimple",
"_meta": {
"title": "Load Checkpoint"
}
},
"5": {
"inputs": {
"width": 712,
"height": 944,
"batch_size": 1
},
"class_type": "EmptyLatentImage",
"_meta": {
"title": "Empty Latent Image"
}
},
"6": {
"inputs": {
"text": "photo of very skinny russian Анастасия Абакумов \n yong adult female, russian,skinny,(khaki eyes, long ginger hair:1.2), smile, in the empty white room, plain tshirt, wearing clothes, hyperdetailed photography, skin details,grainy,stunning, absurdes, official art, masterpiece, best quality, unity 8k wallpaper, ((photorealistic:1.4)), ultra detailed,perfect skin, (high_quality_masterpiece:1.2), soft rim lightning, modeling, closed eyes, open lips, plating with hair",
"clip": [
"4",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"7": {
"inputs": {
"text": "hands, elements in background, kid, child, children, NFSW, naked, nude, boobs, nipple, pussy, ass, bad anatomy, bad eyes, bad hands, bad nose, bad lips, poor quality, extra body parts, missing body parts, missing fingers, missing hands and legs, missing facial features, twisted body parts, twisted torso, unreal skin texture, poorly rendered clothes, extra body parts, extra hands, extra fingers, extra legs, extra abdomen, distorted body parts, distorted face, distorted arms, distorted fingers, amputation, disjointed body parts, disproportionate body parts asymmetrical body parts, bad lighting, multiple subjects, subject out of frame, shadows",
"clip": [
"4",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
},
"42": {
"inputs": {
"seed": [
"99",
2
],
"steps": 30,
"cfg": 7,
"sampler_name": "euler_ancestral",
"scheduler": "karras",
"denoise": 1,
"model": [
"4",
0
],
"positive": [
"104",
0
],
"negative": [
"7",
0
],
"latent_image": [
"5",
0
]
},
"class_type": "KSampler",
"_meta": {
"title": "KSampler"
}
},
"46": {
"inputs": {
"samples": [
"42",
0
],
"vae": [
"4",
2
]
},
"class_type": "VAEDecode",
"_meta": {
"title": "VAE Decode"
}
},
"47": {
"inputs": {
"filename_prefix": "FaceIDPlus_v2_04",
"images": [
"112",
0
]
},
"class_type": "SaveImage",
"_meta": {
"title": "Save Image"
}
},
"99": {
"inputs": {
"number_type": "integer",
"number": 780950033637.5
},
"class_type": "Constant Number",
"_meta": {
"title": "Constant Number"
}
},
"100": {
"inputs": {
"detect_hand": "enable",
"detect_body": "enable",
"detect_face": "enable",
"resolution": 512,
"image": [
"107",
0
]
},
"class_type": "OpenposePreprocessor",
"_meta": {
"title": "OpenPose Pose"
}
},
"101": {
"inputs": {
"image": "Zrzut ekranu 2024-03-4 o 23.52.28.png",
"upload": "image"
},
"class_type": "LoadImage",
"_meta": {
"title": "Load Image"
}
},
"102": {
"inputs": {
"images": [
"100",
0
]
},
"class_type": "PreviewImage",
"_meta": {
"title": "Preview Image"
}
},
"104": {
"inputs": {
"strength": 0.7000000000000001,
"conditioning": [
"6",
0
],
"control_net": [
"105",
0
],
"image": [
"100",
0
]
},
"class_type": "ControlNetApply",
"_meta": {
"title": "Apply ControlNet"
}
},
"105": {
"inputs": {
"control_net_name": "v1/control_v11p_sd15_openpose.pth"
},
"class_type": "ControlNetLoader",
"_meta": {
"title": "Load ControlNet Model"
}
},
"107": {
"inputs": {
"left": 128,
"top": 128,
"right": 128,
"bottom": 128,
"color": "#ffffff",
"transparent": false,
"image": [
"101",
0
]
},
"class_type": "comfyui-easy-padding",
"_meta": {
"title": "ComfyUI Easy Padding"
}
},
"108": {
"inputs": {
"wildcard": "",
"Select to add LoRA": "Select the LoRA to add to the text",
"Select to add Wildcard": "Select the Wildcard to add to the text",
"model": [
"4",
0
],
"clip": [
"4",
1
],
"vae": [
"4",
2
],
"positive": [
"6",
0
],
"negative": [
"6",
0
],
"bbox_detector": [
"109",
0
],
"sam_model_opt": [
"110",
0
],
"segm_detector_opt": [
"111",
1
]
},
"class_type": "ToDetailerPipe",
"_meta": {
"title": "ToDetailerPipe"
}
},
"109": {
"inputs": {
"model_name": "bbox/face_yolov8m.pt"
},
"class_type": "UltralyticsDetectorProvider",
"_meta": {
"title": "UltralyticsDetectorProvider"
}
},
"110": {
"inputs": {
"model_name": "sam_vit_b_01ec64.pth",
"device_mode": "AUTO"
},
"class_type": "SAMLoader",
"_meta": {
"title": "SAMLoader (Impact)"
}
},
"111": {
"inputs": {
"model_name": "segm/person_yolov8m-seg.pt"
},
"class_type": "UltralyticsDetectorProvider",
"_meta": {
"title": "UltralyticsDetectorProvider"
}
},
"112": {
"inputs": {
"guide_size": 786,
"guide_size_for": true,
"max_size": 1024,
"seed": 0,
"steps": 30,
"cfg": 8,
"sampler_name": "euler_ancestral",
"scheduler": "karras",
"denoise": 0.5,
"feather": 5,
"noise_mask": true,
"force_inpaint": false,
"bbox_threshold": 0.5,
"bbox_dilation": 10,
"bbox_crop_factor": 3,
"sam_detection_hint": "center-1",
"sam_dilation": 0,
"sam_threshold": 0.93,
"sam_bbox_expansion": 0,
"sam_mask_hint_threshold": 0.7,
"sam_mask_hint_use_negative": "False",
"drop_size": 10,
"refiner_ratio": 0.2,
"cycle": 1,
"inpaint_model": false,
"noise_mask_feather": 10,
"image": [
"46",
0
],
"detailer_pipe": [
"108",
0
]
},
"class_type": "FaceDetailerPipe",
"_meta": {
"title": "FaceDetailer (pipe)"
}
},
"113": {
"inputs": {
"images": [
"112",
2
]
},
"class_type": "PreviewImage",
"_meta": {
"title": "Preview Image"
}
},
"114": {
"inputs": {
"images": [
"112",
1
]
},
"class_type": "PreviewImage",
"_meta": {
"title": "Preview Image"
}
}
}

Thank you so much!
W

instant-id not working

Does instant-id not work? I used the default instant-id workflow in the examples folder and it fails. Other workflows with instant-id also keep failing.

Internal Server Error

====================================
Inputs uploaded to /tmp/inputs:
Input.MOV

Checking weights
✅ ZoeD_M12_N.pt
✅ control_v11f1p_sd15_depth.pth
✅ sd15_t2v_beta.ckpt
✅ lcm-lora-sdv1-5.safetensors
✅ vae-ft-mse-840000-ema-pruned.safetensors
✅ Realistic_Vision_V5.1.safetensors

Checking inputs

Running workflow
got prompt
Error handling request
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/aiohttp/web_app.py", line 543, in handle
resp = await handler(request)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/aiohttp/web_middlewares.py", line 114, in impl
return await handler(request)
File "/src/ComfyUI/server.py", line 47, in cache_control
response: web.Response = await handler(request)
File "/src/ComfyUI/server.py", line 474, in post_prompt
valid = execution.validate_prompt(prompt)
File "/src/ComfyUI/execution.py", line 620, in validate_prompt
class_ = nodes.NODE_CLASS_MAPPINGS[prompt[x]['class_type']]
KeyError: 'ADE_LoadAnimateDiffModel'
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/cog/server/worker.py", line 217, in _predict
result = predict(**payload)
File "/src/predict.py", line 94, in predict
self.comfyUI.run_workflow(wf)
File "/src/helpers/comfyui.py", line 213, in run_workflow
prompt_id = self.queue_prompt(workflow)
File "/src/helpers/comfyui.py", line 150, in queue_prompt
output = json.loads(urllib.request.urlopen(req).read())
File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 500: Internal Server Error

I'm trying to run an LCM AnimateDiff workflow - not sure what I am doing wrong here.

IpAdapter NOT working

Hello
IPAdapter had a new breaking update recently - sad, but the upgrade is great.

I'm using the public version because I don't know how to code,
and this is the fastest way for me to test things on a 48 GB GPU.
So it would be cool to update all the custom nodes at least once a week.

Impact on Replicate Run time

Let's say I create 100 predictions using the Replicate API - how does that impact the run time cost?

Is the queue time in ComfyUI counted toward the Replicate run time?

New weights & LoRAs

Hey @fofr !
First, thank you for creating this public model on Replicate :)

What is the process to request new LoRAs and weights?

Essential LORAs and EMBEDDINGs

Request for some additional custom nodes

Hello! Thanks for putting this together.

We're currently using a custom forked version of this image to implement this workflow: https://openart.ai/workflows/publicprompts/stylized-text/st72sLEJUMZtznCAKsiM. We were wondering if it would be possible / desired to add a few sets of custom nodes to this model so that we could use the base model directly and other folks in the community could easily use the same workflow.

The custom node sets currently not included that are required for the stylized-text workflow:

If you are OK with including those, I'd be happy to prepare a PR to add them. Let me know what you think.

Request for weights and nodes

Hi, thanks a lot for putting this up - it's really great work :)

I would like to request the following nodes and weights:

Node

Weights

  • dreamlabsoil_V2_v2.safetensors
  • COOLKIDS_MERGE_V2.5.safetensors
  • IPAdapter_image_encoder_sd15.safetensors

Here's the workflow I'm trying to run

{
  "3": {
    "inputs": {
      "seed": 54429869184980,
      "steps": 30,
      "cfg": 5.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 1,
      "model": [
        "41",
        0
      ],
      "positive": [
        "10",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "71",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "4": {
    "inputs": {
      "ckpt_name": "dreamlabsoil_V2_v2.safetensors"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {
      "title": "Load Checkpoint"
    }
  },
  "6": {
    "inputs": {
      "text": "boy",
      "clip": [
        "13",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Positive prompt"
    }
  },
  "7": {
    "inputs": {
      "text": "text, watermark, distorted",
      "clip": [
        "13",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Negative prompt"
    }
  },
  "8": {
    "inputs": {
      "samples": [
        "3",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "10": {
    "inputs": {
      "strength": 1,
      "conditioning": [
        "6",
        0
      ],
      "control_net": [
        "11",
        0
      ],
      "image": [
        "37",
        0
      ]
    },
    "class_type": "ControlNetApply",
    "_meta": {
      "title": "Apply ControlNet"
    }
  },
  "11": {
    "inputs": {
      "control_net_name": "control_v11p_sd15_inpaint.pth"
    },
    "class_type": "ControlNetLoader",
    "_meta": {
      "title": "Load ControlNet Model"
    }
  },
  "13": {
    "inputs": {
      "lora_name": "ip-adapter-faceid-plus_sd15_lora.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": [
        "18",
        0
      ],
      "clip": [
        "18",
        1
      ]
    },
    "class_type": "LoraLoader",
    "_meta": {
      "title": "Load LoRA"
    }
  },
  "18": {
    "inputs": {
      "lora_name": "COOLKIDS_MERGE_V2.5.safetensors",
      "strength_model": 1,
      "strength_clip": 1,
      "model": [
        "4",
        0
      ],
      "clip": [
        "4",
        1
      ]
    },
    "class_type": "LoraLoader",
    "_meta": {
      "title": "Load LoRA"
    }
  },
  "37": {
    "inputs": {
      "image": [
        "38",
        0
      ],
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "InpaintPreprocessor",
    "_meta": {
      "title": "Inpaint Preprocessor"
    }
  },
  "38": {
    "inputs": {
      "image": "template.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Input 1 (template image)"
    }
  },
  "41": {
    "inputs": {
      "weight": 1,
      "noise": 0,
      "weight_type": "original",
      "start_at": 0,
      "end_at": 1,
      "faceid_v2": false,
      "weight_v2": 1,
      "unfold_batch": false,
      "ipadapter": [
        "51",
        0
      ],
      "clip_vision": [
        "49",
        0
      ],
      "insightface": [
        "44",
        0
      ],
      "image": [
        "47",
        0
      ],
      "model": [
        "13",
        0
      ]
    },
    "class_type": "IPAdapterApplyFaceID",
    "_meta": {
      "title": "Apply IPAdapter FaceID"
    }
  },
  "44": {
    "inputs": {
      "provider": "CPU"
    },
    "class_type": "InsightFaceLoader",
    "_meta": {
      "title": "Load InsightFace"
    }
  },
  "47": {
    "inputs": {
      "image": "face.png",
      "upload": "image"
    },
    "class_type": "LoadImage",
    "_meta": {
      "title": "Input 3 (face image)"
    }
  },
  "49": {
    "inputs": {
      "clip_name": "IPAdapter_image_encoder_sd15.safetensors"
    },
    "class_type": "CLIPVisionLoader",
    "_meta": {
      "title": "Load CLIP Vision"
    }
  },
  "51": {
    "inputs": {
      "ipadapter_file": "ip-adapter-faceid-plus_sd15.bin"
    },
    "class_type": "IPAdapterModelLoader",
    "_meta": {
      "title": "Load IPAdapter Model"
    }
  },
  "68": {
    "inputs": {
      "pixels": [
        "38",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEEncode",
    "_meta": {
      "title": "VAE Encode"
    }
  },
  "71": {
    "inputs": {
      "samples": [
        "68",
        0
      ],
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "SetLatentNoiseMask",
    "_meta": {
      "title": "Set Latent Noise Mask"
    }
  },
  "74": {
    "inputs": {
      "seed": 862101002343922,
      "steps": 30,
      "cfg": 5.5,
      "sampler_name": "dpmpp_2m",
      "scheduler": "karras",
      "denoise": 0.35000000000000003,
      "model": [
        "41",
        0
      ],
      "positive": [
        "93",
        0
      ],
      "negative": [
        "7",
        0
      ],
      "latent_image": [
        "123",
        0
      ]
    },
    "class_type": "KSampler",
    "_meta": {
      "title": "KSampler"
    }
  },
  "75": {
    "inputs": {
      "pixels": [
        "8",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEEncode",
    "_meta": {
      "title": "VAE Encode"
    }
  },
  "76": {
    "inputs": {
      "samples": [
        "74",
        0
      ],
      "vae": [
        "4",
        2
      ]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "77": {
    "inputs": {
      "images": [
        "76",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview output"
    }
  },
  "93": {
    "inputs": {
      "strength": 1,
      "conditioning": [
        "104",
        0
      ],
      "control_net": [
        "97",
        0
      ],
      "image": [
        "94",
        0
      ]
    },
    "class_type": "ControlNetApply",
    "_meta": {
      "title": "Apply ControlNet"
    }
  },
  "94": {
    "inputs": {
      "detect_hand": "disable",
      "detect_body": "disable",
      "detect_face": "enable",
      "resolution": 512,
      "image": [
        "8",
        0
      ]
    },
    "class_type": "OpenposePreprocessor",
    "_meta": {
      "title": "OpenPose Pose"
    }
  },
  "96": {
    "inputs": {
      "images": [
        "94",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview openpose"
    }
  },
  "97": {
    "inputs": {
      "control_net_name": "control_v11p_sd15_openpose.pth"
    },
    "class_type": "ControlNetLoader",
    "_meta": {
      "title": "Load ControlNet Model"
    }
  },
  "104": {
    "inputs": {
      "text": "face, white shirt",
      "clip": [
        "13",
        1
      ]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Positive prompt"
    }
  },
  "109": {
    "inputs": {
      "image": "mask.png",
      "channel": "red",
      "upload": "image"
    },
    "class_type": "LoadImageMask",
    "_meta": {
      "title": "Input 2 (template mask)"
    }
  },
  "110": {
    "inputs": {
      "iterations": 16,
      "masks": [
        "109",
        0
      ]
    },
    "class_type": "Mask Dilate Region",
    "_meta": {
      "title": "Mask Dilate Region"
    }
  },
  "111": {
    "inputs": {
      "radius": 8,
      "masks": [
        "110",
        0
      ]
    },
    "class_type": "Mask Gaussian Region",
    "_meta": {
      "title": "Mask Gaussian Region"
    }
  },
  "112": {
    "inputs": {
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "MaskToImage",
    "_meta": {
      "title": "Convert Mask to Image"
    }
  },
  "113": {
    "inputs": {
      "blend_percentage": 0.5,
      "image_a": [
        "38",
        0
      ],
      "image_b": [
        "112",
        0
      ]
    },
    "class_type": "Image Blend",
    "_meta": {
      "title": "Image Blend"
    }
  },
  "120": {
    "inputs": {
      "images": [
        "113",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview image-mask blend"
    }
  },
  "122": {
    "inputs": {
      "images": [
        "8",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview output 1"
    }
  },
  "123": {
    "inputs": {
      "samples": [
        "75",
        0
      ],
      "mask": [
        "111",
        0
      ]
    },
    "class_type": "SetLatentNoiseMask",
    "_meta": {
      "title": "Set Latent Noise Mask"
    }
  },
  "127": {
    "inputs": {
      "blend_percentage": 0.5,
      "image_a": [
        "8",
        0
      ],
      "image_b": [
        "112",
        0
      ]
    },
    "class_type": "Image Blend",
    "_meta": {
      "title": "Image Blend"
    }
  },
  "128": {
    "inputs": {
      "images": [
        "127",
        0
      ]
    },
    "class_type": "PreviewImage",
    "_meta": {
      "title": "Preview image-mask blend"
    }
  },
  "132": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": [
        "76",
        0
      ]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Output save image"
    }
  }
}

Thanks a lot :)

immediate response / async processing

Usually on Replicate I get an immediate response containing a GET URL that I can poll for prediction status, which allows me to return fast without worrying about users navigating away, etc.

Is something like that possible with a ComfyUI workflow? I was surprised to see that the Replicate Node.js client waits around for the workflow to fully complete, and as far as I can tell there's no simple way for me to persist something (like a prediction ID) that I could retrieve and check later if the client went away.

Here's the code for my dummy Next.js API route, based on the example:

import Replicate from "replicate";

import { NextApiRequest, NextApiResponse } from "next";

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const replicate = new Replicate({
    auth: process.env.REPLICATE_API_TOKEN,
  });

  const workflow = {
    "3": {
      "inputs": {
        "seed": 156680208700286,
        "steps": 20,
        "cfg": 8,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 1,
        "model": [
          "4",
          0
        ],
        "positive": [
          "6",
          0
        ],
        "negative": [
          "7",
          0
        ],
        "latent_image": [
          "5",
          0
        ]
      },
      "class_type": "KSampler",
      "_meta": {
        "title": "KSampler"
      }
    },
    "4": {
      "inputs": {
        "ckpt_name": "dreamshaperXL_alpha2Xl10.safetensors"
      },
      "class_type": "CheckpointLoaderSimple",
      "_meta": {
        "title": "Load Checkpoint"
      }
    },
    "5": {
      "inputs": {
        "width": 1024,
        "height": 1024,
        "batch_size": 4
      },
      "class_type": "EmptyLatentImage",
      "_meta": {
        "title": "Empty Latent Image"
      }
    },
    "6": {
      "inputs": {
        "text": "beautiful scenery nature glass bottle landscape, purple galaxy bottle,",
        "clip": [
          "4",
          1
        ]
      },
      "class_type": "CLIPTextEncode",
      "_meta": {
        "title": "CLIP Text Encode (Prompt)"
      }
    },
    "7": {
      "inputs": {
        "text": "text, watermark",
        "clip": [
          "4",
          1
        ]
      },
      "class_type": "CLIPTextEncode",
      "_meta": {
        "title": "CLIP Text Encode (Prompt)"
      }
    },
    "8": {
      "inputs": {
        "samples": [
          "3",
          0
        ],
        "vae": [
          "4",
          2
        ]
      },
      "class_type": "VAEDecode",
      "_meta": {
        "title": "VAE Decode"
      }
    },
    "9": {
      "inputs": {
        "filename_prefix": "ComfyUI",
        "images": [
          "8",
          0
        ]
      },
      "class_type": "SaveImage",
      "_meta": {
        "title": "Save Image"
      }
    }
  }
  
  const output = await replicate.run(
    "fofr/any-comfyui-workflow:f8bbe354839d762488971160872eac54dc1c9e61462e91743d360d1d640020c6",
    {
      input: {
        workflow_json: JSON.stringify(workflow, null, 2),
        randomise_seeds: true,
        return_temp_files: false,
      },
    }
  );
  console.log(output);
  res.json({ output: JSON.stringify(output) });
}
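
Replicate's clients also expose a lower-level predictions API that returns immediately with an ID you can persist and poll later (prediction creation also accepts a webhook if polling is undesirable). Below is a minimal sketch using the Python client - the Node.js client has an analogous predictions.create / predictions.get pair - reusing the version string from the example above; the workflow placeholder stands in for the same workflow JSON.

import json
import replicate

workflow = {"...": "..."}  # placeholder for the same workflow JSON as above

# Create the prediction without blocking until it completes.
prediction = replicate.predictions.create(
    version="f8bbe354839d762488971160872eac54dc1c9e61462e91743d360d1d640020c6",
    input={
        "workflow_json": json.dumps(workflow),
        "randomise_seeds": True,
        "return_temp_files": False,
    },
)
print(prediction.id)  # persist this (database, session, ...) and return immediately

# Later, from any process, look the prediction up by ID and check its status.
latest = replicate.predictions.get(prediction.id)
print(latest.status)  # "starting", "processing", "succeeded", "failed", ...
if latest.status == "succeeded":
    print(latest.output)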

Error running UltimateSDUpscale

I am getting empty outputs when using UltimateSDUpscale. Here are the Replicate logs:

Checking inputs
====================================
Checking weights
⏳ Downloading juggernautXL_v8Rundiffusion.safetensors to ComfyUI/models/checkpoints
⌛️ Downloaded juggernautXL_v8Rundiffusion.safetensors in 30.40s, size: 6776.19MB
✅ juggernautXL_v8Rundiffusion.safetensors
⏳ Downloading 4x_NMKD-Siax_200k.pth to ComfyUI/models/upscale_models
⌛️ Downloaded 4x_NMKD-Siax_200k.pth in 0.03s, size: 0.05MB
✅ 4x_NMKD-Siax_200k.pth
====================================
Randomising seed to 150777435
Randomising seed to 1634076115
Running workflow
got prompt
Executing node 2, title: Load Checkpoint, class type: CheckpointLoaderSimple
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Executing node 4, title: positive prompt, class type: CLIPTextEncode
Requested to load SDXLClipModel
Loading 1 new model
Executing node 5, title: negative prompt, class type: CLIPTextEncode
Executing node 6, title: Empty Latent Image, class type: EmptyLatentImage
Executing node 1, title: KSampler, class type: KSampler
Requested to load SDXL
Loading 1 new model
  0%|          | 0/18 [00:00<?, ?it/s]
  6%|▌         | 1/18 [00:00<00:14,  1.16it/s]
 11%|█         | 2/18 [00:01<00:09,  1.65it/s]
 17%|█▋        | 3/18 [00:01<00:07,  1.90it/s]
 22%|██▏       | 4/18 [00:02<00:06,  2.06it/s]
 28%|██▊       | 5/18 [00:02<00:06,  2.15it/s]
 33%|███▎      | 6/18 [00:02<00:05,  2.21it/s]
 39%|███▉      | 7/18 [00:03<00:04,  2.25it/s]
 44%|████▍     | 8/18 [00:03<00:04,  2.26it/s]
 50%|█████     | 9/18 [00:04<00:03,  2.28it/s]
 56%|█████▌    | 10/18 [00:04<00:03,  2.30it/s]
 61%|██████    | 11/18 [00:05<00:03,  2.33it/s]
 67%|██████▋   | 12/18 [00:05<00:02,  2.34it/s]
 72%|███████▏  | 13/18 [00:06<00:02,  2.32it/s]
 78%|███████▊  | 14/18 [00:06<00:01,  2.35it/s]
 83%|████████▎ | 15/18 [00:06<00:01,  2.40it/s]
 89%|████████▉ | 16/18 [00:07<00:00,  2.43it/s]
 94%|█████████▍| 17/18 [00:07<00:00,  2.48it/s]
100%|██████████| 18/18 [00:07<00:00,  2.97it/s]
100%|██████████| 18/18 [00:07<00:00,  2.31it/s]
Requested to load AutoencoderKL
Loading 1 new model
Executing node 7, title: VAE Decode, class type: VAEDecode
Executing node 25, title: Load Upscale Model, class type: UpscaleModelLoader
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "/src/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/src/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/src/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/src/ComfyUI/comfy_extras/nodes_upscale_model.py", line 20, in load_model
sd = comfy.utils.load_torch_file(model_path, safe_load=True)
File "/src/ComfyUI/custom_nodes/ComfyUI-PhotoMaker/.patches.py", line 12, in load_torch_file
pl_sd = torch.load(ckpt, map_location=device, weights_only=True)
File "/root/.pyenv/versions/3.10.6/lib/python3.10/site-packages/torch/serialization.py", line 1039, in load
raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
_pickle.UnpicklingError: Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution.Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 60
Prompt executed in 12.80 seconds
outputs:  {}
====================================
Contents of /tmp/outputs:

And here is the workflow:

{"1":{"inputs":{"seed":315482209324445,"steps":18,"cfg":2,"sampler_name":"dpmpp_sde","scheduler":"karras","denoise":1,"model":["2",0],"positive":["4",0],"negative":["5",0],"latent_image":["6",0]},"class_type":"KSampler","_meta":{"title":"KSampler"}},"2":{"inputs":{"ckpt_name":"juggernautXL_v8Rundiffusion.safetensors"},"class_type":"CheckpointLoaderSimple","_meta":{"title":"Load Checkpoint"}},"4":{"inputs":{"text":"A beautiful portrait photograph of a dragon with dark black scales, opal eyes, smoke out of nose, cinematic, detailed scales, fantasy art, hyperdetailed photograph, matte dark scales, 8k resolution,","clip":["2",1]},"class_type":"CLIPTextEncode","_meta":{"title":"positive prompt"}},"5":{"inputs":{"text":"","clip":["2",1]},"class_type":"CLIPTextEncode","_meta":{"title":"negative prompt"}},"6":{"inputs":{"width":1216,"height":832,"batch_size":1},"class_type":"EmptyLatentImage","_meta":{"title":"Empty Latent Image"}},"7":{"inputs":{"samples":["1",0],"vae":["2",2]},"class_type":"VAEDecode","_meta":{"title":"VAE Decode"}},"25":{"inputs":{"model_name":"4x_NMKD-Siax_200k.pth"},"class_type":"UpscaleModelLoader","_meta":{"title":"Load Upscale Model"}},"27":{"inputs":{"upscale_by":1.5,"seed":-1,"steps":15,"cfg":2,"sampler_name":"dpmpp_sde","scheduler":"karras","denoise":0.3,"mode_type":"Linear","tile_width":1216,"tile_height":832,"mask_blur":8,"tile_padding":32,"seam_fix_mode":"None","seam_fix_denoise":1,"seam_fix_width":64,"seam_fix_mask_blur":8,"seam_fix_padding":16,"force_uniform_tiles":true,"tiled_decode":false,"image":["7",0],"model":["2",0],"positive":["4",0],"negative":["5",0],"vae":["2",2],"upscale_model":["25",0]},"class_type":"UltimateSDUpscale","_meta":{"title":"Ultimate SD Upscale"}},"28":{"inputs":{"filename_prefix":"createit","images":["27",0]},"class_type":"SaveImage","_meta":{"title":"Save Image"}}}

Any idea what is going wrong?
Thanks
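
One hedged observation from the log above: 4x_NMKD-Siax_200k.pth is reported as downloading in 0.03s at 0.05MB, which is far smaller than a typical ESRGAN-style upscaler, so the weights-only unpickling failure may simply mean the downloaded file is truncated or is not a model file at all. An illustrative sanity check (the path and size threshold are assumptions) might look like:

import os
import zipfile

# Illustrative check: flag a downloaded .pth that is suspiciously small or that is
# neither a zip archive (newer torch format) nor a pickle stream (legacy format).
def looks_like_torch_checkpoint(path: str, min_bytes: int = 1_000_000) -> bool:
    if os.path.getsize(path) < min_bytes:
        return False  # e.g. an HTML error page saved in place of the weights
    if zipfile.is_zipfile(path):
        return True
    with open(path, "rb") as f:
        return f.read(1) == b"\x80"  # pickle protocol marker used by legacy torch files

print(looks_like_torch_checkpoint("ComfyUI/models/upscale_models/4x_NMKD-Siax_200k.pth"))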

ComfyUI-Impact-Pack is missing dependencies

Issue Report

Description

When running the script main.py, the program encounters unexpected behavior. I tried installing with pip install -r requirements.txt inside the ComfyUI-Impact-Pack folder and also python install.py, but I still get the same error. I also ran ./scripts/clone_plugins.sh after cloning the repo.

Steps to Reproduce

  1. Run the script main.py with the command python main.py --listen 0.0.0.0.

Expected Behavior

The script should run without any issues and provide the expected output.

Actual Behavior

The script encounters an issue during execution.

Environment

  • Operating System: Linux (Ubuntu)
  • Shell: zsh 5.9
  • Torch Version: 2.2.1+cu121
  • GPU: NVIDIA GeForce RTX 3080

Running 'bash' in Docker with the current directory mounted as a volume...
root@c484976c6fd1:/src# cd ComfyUI/
root@c484976c6fd1:/src/ComfyUI# python main.py --listen 0.0.0.0
Total VRAM 9976 MB, total RAM 31820 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
[tinyterraNodes] Loaded
### Loading: ComfyUI-Inspire-Pack (V0.59)
------------------------------------------
Comfyroll Studio v1.76 :  175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------
Total VRAM 9976 MB, total RAM 31820 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
VAE dtype: torch.bfloat16
[ReActor] - STATUS - Running v0.4.1-b9 in ComfyUI
Torch version: 2.2.1+cu121
FizzleDorf Custom Nodes: Loaded
### Loading: ComfyUI-Impact-Pack (V4.76.1)
### ComfyUI-Impact-Pack is missing dependencies

LCM model for AnimateDiff + More LCM weights

The AnimateDiff models have been updated with the Lightning models, but the AnimateDiff LCM motion model is missing.
https://civitai.com/models/326698/animatediff-lcm-motion-model

Suggestions for SD 1.5 LCM models that work well with AnimateDiff:
https://civitai.com/models/232228?modelVersionId=262126
https://civitai.com/models/306814/photon-lcm
https://civitai.com/models/337966/delusions-lcm

Most of my AnimateDiff workflows use only the LCM workflow, as it's fast and better quality than Lightning.
