
sd-webui-animatediff's Introduction

AnimateDiff for Stable Diffusion WebUI

I have recently added a non-commercial license to this extension. If you want to use this extension for commercial purposes, please contact me via email.

This extension integrates AnimateDiff, with CLI support, into AUTOMATIC1111 Stable Diffusion WebUI alongside ControlNet, aiming to form an easy-to-use AI video toolkit. You can generate GIFs in exactly the same way as you generate images after enabling this extension.

This extension implements AnimateDiff in a different way from the original repository: it inserts motion modules into the UNet at runtime, so you do not need to reload your model weights if you don't want to.
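As a rough illustration of the runtime-injection idea (a minimal sketch with placeholder module names, not the extension's actual code): each spatial UNet block gets a temporal motion module appended for the duration of a generation and stripped again afterwards, so the checkpoint weights never need to be reloaded.

```python
import torch.nn as nn

class MotionModule(nn.Module):
    """Placeholder for a temporal layer; the real motion module attends across frames."""
    def __init__(self):
        super().__init__()
        self.temporal = nn.Identity()  # stands in for temporal self-attention

    def forward(self, x):
        return self.temporal(x)

def inject(unet_blocks: nn.ModuleList, motion_modules):
    # Append a motion module to the end of each spatial block, in place.
    for block, mm in zip(unet_blocks, motion_modules):
        block.append(mm)

def remove(unet_blocks: nn.ModuleList):
    # Strip the motion modules again so the UNet returns to its image-only form.
    for block in unet_blocks:
        if len(block) > 0 and isinstance(block[-1], MotionModule):
            del block[-1]

# Toy "UNet": two blocks of plain convolutions.
blocks = nn.ModuleList([nn.Sequential(nn.Conv2d(4, 4, 3, padding=1)) for _ in range(2)])
inject(blocks, [MotionModule() for _ in range(2)])
remove(blocks)
```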

You might also be interested in another extension I created: Segment Anything for Stable Diffusion WebUI, which could be quite useful for inpainting.

Forge users should either check out the forge/master branch in this repository or use sd-forge-animatediff. They will be kept in sync.

Table of Contents

Update | Future Plan | Model Zoo | Documentation | Tutorial | Thanks | Star History | Sponsor

Update

  • v2.0.0-a in 03/02/2024: The whole extension has been reworked to make it easier to maintain.
    • Prerequisite: WebUI >= 1.8.0 & ControlNet >=1.1.441 & PyTorch >= 2.0.0
    • New feature:
      • ControlNet inpaint / IP-Adapter prompt travel / SparseCtrl / ControlNet keyframe, see ControlNet V2V
      • FreeInit, see FreeInit
    • Minor: mm filter based on sd version (click refresh button if you switch between SD1.5 and SDXL) / display extension version in infotext
    • Breaking change: You must use Motion LoRA, Hotshot-XL, AnimateDiff V3 Motion Adapter from my huggingface repo.
  • v2.0.1-a in 07/12/2024: Support AnimateLCM from MMLab@CUHK. See here for instruction.

Future Plan

Although OpenAI Sora is far better at following complex text prompts and generating complex scenes, we believe that OpenAI will NOT open source Sora or any other products they have released recently. My current plan is to continue developing this extension until an open-source video model is released with a strong ability to generate complex scenes, easy customization, and a good ecosystem like SD1.5.

We will try our best to bring interesting research into both WebUI and Forge as long as we can. Not all research will be implemented. You are welcome to submit a feature request if you find something interesting. We are also open to learning from other equivalent software.

That said, due to the notorious difficulty of maintaining sd-webui-controlnet, we do NOT plan to implement ANY new research into WebUI if it touches "reference control", such as Magic Animate. Such features will be Forge only. Some advanced features in ControlNet Forge Integrated, such as ControlNet per-frame mask, will also be Forge only. I really hope I could find the bandwidth to rework sd-webui-controlnet, but it would require a huge amount of time.

Model Zoo

I am maintaining a HuggingFace repo that provides all official models in fp16 & safetensors format. You are highly recommended to use my links. You MUST use my links to download Motion LoRA, Hotshot-XL, and the AnimateDiff V3 Motion Adapter. For all other models you may still use the old links if you want.
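As an illustration, a minimal sketch of fetching one motion module into the extension's model folder with `huggingface_hub`; the repo id and filename below are placeholders, so substitute the actual entries listed in the repo:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and filename: replace them with the actual entries
# listed in the HuggingFace repo mentioned above.
path = hf_hub_download(
    repo_id="your-username/AnimateDiff-models",
    filename="mm_sd15_v3.safetensors",
    local_dir="extensions/sd-webui-animatediff/model",
)
print(f"Downloaded to {path}")
```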

Documentation

Tutorial

There are a lot of wonderful video tutorials on YouTube and bilibili; you should check those out for now. A series of updates is on the way, and I don't want to work on my own tutorial before I am satisfied with the available features. An official tutorial will come when that point is reached.

Thanks

We thank all developers and community users who contribute to this repository in many ways.

Star History

Star History Chart

Sponsor

You can sponsor me via WeChat, AliPay or PayPal. You can also support me via ko-fi or afdian.

WeChat AliPay PayPal

sd-webui-animatediff's People

Contributors

advtech92, alexpinilla, asdfgh, clonephaze, continue-revolution, fluttyproger, hsyhhssyy, huchenlei, jeryzeng, kevmak, kohakublueleaf, light-and-ray, neversay, raziel619, rbfussell, remixer-dec, rjkip, spensercai, thiswinex, wfjsw, yuchen1984, zappityzap


sd-webui-animatediff's Issues

memory leaks

Expected behavior

thanks for the amazing extension!

It works out of the box on an NVIDIA 3060; after the last update it quietly generates 512x768 GIFs.

There is a suspicion of a memory leak, since after a couple of generations AUTOMATIC1111 crashes due to lack of memory.

[Bug]: Incorrect FPS value calculation

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

I have to set FPS to, say, 200 to get results consistent with everyone else. The default settings are clearly wrong.

See this Reddit comment thread; I've tested the code fix and it works, so please port over what the comments recommend: https://www.reddit.com/r/StableDiffusion/comments/152n2cr/comment/jsfuuva/?utm_source=share&utm_medium=web2x&context=3
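For reference, a minimal sketch of how FPS maps onto GIF frame duration, assuming a Pillow-style GIF writer where `duration` is milliseconds per frame (this is not the extension's actual code):

```python
from PIL import Image

def save_gif(frames: list, path: str, fps: int = 8):
    # Pillow's GIF writer expects `duration` in milliseconds per frame, so an
    # 8 FPS clip needs 1000 / 8 = 125 ms per frame. Passing the FPS value (or a
    # value in seconds) directly makes the GIF play far too slowly.
    duration_ms = int(round(1000 / fps))
    frames[0].save(
        path,
        save_all=True,
        append_images=frames[1:],
        duration=duration_ms,
        loop=0,
    )
```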

Steps to reproduce the problem

Run the extension with default settings. The GIF will be ultra slow.

What should have happened?

Gif should run at normal speed

Commit where the problem happens

Latest

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

Doesn't matter

Console logs

Doesn't matter

Additional information

No response

[Feature]: Support more than 24 frames

Expected behavior

Well, I modified the relevant parts of the UI code and the "max_len" arguments, but it appears that the actual model itself doesn't support more than 24 frames at a time. Is there any way to make this extension work for an (ideally unlimited) number of frames?
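For context (not a fix): the v1 motion modules carry a fixed-length temporal positional encoding, which is what caps the number of frames they can attend over at once. Below is a hedged sketch for checking that length in a downloaded checkpoint; the `pos_encoder` key substring is an assumption about the state-dict layout:

```python
import torch

# Load a motion module checkpoint and report the length of any temporal
# positional-encoding tensors it contains (assumed key substring: "pos_encoder").
state = torch.load("mm_sd_v15.ckpt", map_location="cpu")
state = state.get("state_dict", state)

for name, tensor in state.items():
    if "pos_encoder" in name and hasattr(tensor, "shape"):
        # Shape is typically (1, max_frames, channels); max_frames is the cap.
        print(name, tuple(tensor.shape))
        break
```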

[Bug]: Installation not complete

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

I've installed the extension.
I see AnimateDiff in Settings, and I have the option in the txt2img and img2img panels.
My Automatic instance didn't auto-download the motion modules, so I downloaded them manually and put them in \extensions\sd-webui-animatediff\model.

When I enable AnimateDiff, it just produces an image, no GIF. There are no messages related to this extension in the console.

Setup: Windows 11 with xformers installed

Steps to reproduce the problem

Pretty much what I described
install from github link
download models

What should have happened?

animated gif output

Commit where the problem happens

webui: 1.4.1
extension:

What browsers do you use to access the UI ?

No response

Command Line Arguments

-

Console logs

nothing at all apart from typical image generation progress

Additional information

No response

[Bug]:

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

image

  1. Adding an Embedding to the negative prompt causes the generated animation to split into 2 segments, producing a discontinuous GIF

Steps to reproduce the problem

  1. Adding an Embedding to the negative prompt causes the generated animation to split into 2 segments, producing a discontinuous GIF

image

What should have happened?

  1. Adding an Embedding to the negative prompt causes the generated animation to split into 2 discontinuous GIF segments; it should generate a continuous, coherent GIF instead

Commit where the problem happens

webui: 1.5.1
extension: 1.2.1

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

none

Console logs

none

Additional information

No response

[Feature]: OpenPose Control

Expected behavior

Is it possible to use this alongside ControlNet OpenPose to generate specific poses for characters in videos?

Grey issue fix = Crank that CFG (workaround)

I found that if you really crank the CFG (I'm talking 17-25), the grey-out issue diminishes, but the image DOESN'T burn to a crisp like it would if you did a normal txt2img with that high a CFG.

I'm sure there will be a more elegant solution, but it works.

Thanks for the extension, I couldn't get the main project working. :D

[Bug]: Doesn't do anything

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

It doesn't do anything; I mean it outputs a GIF, but it contains only 1 frame.
No errors seem to be displayed. All other plugins work correctly. I set the video frame number to 16 and frames per second to 8.

Steps to reproduce the problem

  1. Go to UI
  2. Enable AnimateDiff
  3. Press Generate

What should have happened?

Animated GIF

Commit where the problem happens

version: [v1.4.0] •  python: 3.10.7  •  torch: 2.0.0+cu118  •  xformers: 0.0.20  •  gradio: 3.32.0

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--listen --port 7861  --skip-version-check --skip-torch-cuda-test --skip-python-version-check --opt-split-attention-v1 --xformers --no-half-vae --api --vae-path .\\payload\\vae-ft-mse-840000-ema-pruned.ckpt --enable-insecure-extension-access

*I tried removing these args; it didn't help.

Console logs

2023-07-18 17:25:06,318 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-07-18 17:25:06,318 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-18 17:25:06,318 - AnimateDiff - INFO - Removal finished.
2023-07-18 17:25:06,319 - AnimateDiff - INFO - Merging images into GIF.
2023-07-18 17:25:06,380 - AnimateDiff - INFO - AnimateDiff process end.

Additional information

Windows 10, RTX 3090

[Bug]:

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

The extension just generates 16 (the frame count) separate images that do not resemble each other, and then stitches them into a gif. I don't believe the motion module is used at all. Afterwards, it completely breaks the webUI, and this error occurs for any generation afterwards:

(The full traceback is reproduced under Console logs below.)

Steps to reproduce the problem

  1. Go to txt2img
  2. Enable Animatediff
  3. Press generate
  4. Does as above says

What should have happened?

It should have generated a gif using the motion module instead of 16 images that reflect my prompt, and then prevents any more use of my webui until a restart.

Commit where the problem happens

webui: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
extension: 88a04c3

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--deepdanbooru --no-half-vae --xformers

Note*: Attempted without xformers as well.

Console logs

Traceback (most recent call last):
      File "B:\AiGen\stable-diffusion-webui\modules\call_queue.py", line 55, in f
        res = list(func(*args, **kwargs))
      File "B:\AiGen\stable-diffusion-webui\modules\call_queue.py", line 35, in f
        res = func(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
        processed = processing.process_images(p)
      File "B:\AiGen\stable-diffusion-webui\modules\processing.py", line 620, in process_images
        res = process_images_inner(p)
      File "B:\AiGen\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "B:\AiGen\stable-diffusion-webui\modules\processing.py", line 992, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "B:\AiGen\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 222, in sample
        samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
      File "B:\AiGen\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 51, in launch_sampling
        return func()
      File "B:\AiGen\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 222, in <lambda>
        samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
        samples, intermediates = self.ddim_sampling(conditioning, size,
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
        outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
      File "B:\AiGen\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 58, in p_sample_ddim_hook
        res = self.orig_p_sample_ddim(x_dec, cond, ts, *args, unconditional_conditioning=unconditional_conditioning, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 212, in p_sample_ddim
        model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
      File "B:\AiGen\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "B:\AiGen\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 19, in mm_tes_forward
        x = layer(x, emb)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 249, in forward
        return checkpoint(
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 262, in _forward
        h = self.in_layers(x)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
        input = module(input)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "B:\AiGen\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "B:\AiGen\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 226, in forward
        return super().forward(x.float()).type(x.dtype)
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
        return F.group_norm(
      File "B:\AiGen\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [32, 5120, 12, 8]

---

Additional information

No response

[Bug]: AttributeError: 'str' object has no attribute 'height'

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Instead of generating a moving image
AttributeError: 'str' object has no attribute 'height'

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Generate moving pictures, no errors reported.

Commit where the problem happens

webui:
extension: animatediff

What browsers do you use to access the UI ?

No response

Command Line Arguments

no

Console logs

2023-07-30 16:10:55,846 - shared.py [line:200] - INFO: Starting job task(n9qtofso90h7tar)
2023-07-30 16:10:55,846 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v15.ckpt.
2023-07-30 16:10:55,849 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-07-30 16:10:55,849 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-30 16:10:55,849 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-30 16:10:55,849 - AnimateDiff - INFO - Injection finished.
2023-07-30 16:10:55,851 - dynamic_prompting.py [line:509] - INFO: Prompt matrix will create 16 images in a total of 1 batches.
Miaoshouai boot assistant: Memory Released!
2023-07-30 16:11:25,342 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-07-30 16:11:25,342 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-30 16:11:25,343 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-07-30 16:11:25,343 - AnimateDiff - INFO - Removal finished.
2023-07-30 16:11:25,343 - AnimateDiff - INFO - Merging images into GIF.
2023-07-30 16:11:27,255 - AnimateDiff - INFO - AnimateDiff process end.
*** Error completing request
*** Arguments: ('task(n9qtofso90h7tar)', '<lora:SMDS_2_standard_Lion-000018:0.65>, desert, smds, sculpture, art, no humans, realistic, abstract, solo, hole, tentacles, lamp, metal, metal, reflection,', '((((blurry)))), ((signature)), ((watermark)), ((((blur)))), wall, ((pedestal)), circle,', [], 20, 18, False, False, 1, 1, 7, 8901234568.0, -1.0, 0, 0, 0, False, 512, 768, False, 0.35, 2, '4x-UltraSharp', 20, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001FDAAA531F0>, 0, '<span>(No stats yet, run benchmark in VRAM Estimator tab)</span>', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', True, 0, 16, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FDABACB610>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FDABAC9D50>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FDABACB1F0>, 
<scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001FDABACA140>, False, '', 0.5, True, False, '', 'Lerp', False, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\n身体\nBODY:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nBODY0.5:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\n脸部(脸型、发型、眼型、瞳色等)\nFACE:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nFACE0.5:1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0\nFACE0.2:1,0,0,0,0,0,0,0,0.2,0.6,0.8,0.2,0,0,0,0,0\n修手专用\nHAND:1,0,1,1,0.2,0,0,0,0,0,0,0,0,0,0,0,0\n服装(搭配tag使用)\nCLOTHING:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\n动作(搭配tag使用)\nPOSE:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\n上色风格(搭配tag使用)\nPALETTE:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\n角色(去风格化)\nKEEPCHAR:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\n背景(去风格化)\nKEEPBG:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\n减弱过拟合(等同于OUTALL)\nREDUCEFIT:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, None, False, '0', 'D:\\AIGC\\sd-webui-aki-v4.2\\models\\roop\\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, '🔄', 0.9, 5, '0.0001', False, 'None', '', 0.1, False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', None, None, False, None, None, False, None, None, False, None, None, False, 50, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\n身体\nBODY:1,1,1,1,1,1,1,1,0,0,0,1,1,1,1,1,1\nBODY0.5:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,1,1\n脸部(脸型、发型、眼型、瞳色等)\nFACE:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nFACE0.5:1,0,0,0,0,0,0,0,0.8,1,1,0.2,0,0,0,0,0\nFACE0.2:1,0,0,0,0,0,0,0,0.2,0.6,0.8,0.2,0,0,0,0,0\n修手专用\nHAND:1,0,1,1,0.2,0,0,0,0,0,0,0,0,0,0,0,0\n服装(搭配tag使用)\nCLOTHING:1,1,1,1,1,0,0.2,0,0.8,1,1,0.2,0,0,0,0,0\n动作(搭配tag使用)\nPOSE:1,0,0,0,0,0,0.2,1,1,1,0,0,0,0,0,0,0\n上色风格(搭配tag使用)\nPALETTE:1,0,0,0,0,0,0,0,0,0,0,0.8,1,1,1,1,1\n角色(去风格化)\nKEEPCHAR:1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0\n背景(去风格化)\nKEEPBG:1,1,1,1,1,1,0.2,1,0.2,0,0,0.8,1,1,1,0,0\n减弱过拟合(等同于OUTALL)\nREDUCEFIT:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 
'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False) {}
    Traceback (most recent call last):
      File "D:\AIGC\sd-webui-aki-v4.2\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "D:\AIGC\sd-webui-aki-v4.2\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\AIGC\sd-webui-aki-v4.2\modules\txt2img.py", line 62, in txt2img
        processed = processing.process_images(p)
      File "D:\AIGC\sd-webui-aki-v4.2\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 25, in process_images
        global_state.add_config( uuid.uuid4().hex, res.prompt[:64], shared.opts.sd_model_checkpoint, res.infotexts[0], res.images[0])
      File "D:\AIGC\sd-webui-aki-v4.2\extensions\sd-webui-prompt-history\scripts\prompt_history_script.py", line 76, in add_config
        new_height = int(new_width * img.height / img.width)
    AttributeError: 'str' object has no attribute 'height'
Hint: The Python runtime threw an exception. Please check the troubleshooting page.

---

Additional information

No response

[Bug]: generates only two different images in a GIF, no animation

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

There are no errors I think, except this warning:

2023-08-03 14:29:26,522 - AnimateDiff - WARNING - Missing keys
2023-08-03 14:29:27,045 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-08-03 14:29:27,045 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-08-03 14:29:27,046 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-08-03 14:29:27,046 - AnimateDiff - INFO - Injection finished.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [03:02<00:00, 9.15s/it]
2023-08-03 14:32:47,567 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-08-03 14:32:47,568 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-08-03 14:32:47,568 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-08-03 14:32:47,569 - AnimateDiff - INFO - Removal finished.
2023-08-03 14:32:47,569 - AnimateDiff - INFO - Merging images into GIF.
2023-08-03 14:32:49,315 - AnimateDiff - INFO - AnimateDiff process end.

But when the process ends, it produces two different still GIFs; the results are in the link. I've tried twice, closing and reopening the WebUI, and nothing changed.

link to the gifs: https://imgur.com/a/cZJd4Ll

prompt used in the second image:

masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,
Negative prompt: canvas frame, (high contrast:1.2), (over saturated:1.2), (glossy:1.1), cartoon, 3d, ((disfigured)), ((bad art)), ((b&w)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, 3d render
Steps: 10, Sampler: Euler a, CFG scale: 7, Seed: 3156615935, Size: 512x512, Model hash: f0407eaf51, Model: colorful_v21, Version: 1.5.1

another details if needed:

python: 3.10.6  •  torch: 2.0.0+cu118  •  xformers: N/A  •  gradio: 3.32.0  •  checkpoint: [f0407eaf51]

Steps to reproduce the problem

masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,
Negative prompt: canvas frame, (high contrast:1.2), (over saturated:1.2), (glossy:1.1), cartoon, 3d, ((disfigured)), ((bad art)), ((b&w)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), Photoshop, video game, ugly, tiling, poorly drawn hands, 3d render
Steps: 10, Sampler: Euler a, CFG scale: 7, Seed: 3156615935, Size: 512x512, Model hash: f0407eaf51, Model: colorful_v21, Version: 1.5.1

What should have happened?

I have no idea; the model is fine (SD 1.5), and there are no errors, but it is not generating an animation, just 2 still images stitched into a GIF at the end.

Commit where the problem happens

webui: 1.5.1
extension: 48fc19d1

What browsers do you use to access the UI ?

No response

Command Line Arguments

@echo off

set COMMANDLINE_ARGS= --skip-install  --opt-sdp-attention --opt-sdp-no-mem-attention --no-half-vae --medvram
set PYTHON=S:\\Python310\\python.exe
set GIT=
set VENV_DIR=
set --ui-debug-mode= true




call webui.bat

Console logs

2023-08-03 14:29:26,522 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-08-03 14:29:27,045 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-08-03 14:29:27,045 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-08-03 14:29:27,046 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-08-03 14:29:27,046 - AnimateDiff - INFO - Injection finished.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [03:02<00:00,  9.15s/it]
2023-08-03 14:32:47,567 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-08-03 14:32:47,568 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-08-03 14:32:47,568 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-08-03 14:32:47,569 - AnimateDiff - INFO - Removal finished.
2023-08-03 14:32:47,569 - AnimateDiff - INFO - Merging images into GIF.
2023-08-03 14:32:49,315 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [03:10<00:00,  9.51s/it]
2023-08-03 14:35:41,962 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v15.ckpt.
2023-08-03 14:35:42,850 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-08-03 14:35:42,851 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-08-03 14:35:42,851 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-08-03 14:35:42,851 - AnimateDiff - INFO - Injection finished.

Additional information

No response

[Bug]: GroupNorm norming wrong dim in vae

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

This might be the cause of the grey samples; I will do some tests in a few days.
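For context, a minimal sketch of why the GroupNorm handling matters, assuming the hack reshapes the frame batch so normalization statistics are shared across the whole clip (the actual reshaping in the extension may differ); applying that reshaping where it is not expected, such as inside the VAE, normalizes over the wrong dimension and can wash out the output:

```python
import torch
import torch.nn.functional as F
from einops import rearrange

def group_norm_per_frame(x, num_groups, weight, bias):
    # Default behaviour: x is (batch*frames, C, H, W) and statistics are
    # computed independently for every frame.
    return F.group_norm(x, num_groups, weight, bias)

def group_norm_shared_across_frames(x, num_groups, weight, bias, frames):
    # Sketch of the hacked behaviour: fold the frame axis into the spatial dims
    # so mean/variance are shared across the whole clip, then unfold again.
    x = rearrange(x, "(b f) c h w -> b c (f h) w", f=frames)
    x = F.group_norm(x, num_groups, weight, bias)
    return rearrange(x, "b c (f h) w -> (b f) c h w", f=frames)

x = torch.randn(16, 8, 4, 4)                  # 16 frames, 8 channels
w, b = torch.ones(8), torch.zeros(8)
print(group_norm_per_frame(x, 4, w, b).shape)
print(group_norm_shared_across_frames(x, 4, w, b, frames=16).shape)
```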

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Bright sample, same as the original repo

Commit where the problem happens

webui:
extension:

What browsers do you use to access the UI ?

No response

Command Line Arguments

-

Console logs

-

Additional information

No response

[Feature]: Doesn't work on img2img panel

When I use img2img with the AnimateDiff checkbox enabled, it seems the AnimateDiff module is not in effect.
The log shows nothing about the AnimateDiff module, and the AnimateDiff output directory does not appear.

documentation request

( sorry if this is the wrong place to ask - I didn't see any discord/forum + not sure if i should ask here or the original AnimateDiff repo).

  1. Is ControlNet support coming anytime soon?
  2. Will longer videos be supported?
  3. My videos always split into 2 separate animations: 00028-100
  4. I get a GIF with 0 movement when using img2img (I did try mm 1.4 as some suggested; it worked for txt2img but not for img2img)
  5. Is there some prompting technique that creates more movement than others?

[Bug]: Shutterstock watermark everywhere since version 1.2.0, but absent in 1.1.0

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Images generated with version 1.2.0 are full of Shutterstock watermarks; this literally never happened with version 1.1.0.

Steps to reproduce the problem

Update to version 1.2.0
do literally anything

What should have happened?

Be like version 1.1.0 but with fixed bugs

Commit where the problem happens

webui: Latest
extension: 1.2.0

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

unimportant

Console logs

unimportant

Additional information

You claim you "fixed the injection" in the readme, but if this is fixed then I want it broken again.

[Feature]: Custom motion model support

Expected behavior

I know there is limited support for it now, but it's inconvenient to have to rename files.

I wish custom motion models were supported without having to rename them.

Also, .safetensors support would be welcome.

[Bug]: Every frame is completely different than the last

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Every frame is completely different than the last
00004-3142338252

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

It seems like every frame is a new seed, and then they're stitched together into a .gif.

Commit where the problem happens

webui:
extension:

What browsers do you use to access the UI ?

No response

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --autolaunch --medvram --no-half-vae --no-half
call webui.bat

Console logs

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: C:\Users\Tristan\Documents\GitHub\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Startup time: 9.6s (launcher: 2.2s, import torch: 2.8s, import gradio: 0.7s, setup paths: 0.7s, other imports: 0.7s, list SD models: 0.2s, load scripts: 1.3s, create ui: 0.6s, gradio launch: 0.4s).
Applying attention optimization: Doggettx... done.
Model loaded in 2.9s (load weights from disk: 0.8s, create model: 0.4s, apply weights to model: 1.2s, calculate empty prompt: 0.4s).
2023-07-27 12:45:08,482 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v15.ckpt.
2023-07-27 12:45:08,482 - AnimateDiff - INFO - Loading motion module mm_sd_v15.ckpt from C:\Users\Tristan\Documents\GitHub\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
2023-07-27 12:45:11,593 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-07-27 12:45:11,899 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-07-27 12:45:11,899 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-27 12:45:11,900 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-27 12:45:11,900 - AnimateDiff - INFO - Injection finished.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00,  1.95s/it]
2023-07-27 12:45:56,907 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-07-27 12:45:56,907 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-27 12:45:56,907 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-07-27 12:45:56,908 - AnimateDiff - INFO - Removal finished.
2023-07-27 12:45:56,908 - AnimateDiff - INFO - Merging images into GIF.
2023-07-27 12:45:57,986 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:42<00:00,  2.13s/it]
2023-07-27 12:46:12,265 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v15.ckpt.
2023-07-27 12:46:12,926 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-07-27 12:46:12,926 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-27 12:46:12,927 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-27 12:46:12,928 - AnimateDiff - INFO - Injection finished.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:38<00:00,  1.92s/it]
2023-07-27 12:46:57,213 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-07-27 12:46:57,213 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-27 12:46:57,214 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-07-27 12:46:57,214 - AnimateDiff - INFO - Removal finished.
2023-07-27 12:46:57,215 - AnimateDiff - INFO - Merging images into GIF.
2023-07-27 12:46:58,530 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:42<00:00,  2.12s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:42<00:00,  1.86s/it]

Additional information

No response

[Bug]: The first 8 frames and the last 8 frames of the generated 16 frames are very different.

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

The first 8 frames and the last 8 frames of the generated 16 frames are very different.

00020-3187489593

Steps to reproduce the problem

Snipaste_2023-07-20_11-35-22

What should have happened?

Commit where the problem happens

webui: v1.4.1-2-g7390dd0
extension: b574ca0

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

Console logs

N/A

Additional information

No response

[Bug]: The GIF generated with SD is not continuous; it is effectively two unrelated 1-second GIFs stitched together

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

00012-2826701146
00013-1186515681

Steps to reproduce the problem

  1. Go to sd
  2. Press :prompt: a girl,hair fluttering in the wind,
    Negative prompt: bad-hands-5 bad-picture-chill-75v EasyNegative verybadimagenegative_v1.3 By bad artist -neg
    Steps: 40, Sampler: DPM++ 2M SDE Karras, CFG scale: 7, Seed: 1186515681, Size: 512x768, Model hash: 7f96a1a9ca, Model: AnythingV5_v5PrtRE, Clip skip: 2, Dynamic thresholding enabled: True, Mimic scale: 7, Separate Feature Channels: True, Scaling Startpoint: MEAN, Variability Measure: AD, Interpolate Phi: 1, Threshold percentile: 100, TI hashes: "bad-hands-5: aa7651be154c, bad-picture-chill-75v: 7d9cc5f549d7, EasyNegative: c74b4e810b03, verybadimagenegative_v1.3: d70463f87042, By bad artist -neg: 2d356134903e", Version: v1.5.1
  3. AnimateDiff: mm_sd_v14.ckpt, 16 frames total at 8 frames per second

What should have happened?

It should generate a 2-second video with continuous motion.

Commit where the problem happens

webui:
version: v1.5.1  •  python: 3.10.11  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.32.0  •  checkpoint: 7f96a1a9ca

extension:
sd-webui-animatediff https://github.com/continue-revolution/sd-webui-animatediff.git master 767f2a3e Fri Jul 28 13:43:51 2023

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

Console logs

The console shows it running normally.
Calculating sha256 for E:\sd-webui-aki-v4.2\extensions\sd-webui-animatediff\model\mm_sd_v14.ckpt: aa7fd8a200a89031edd84487e2a757c5315460eca528fa70d4b3885c399bffd5
2023-08-04 08:15:44,640 - AnimateDiff - INFO - Loading motion module mm_sd_v14.ckpt from E:\sd-webui-aki-v4.2\extensions\sd-webui-animatediff\model\mm_sd_v14.ckpt
2023-08-04 08:15:50,247 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-08-04 08:15:50,845 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-08-04 08:15:50,845 - AnimateDiff - INFO - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet input blocks.
2023-08-04 08:15:50,845 - AnimateDiff - INFO - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet output blocks.
2023-08-04 08:15:50,846 - AnimateDiff - INFO - Injection finished.
2023-08-04 08:18:48,800 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
2023-08-04 08:18:48,801 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-08-04 08:18:48,801 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-08-04 08:18:48,801 - AnimateDiff - INFO - Removal finished.
2023-08-04 08:18:48,801 - AnimateDiff - INFO - Merging images into GIF.
2023-08-04 08:18:52,894 - AnimateDiff - INFO - AnimateDiff process end.

Additional information

[Bug]: CUDA Error

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

I tried to run the program with default settings and got
RuntimeError: CUDA error: invalid configuration argument CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Steps to reproduce the problem

  1. Launch webui1111
  2. Open the animatediff tab
  3. Click "activate"
  4. Type in "wolf" in the prompt positive
  5. Type in "badv4" in the prompt negative
  6. Hit generate
  7. RuntimeError: CUDA error: invalid configuration argument CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

What should have happened?

A gif

Commit where the problem happens

version: [v1.4.1] •  python: 3.10.6  •  torch: 2.0.0+cu118  •  xformers: 0.0.20  •  gradio: 3.32.0

What browsers do you use to access the UI ?

No response

Command Line Arguments

Chrome

Console logs

2023-07-18 18:09:29,711 - AnimateDiff - INFO - AnimateDiff process start with video length 16, FPS 8, motion module mm_sd_v14.ckpt.
2023-07-18 18:09:29,715 - AnimateDiff - INFO - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet input blocks.
2023-07-18 18:09:29,715 - AnimateDiff - INFO - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet output blocks.
2023-07-18 18:09:29,716 - AnimateDiff - INFO - Injection finished.
  0%|                                                                                           | 0/30 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(fzm2q2jw2arb9ge)', 'A character portrait of a ((male) (anthro) ([Wolf]))', '', ['Text Inver'], 30, 15, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.2, 1.5, 'lollypop', 30, 0, 0, 0, '', '', [], 0, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 16, 8, 'mm_sd_v14.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000027593343760>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000027593342620>, '', None, ['artist', 'character', 'species', 'general'], '', 'Reset form', 'Generate', False, 1, 0.15, False, 'OUT', ['OUT'], 5, 0, 'Bilinear', False, 'Pooling Max', False, 'Lerp', '', '', False, False, None, True, 1, False, 1, 0, '', '', 20, True, 20, True, 4, 0.4, 7, 512, 512, True, 88, False, 'None', False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 
1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', '', '', '', False, False, False, False, False, 50, '0', False, '', False, False, False, '', False,   1 2 3
*** 0      , False, 512, 512, 0.2, False, '', False, '', '', '', '', None, None, False, None, None, False, 50, True, 0, '', '', 20, True, 20, True, 4, 0.4, 7, 512, 512, True, 88, False, 'None') {}
    Traceback (most recent call last):
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 55, in f
        res = list(func(*args, **kwargs))
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\call_queue.py", line 35, in f
        res = func(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
        processed = processing.process_images(p)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 620, in process_images
        res = process_images_inner(p)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\processing.py", line 992, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 439, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
        return func()
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 439, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 518, in sample_dpmpp_2s_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 158, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
        x = layer(x, context)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
        hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 151, in forward
        hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 215, in forward
        hidden_states = attention_block(
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 539, in forward
        hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask)
      File "H:\Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 468, in _memory_efficient_attention_xformers
        hidden_states = xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask,
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
        return _memory_efficient_attention(
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
        out, *_ = op.apply(inp, needs_gradient=False)
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
        out, lse, rng_seed, rng_offset = cls.OPERATOR(
      File "H:\Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
        return self._op(*args, **kwargs or {})
    RuntimeError: CUDA error: invalid configuration argument
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


---

Additional information

Without the motion module weights, the program generates one GIF of random images before giving this error. After installing the weights, it only gives this error.
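
For readers hitting the same thing: the failure happens inside xformers' memory_efficient_attention when the kernel rejects the requested launch configuration. A hypothetical fallback pattern, not the extension's actual code, is to retry the same attention with PyTorch's built-in scaled_dot_product_attention; the sketch below assumes 3-D (batch, sequence, dim) query/key/value tensors, which both APIs accept.

```python
# Hypothetical fallback, for illustration only -- not the extension's code.
# Assumes q/k/v are 3-D (batch, seq_len, dim) tensors.
import torch
import torch.nn.functional as F

def attention_with_fallback(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                            attn_bias: torch.Tensor | None = None) -> torch.Tensor:
    try:
        import xformers.ops
        return xformers.ops.memory_efficient_attention(q, k, v, attn_bias=attn_bias)
    except (ImportError, RuntimeError):
        # xformers is unavailable, or its kernel rejected this configuration
        # ("CUDA error: invalid configuration argument"). Fall back to the
        # PyTorch implementation (requires torch >= 2.0). Note that a failed
        # CUDA launch can sometimes leave the context unusable, in which case
        # only a restart helps.
        return F.scaled_dot_product_attention(q, k, v, attn_mask=attn_bias)
```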

[Bug]: RuntimeError: mm_sd_v15.ckpt hash mismatch. You probably need to re-download the motion module.

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

*** Error running before_process: C:\github\diffusers\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
File "C:\github\diffusers\stable-diffusion-webui\modules\scripts.py", line 487, in before_process
script.before_process(p, *script_args)
File "C:\github\diffusers\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 138, in before_process
self.inject_motion_modules(p, model)
File "C:\github\diffusers\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 87, in inject_motion_modules
raise RuntimeError(f"{model_name} hash mismatch. You probably need to re-download the motion module.")
RuntimeError: mm_sd_v15.ckpt hash mismatch. You probably need to re-download the motion module.

Steps to reproduce the problem

  1. installed animatediff extension
  2. download the motion model
  3. enabled and typed the prompt

What should have happened?

No error, I guess.

Commit where the problem happens

webui:
extension:

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

no

Console logs

2023-07-27 05:38:29,172 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v15.ckpt.
*** Error running before_process: C:\github\diffusers\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\github\diffusers\stable-diffusion-webui\modules\scripts.py", line 487, in before_process
        script.before_process(p, *script_args)
      File "C:\github\diffusers\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 138, in before_process
        self.inject_motion_modules(p, model)
      File "C:\github\diffusers\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 87, in inject_motion_modules
        raise RuntimeError(f"{model_name} hash mismatch. You probably need to re-download the motion module.")
    RuntimeError: mm_sd_v15.ckpt hash mismatch. You probably need to re-download the motion module.

Additional information

No response
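
When this hash-mismatch error comes up, the quickest sanity check is to hash the downloaded file yourself, the same kind of check the extension performs before injecting the module. An illustrative script follows; the expected value below is the mm_sd_v14.ckpt SHA-256 printed in the logs earlier on this page, so substitute the published hash for the file you actually downloaded, and adjust the path to your install.

```python
# Illustrative only: compare a downloaded motion module against a known SHA-256.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash of mm_sd_v14.ckpt as seen in the logs above; replace with the published
# hash of the file you downloaded (e.g. mm_sd_v15.ckpt).
expected = "aa7fd8a200a89031edd84487e2a757c5315460eca528fa70d4b3885c399bffd5"
actual = sha256_of(r"extensions\sd-webui-animatediff\model\mm_sd_v14.ckpt")  # adjust path
print("OK" if actual == expected else f"hash mismatch: {actual}")
```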

CUDA error: invalid configuration argument

Hi there,

I got this error; please help take a look, thanks!
By the way, I have already downloaded mm_sd_v14/15.ckpt and put them in the right folder.


Traceback (most recent call last):
File "E:\sd-webui\modules\call_queue.py", line 55, in f
res = list(func(*args, **kwargs))
File "E:\sd-webui\modules\call_queue.py", line 35, in f
res = func(*args, **kwargs)
File "E:\sd-webui\modules\txt2img.py", line 57, in txt2img
processed = processing.process_images(p)
File "E:\sd-webui\modules\processing.py", line 620, in process_images
res = process_images_inner(p)
File "E:\sd-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "E:\sd-webui\modules\processing.py", line 739, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "E:\sd-webui\modules\processing.py", line 992, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "E:\sd-webui\modules\sd_samplers_kdiffusion.py", line 439, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "E:\sd-webui\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
return func()
File "E:\sd-webui\modules\sd_samplers_kdiffusion.py", line 439, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "E:\sd-webui\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\sd-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\modules\sd_samplers_kdiffusion.py", line 158, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "E:\sd-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "E:\sd-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "E:\sd-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "E:\sd-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "E:\sd-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
x = layer(x, context)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\extensions\sd-webui-animatediff\motion_module.py", line 151, in forward
hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\extensions\sd-webui-animatediff\motion_module.py", line 215, in forward
hidden_states = attention_block(
File "E:\sd-webui\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\sd-webui\extensions\sd-webui-animatediff\motion_module.py", line 539, in forward
hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask)
File "E:\sd-webui\extensions\sd-webui-animatediff\motion_module.py", line 468, in _memory_efficient_attention_xformers
hidden_states = xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask,
File "E:\sd-webui\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
return _memory_efficient_attention(
File "E:\sd-webui\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "E:\sd-webui\python\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
out, *_ = op.apply(inp, needs_gradient=False)
File "E:\sd-webui\python\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
out, lse, rng_seed, rng_offset = cls.OPERATOR(
File "E:\sd-webui\python\lib\site-packages\torch\_ops.py", line 502, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
Hint: the Python runtime raised an exception. Please check the troubleshooting page.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions

RuntimeError

Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [16, 2560, 9, 9]

I have the mm_sd_v15.ckpt downloaded in stable-diffusion-webui\extensions\sd-webui-animatediff\model.

Complete log:

2023-07-18 11:19:21,548 - AnimateDiff - INFO - AnimateDiff process start with video length 2, FPS 8, motion module mm_sd_v15.ckpt.
2023-07-18 11:19:21,551 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-18 11:19:21,552 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-18 11:19:21,552 - AnimateDiff - INFO - Injection finished.
0%| | 0/30 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(ba8qq6a5sxydhr6)', 'Beautiful Scenery', '', [], 30, 16, False, False, 1, 2, 7.5, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.33, 1.5, '4x_UniversalUpscalerV2-Sharper_103000_G', 10, 0, 0, 19, '', '', [], <gradio.routes.Request object at 0x000002E7E74F5420>, 0, 0, False, 'Horizontal', '1,1', False, '0.2', False, False, 'female', True, 1, True, -1.0, [], [], [], [], False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000002E7E6FB5740>}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000002E7E6FB5720>}, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 2, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E5CB6B60>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E5C86980>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E750FE20>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002E7E750CA90>, None, False, '0', 'G:\stablediffusion\stable-diffusion-webui\extensions/sd-webui-faceswap/models\inswapper_128.onnx', 'CodeFormer', 1, '', 1, 1, False, True, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 0, 1, 1, 0, 0, 0, 0, False, 'Default', False, False, 'Euler a', 0.95, 0.75, 'zero', 'pos', 'linear', 0.2, 0.0, 0.75, None, 'Lanczos', 1, 0, 0, True, 0.3, 'Latent', 0.55, 0.3, 0.2, 0.2, [], False, 1.5, 1.2, False, '', '1', 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, 0, 0, 384, 384, False, False, True, True, True, 1, '', '', 8, True, 16, 'Median cut', False, None, None, '', '', '', '', 'Auto rename', {'label': 'Upload avatars config'}, 'Open outputs directory', 'Export to WebUI style', True, {'label': 'Presets'}, {'label': 'QC preview'}, '', [], 'Select', 'QC scan', 'Show pics', None, False, False, 'positive', 'comma', 0, False, False, 
'', 'Positive', 0, ', ', True, 32, 0, 'Median cut', 'luminance', False, 'Illustration', 'svg', True, True, False, 0.5, True, 16, True, 16, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', 1.1, 'Not set', False, 'None', 'Not set', True, False, '', '', '', '', '', 1.3, 'Not set', 'Not set', 'Not set', 1, 1.3, 'Not set', 'Not set', 'Not set', 'Not set', 'Not set', 'Not set', 1.3, 1.3, 1.3, 'Not set', 'Not set', 1.3, True, True, 'Disabled', None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "G:\stablediffusion\stable-diffusion-webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "G:\stablediffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
processed = processing.process_images(p)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 639, in process_images
res = process_images_inner(p)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 759, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "G:\stablediffusion\stable-diffusion-webui\modules\processing.py", line 1012, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 303, in launch_sampling
return func()
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 464, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 183, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
x = layer(x, context)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 144, in forward
hidden_states = self.norm(hidden_states)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
return F.group_norm(
File "G:\stablediffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [1280] and input of shape [4, 2560, 8, 8]
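
The error itself is plain PyTorch behaviour: F.group_norm refuses to run when the affine weight does not have one entry per input channel. A minimal, WebUI-independent reproduction follows; the shapes are taken from the message above.

```python
# Minimal reproduction of the same PyTorch error, independent of WebUI:
# group_norm requires weight.numel() == input.shape[1].
import torch
import torch.nn.functional as F

x = torch.randn(4, 2560, 8, 8)   # what the motion module's GroupNorm received
weight = torch.ones(1280)        # what it was built for
bias = torch.zeros(1280)
F.group_norm(x, num_groups=32, weight=weight, bias=bias)
# RuntimeError: Expected weight to be a vector of size equal to the number of
# channels in input, but got weight of shape [1280] and input of shape [4, 2560, 8, 8]
```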


[Bug]: GIFs not working in Telegram

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

GIFs created through [sd-webui-animatediff] do not play in Telegram. Only after I push one through Photoshop (or re-make the GIF by any other method) does it start playing normally.

Steps to reproduce the problem

  1. Make a GIF in AUTOMATIC1111 with [sd-webui-animatediff] enabled
  2. Send it to someone on Telegram
  3. For both sides the GIF appears as a static picture, and it has to be opened manually from the file list to view it.

What should have happened?

It should just be a normal, playable GIF.

Commit where the problem happens

webui: AUTOMATIC1111

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

any, really.

Console logs

There are no errors.

Additional information

My guess is that it is related to the GIF format: something is missing or corrupted inside the GIF, so some devices play it normally while others (Telegram, for example) do not.
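
One illustrative workaround (not part of the extension) is to re-encode the GIF with Pillow, which rewrites the animation metadata that some players appear to be strict about. File names and FPS below are placeholders.

```python
# Illustrative re-encode of an AnimateDiff GIF with Pillow; paths/FPS are placeholders.
from PIL import Image, ImageSequence

src = Image.open("00001-animatediff.gif")
frames = [frame.convert("RGB").convert("P", palette=Image.ADAPTIVE)
          for frame in ImageSequence.Iterator(src)]
frames[0].save(
    "00001-animatediff-fixed.gif",
    save_all=True,
    append_images=frames[1:],
    duration=int(1000 / 8),  # 8 FPS, matching the extension's default
    loop=0,                  # loop forever
    disposal=2,              # clear each frame before drawing the next
)
```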

[Bug]: I get a collection of different images based on the prompt instead of a GIF

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

I used the same positive prompt, negative prompt and parameters from the example. Instead of getting motion, I get a collection of different images based on the prompt rather than a GIF.

(attached images omitted: image, 00002-7132772652786303)

Steps to reproduce the problem

Run as usual

What should have happened?

Generate a motion gif

Commit where the problem happens

webui: latest
extension: latest

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --listen --api --no-half-vae --disable-nan-check --enable-insecure-extension-access 
call webui.bat

Console logs

[AddNet] Updating model hashes...
0it [00:00, ?it/s]
preload_extensions_git_metadata for 18 extensions took 1.22s
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 23.9s (import torch: 1.7s, import gradio: 1.6s, import ldm: 0.5s, other imports: 0.9s, load scripts: 1.7s, create ui: 12.8s, gradio launch: 4.6s).
2023-07-24 16:01:10,724 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v15.ckpt.
2023-07-24 16:01:10,724 - AnimateDiff - INFO - Loading motion module mm_sd_v15.ckpt from C:\Gits\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
*** Error verifying pickled file from C:\Gits\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
*** -----> !!!! The file is most likely corrupted !!!! <-----
*** You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.
***
    Traceback (most recent call last):
      File "C:\Gits\stable-diffusion-webui\modules\safe.py", line 83, in check_pt
        with zipfile.ZipFile(filename) as z:
      File "C:\Gits\anaconda3\lib\zipfile.py", line 1269, in __init__
        self._RealGetContents()
      File "C:\Gits\anaconda3\lib\zipfile.py", line 1336, in _RealGetContents
        raise BadZipFile("File is not a zip file")
    zipfile.BadZipFile: File is not a zip file

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Gits\stable-diffusion-webui\modules\safe.py", line 137, in load_with_extra
        check_pt(filename, extra_handler)
      File "C:\Gits\stable-diffusion-webui\modules\safe.py", line 104, in check_pt
        unpickler.load()
    _pickle.UnpicklingError: invalid load key, '<'.

---
*** Error running before_process: C:\Gits\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "C:\Gits\stable-diffusion-webui\modules\scripts.py", line 466, in before_process
        script.before_process(p, *script_args)
      File "C:\Gits\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 116, in before_process
        self.inject_motion_modules(p, model)
      File "C:\Gits\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 78, in inject_motion_modules
        missed_keys = AnimateDiffScript.motion_module.load_state_dict(mm_state_dict)
      File "C:\Gits\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1994, in load_state_dict
        raise TypeError("Expected state_dict to be dict-like, got {}.".format(type(state_dict)))
    TypeError: Expected state_dict to be dict-like, got <class 'NoneType'>.

---
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:17<00:00,  1.12it/s]
2023-07-24 16:01:37,268 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.███████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00,  1.18it/s]
2023-07-24 16:01:37,268 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-24 16:01:37,268 - AnimateDiff - INFO - Removal finished.
2023-07-24 16:01:37,269 - AnimateDiff - INFO - Merging images into GIF.
2023-07-24 16:01:38,818 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00,  1.21s/it]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:24<00:00,  1.18it/s]

Additional information

No response
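
For anyone who sees the `invalid load key, '<'` message in the log above: it almost always means an HTML page was saved in place of the checkpoint (for example by saving a link's landing page instead of the raw file), which is also why only static images come out. An illustrative check, independent of WebUI — a real torch .ckpt is a zip archive:

```python
# Illustrative check of a downloaded motion module; adjust the path to your install.
import os
import zipfile

path = r"extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt"
print("size:", os.path.getsize(path), "bytes")  # the real module is on the order of 1.6 GB
with open(path, "rb") as f:
    head = f.read(4)
print("starts with:", head)        # b'PK\x03\x04' for a zip, b'<htm' for an HTML page
print("valid zip:", zipfile.is_zipfile(path))
```

If the file fails this check, delete it and re-download the motion module from the link in the README.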

[Bug]: Weight size not equal to the number of channels in input

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

I have followed the installation steps in the README. After enabling the extension and clicking the run button, I get "RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]". Even worse, after disabling the extension I still get this error; I must restart the WebUI to get back to normal.

Steps to reproduce the problem

  1. Install the extension
  2. Press the run button
  3. An image is generated normally
  4. Press the run button again
  5. Error

What should have happened?

It should generate 16 frames, but the first time it generates one image normally; the second time it crashes and cannot be recovered even after the extension is disabled.

Commit where the problem happens

webui: python: 3.10.6  •  torch: 1.13.1+cu117  •  xformers: 0.0.17+6967620.d20230407  •  gradio: 3.31.0  •  checkpoint: [cca17b08da]
extension:

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

Console logs

0%|          | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(3bjc7304jac9b0j)', 'masterpiece, best quality,ultra detailed,', 'nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, 'masterpiece, best quality,ultra detailed,', 'nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry', [], 0, True, 0, 8, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb5db6472e0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7faffa511ae0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7fb5de75f220>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x7faffb1834c0>, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "/data/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/data/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/data/stable-diffusion-webui/modules/txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "/data/stable-diffusion-webui/modules/processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "/data/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/data/stable-diffusion-webui/modules/processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/data/stable-diffusion-webui/modules/processing.py", line 977, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/data/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/data/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "/data/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 137, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/data/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/data/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/data/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 929, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1407, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 802, in forward
    h = module(h, emb, context)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff.py", line 19, in mm_tes_forward
    x = layer(x, emb)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 249, in forward
    return checkpoint(
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 262, in _forward
    h = self.in_layers(x)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/container.py", line 204, in forward
    input = module(input)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/data/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/data/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 226, in forward
    return super().forward(x.float()).type(x.dtype)
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 273, in forward
    return F.group_norm(
  File "/data/miniconda3/envs/env-novelai/lib/python3.10/site-packages/torch/nn/functional.py", line 2528, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [2560] and input of shape [2, 5120, 8, 8]

Additional information

No response

[Bug]: Attribute error

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

An AttributeError is thrown.

Steps to reproduce the problem

Press generate while extension is enabled

What should have happened?

It should not throw the error

Commit where the problem happens

webui: v1.4.1
extension: e8c88a4

What browsers do you use to access the UI ?

No response

Command Line Arguments

set COMMANDLINE_ARGS=--listen --api --enable-insecure-extension-access

Console logs

2023-07-18 19:38:03,650 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.
Error running postprocess: C:\Users\TooDee\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
  File "C:\Users\TooDee\stable-diffusion-webui\modules\scripts.py", line 404, in postprocess
    script.postprocess(p, processed, *script_args)
  File "C:\Users\TooDee\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 142, in postprocess
    self.remove_motion_modules(p)
  File "C:\Users\TooDee\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 122, in remove_motion_modules
    unet.input_blocks[unet_idx].pop(-1)
  File "C:\Users\TooDee\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'TimestepEmbedSequential' object has no attribute 'pop'


Additional information

No response
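
For context, the failing call is simply removing the motion module that was appended to the end of a UNet block. `nn.Sequential.pop` does not exist on the older PyTorch build in this report, which is what raises the AttributeError. A hypothetical, version-agnostic equivalent uses `del` instead; this is an illustration, not the extension's actual fix, and upgrading PyTorch is the other route.

```python
# Illustration: remove the last submodule of an nn.Sequential without relying
# on Sequential.pop (missing on older PyTorch). __delitem__ is available on
# much older versions, so `del block[-1]` does the same job.
import torch.nn as nn

block = nn.Sequential(nn.Conv2d(4, 4, 3, padding=1), nn.SiLU(), nn.Identity())
del block[-1]        # equivalent to block.pop(-1) on newer PyTorch
print(len(block))    # 2
```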

[Bug]: prompt character limit issue

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Long prompts cause the image to change after 8 frames.

The original repo can handle longer prompts before this happens; the same prompt does not work on this extension.

Steps to reproduce the problem

A photo of a female with a voluminous afro hairstyle, dyed in vibrant shades of
teal and purple. She wears a stylish streetwear ensemble, consisting of a graphic
t-shirt, oversized denim jacket, and high-waisted jogger pants. A backdrop of
colorful street art and neon signs reflects the vibrant energy of the urban environment
she inhabits. Epic character composition, sharp focus and natural lighting. The
subsurface scattering effect adds a touch of ethereal glow, while the f2 aperture
and 35mm lens create a perfect balance of depth and detail
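
To check whether a prompt like the one above overflows a single 75-token CLIP chunk (the point at which WebUI starts splitting the prompt into chunks), a rough sketch, assuming the standard SD1.5 tokenizer from Hugging Face:

```python
# Illustrative token-count check; assumes the standard SD1.5 CLIP tokenizer.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
prompt = "A photo of a female with a voluminous afro hairstyle, ..."  # paste the full prompt here
n_tokens = len(tokenizer(prompt)["input_ids"]) - 2  # subtract BOS/EOS
print(n_tokens, "CLIP tokens ->",
      "fits in one 75-token chunk" if n_tokens <= 75 else "will be split across chunks")
```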

What should have happened?

It should generate a single stable animation that doesn't toggle midway through.

Commit where the problem happens

webui: 1.4.1
extension: [287b30a] Mon Jul 24 16:07:19 2023

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No

Console logs

No error message

Additional information

No response

[Bug]: vram not cleaned after a failed attempt to allocate memory.

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

The extension doesn't clear the VRAM after a failed attempt to allocate memory. I have to restart AUTOMATIC1111 completely to clear the VRAM.

Steps to reproduce the problem

  1. Max out the VRAM
  2. Try to generate anything after that

What should have happened?

VRAM should be freed after a failed attempt.

Commit where the problem happens

webui: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
extension:

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

--no-half-vae --api

Console logs

well, the classic OOM error:

 torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.50 GiB (GPU 0; 24.00 GiB total capacity; 12.83 GiB already allocated; 3.16 GiB free; 17.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Additional information

No response
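
As a point of reference, the cleanup that normally has to happen after an OOM looks roughly like the sketch below; this is illustrative, not the extension's code, and if WebUI objects still hold references to the failed batch, only a restart actually releases the memory.

```python
# Illustrative post-OOM cleanup, not the extension's code.
import gc
import torch

gc.collect()                      # drop unreachable Python objects first
if torch.cuda.is_available():
    torch.cuda.empty_cache()      # return cached blocks to the driver
    torch.cuda.ipc_collect()
    print(torch.cuda.memory_summary(abbreviated=True))
```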

[Bug]: Apple M1 crash

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Using a 32GB M1 Max.

The process reaches the generation progress stage and then Python crashes at 0%.

I can't see any mention of Mac compatibility anywhere, but I assume it's not compatible with MPS?

Steps to reproduce the problem

  • txt2img or img2img
  • various SD1.5 models (incl. the default)
  • Euler a (and other samplers)
  • 256 x 256
  • mm_sd_v14.ckpt or mm_sd_v15.ckpt
  • with / without "Move motion module to CPU"

What should have happened?

Shouldn't crash.

Commit where the problem happens

webui: 68f336b
extension: 48fc19d

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No (default).

Console logs

2023-08-03 13:08:45,523 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 8, duration 2.0,  motion module mm_sd_v14.ckpt.
2023-08-03 13:08:45,523 - AnimateDiff - INFO - Loading motion module mm_sd_v14.ckpt from /Users/js/stable-diffusion-webui/extensions/sd-webui-animatediff/model/mm_sd_v14.ckpt
2023-08-03 13:08:50,893 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-08-03 13:08:51,316 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-08-03 13:08:51,316 - AnimateDiff - INFO - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet input blocks.
2023-08-03 13:08:51,316 - AnimateDiff - INFO - Injecting motion module mm_sd_v14.ckpt into SD1.5 UNet output blocks.
2023-08-03 13:08:51,316 - AnimateDiff - INFO - Injection finished.
  0%|                                                                                                                                            | 0/25 [00:00<?, ?it/s]loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<32x1024x320xf16>' and 'tensor<320xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
./webui.sh: line 254: 46832 Abort trap: 6           "${python_cmd}" "${LAUNCH_SCRIPT}" "$@"

Additional information

No response
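
The MPS message above is a dtype mismatch: an fp16 activation is being combined with an fp32 parameter (`tensor<32x1024x320xf16>` vs `tensor<320xf32>`), which Metal refuses to broadcast. A generic, illustrative pattern for avoiding that class of failure is to cast an injected module to the dtype and device of the tensors flowing through it; this is not necessarily how the extension addresses it, and running WebUI with `--no-half` is another common workaround.

```python
# Generic illustration only -- not the extension's actual MPS fix.
import torch
import torch.nn as nn

def run_in_matching_dtype(module: nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Cast the module's parameters and buffers to the input's device and dtype
    # before calling it, so a float32 norm weight is never combined with
    # float16 activations.
    module = module.to(device=x.device, dtype=x.dtype)
    return module(x)
```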

[Bug]: AnimateDiff does nothing

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

I work with the standalone version of AnimateDiff and it has worked fine on my RTX 3090. Today I installed your adaptation for AUTOMATIC1111.
I put the motion models in place and checked the Enable AnimateDiff option, but it only runs the regular txt2img process and outputs a PNG file.
I'm not sure what's going wrong.

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

Make a GIF file.

Commit where the problem happens

webui: Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.4.1
Commit hash: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
Installing requirements
extension:

What browsers do you use to access the UI ?

Brave

Command Line Arguments

---
2023-07-20 09:22:05,466 - ControlNet - INFO - ControlNet v1.1.224
ControlNet preprocessor location: P:\automatic1111webui\webui\extensions\sd-webui-controlnet\annotator\downloads
2023-07-20 09:22:05,561 - ControlNet - INFO - ControlNet v1.1.224
Image Browser: ImageReward is not installed, cannot be used.
Loading weights [c0d1994c73] from P:\automatic1111webui\webui\models\Stable-diffusion\realisticVisionV20_v20.safetensors
*Deforum ControlNet support: enabled*
Creating model from config: P:\automatic1111webui\webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
COMMANDLINE_ARGS does not contain --api, API won't be mounted.
Startup time: 10.7s (import torch: 1.7s, import gradio: 1.3s, import ldm: 0.4s, other imports: 1.0s, load scripts: 2.9s, create ui: 2.1s, gradio launch: 1.2s).
*** Failed reading extension data from Git repository (clip-interrogator-ext)
    Traceback (most recent call last):
      File "P:\automatic1111webui\webui\modules\extensions.py", line 62, in do_read_info_from_repo
        self.remote = next(repo.remote().urls, None)
      File "P:\automatic1111webui\system\python\lib\site-packages\git\repo\base.py", line 414, in remote
        raise ValueError("Remote named '%s' didn't exist" % name)
    ValueError: Remote named 'origin' didn't exist

---
*** Failed reading extension data from Git repository (sd-webui-controlnet)
    Traceback (most recent call last):
      File "P:\automatic1111webui\webui\modules\extensions.py", line 62, in do_read_info_from_repo
        self.remote = next(repo.remote().urls, None)
      File "P:\automatic1111webui\system\python\lib\site-packages\git\repo\base.py", line 414, in remote
        raise ValueError("Remote named '%s' didn't exist" % name)
    ValueError: Remote named 'origin' didn't exist

---
*** Failed reading extension data from Git repository (seed_travel)
    Traceback (most recent call last):
      File "P:\automatic1111webui\webui\modules\extensions.py", line 62, in do_read_info_from_repo
        self.remote = next(repo.remote().urls, None)
      File "P:\automatic1111webui\system\python\lib\site-packages\git\repo\base.py", line 414, in remote
        raise ValueError("Remote named '%s' didn't exist" % name)
    ValueError: Remote named 'origin' didn't exist

---
*** Failed reading extension data from Git repository (stable-diffusion-webui-images-browser)
    Traceback (most recent call last):
      File "P:\automatic1111webui\webui\modules\extensions.py", line 62, in do_read_info_from_repo
        self.remote = next(repo.remote().urls, None)
      File "P:\automatic1111webui\system\python\lib\site-packages\git\repo\base.py", line 414, in remote
        raise ValueError("Remote named '%s' didn't exist" % name)
    ValueError: Remote named 'origin' didn't exist

---
*** Failed reading extension data from Git repository (stable-diffusion-webui-rembg)
    Traceback (most recent call last):
      File "P:\automatic1111webui\webui\modules\extensions.py", line 62, in do_read_info_from_repo
        self.remote = next(repo.remote().urls, None)
      File "P:\automatic1111webui\system\python\lib\site-packages\git\repo\base.py", line 414, in remote
        raise ValueError("Remote named '%s' didn't exist" % name)
    ValueError: Remote named 'origin' didn't exist

---
preload_extensions_git_metadata for 25 extensions took 2.20s
Applying attention optimization: Doggettx... done.
Textual inversion embeddings loaded(0):
Model loaded in 5.7s (load weights from disk: 1.5s, create model: 0.7s, apply weights to model: 1.6s, apply half(): 0.7s, move model to device: 1.2s).
100%|████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00,  7.78it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 11.38it/s]
100%|████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 10.57it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 11.49it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 12.19it/s]

Console logs

Failed to load resource: the server responded with a status of 404 (Not Found)
:7860/favicon.ico:1     Failed to load resource: the server responded with a status of 403 (Forbidden)
127.0.0.1/:1 Uncaught (in promise) Error: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
3canvas-zoom.js?1689829204.6660552:1186 work

Additional information

No response

[Bug]: Not working as expected

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Errors are thrown during generation, and the results are not as expected.

Steps to reproduce the problem

Run according to the instructions, for both txt2img and img2img.

What should have happened?

It should generate the result as indicated.

Commit where the problem happens

webui: f865d3e11647dfd6c7b2cdf90dde24680e58acd8
extension: 88a04c3

What browsers do you use to access the UI ?

No response

Command Line Arguments

None

Console logs

2023-07-23 20:08:16,895 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 16, FPS 25, duration 0.64,  motion module mm_sd_v15.ckpt.
2023-07-23 20:08:16,895 - AnimateDiff - INFO - Loading motion module mm_sd_v15.ckpt from C:\AI\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
2023-07-23 20:08:20,731 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-07-23 20:08:21,179 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-23 20:08:21,179 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-23 20:08:21,180 - AnimateDiff - INFO - Injection finished.
 25%|█████████████████████                                                               | 1/4 [00:09<00:29,  9.97s/it]Exception in callback H11Protocol.timeout_keep_alive_handler()                                    | 0/4 [00:00<?, ?it/s]
handle: <TimerHandle when=21760.078 H11Protocol.timeout_keep_alive_handler()>
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_state.py", line 249, in _fire_event_triggered_transitions
    new_state = EVENT_TRIGGERED_TRANSITIONS[role][state][event_type]
KeyError: <class 'h11._events.ConnectionClosed'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\d3xp2\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 383, in timeout_keep_alive_handler
    self.conn.send(event)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py", line 468, in send
    data_list = self.send_with_data_passthrough(event)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py", line 493, in send_with_data_passthrough
    self._process_event(self.our_role, event)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py", line 242, in _process_event
    self._cstate.process_event(role, type(event), server_switch_event)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_state.py", line 238, in process_event
    self._fire_event_triggered_transitions(role, event_type)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_state.py", line 251, in _fire_event_triggered_transitions
    raise LocalProtocolError(
h11._util.LocalProtocolError: can't handle event type ConnectionClosed when role=SERVER and state=SEND_RESPONSE
API error: POST: http://127.0.0.1:7860/api/predict {'error': 'LocalProtocolError', 'detail': '', 'body': '', 'errors': "Can't send data when our state is ERROR"}
                                                                                                                       ╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py:162 in __call__                   │
│                                                                                                                      │
│ C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py:109 in __call__                     │
│                                                                                                                      │
│                                               ... 7 frames hidden ...                                                │
│                                                                                                                      │
│ C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py:468 in send                                   │
│                                                                                                                      │
│   467 │   │   """                                                                                                    │
│ ❱ 468 │   │   data_list = self.send_with_data_passthrough(event)                                                     │
│   469 │   │   if data_list is None:                                                                                  │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │ event = Response(status_code=200, headers=<Headers([(b'date', b'Sun, 23 Jul 2023 19:09:05 GMT'), (b'server',     │ │
│ │         b'uvicorn'), (b'content-length', b'22667'), (b'content-type', b'application/json'), (b'x-process-time',  │ │
│ │         b'7.2564')])>, http_version=b'1.1', reason=b'OK')                                                        │ │
│ │  self = <h11._connection.Connection object at 0x00000200E8A84E80>                                                │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│                                                                                                                      │
│ C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py:483 in send_with_data_passthrough             │
│                                                                                                                      │
│   482 │   │   if self.our_state is ERROR:                                                                            │
│ ❱ 483 │   │   │   raise LocalProtocolError("Can't send data when our state is ERROR")                                │
│   484 │   │   try:                                                                                                   │
│                                                                                                                      │
│ ╭───────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────╮ │
│ │ event = Response(status_code=200, headers=<Headers([(b'date', b'Sun, 23 Jul 2023 19:09:05 GMT'), (b'server',     │ │
│ │         b'uvicorn'), (b'content-length', b'22667'), (b'content-type', b'application/json'), (b'x-process-time',  │ │
│ │         b'7.2564')])>, http_version=b'1.1', reason=b'OK')                                                        │ │
│ │  self = <h11._connection.Connection object at 0x00000200E8A84E80>                                                │ │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
LocalProtocolError: Can't send data when our state is ERROR
 50%|██████████████████████████████████████████                                          | 2/4 [00:21<00:21, 10.75s/it]ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 109, in __call__
    await response(scope, receive, send)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\responses.py", line 273, in wrap
    await func()
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 134, in stream_response
    return await super().stream_response(send)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\responses.py", line 255, in stream_response
    await send(
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 159, in _send
    await send(message)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 512, in send
    output = self.conn.send(event)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py", line 468, in send
    data_list = self.send_with_data_passthrough(event)
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\h11\_connection.py", line 483, in send_with_data_passthrough
    raise LocalProtocolError("Can't send data when our state is ERROR")
h11._util.LocalProtocolError: Can't send data when our state is ERROR
Task exception was never retrieved
future: <Task finished name='dhvudlkwb2_590' coro=<Queue.process_events() done, defined at C:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\queueing.py:343> exception=ValueError('[<gradio.queueing.Event object at 0x00000200E8A878B0>] is not in list')>
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\queueing.py", line 370, in process_events
    while response.json.get("is_generating", False):
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 538, in json
    return self._json_response_data
AttributeError: 'AsyncRequest' object has no attribute '_json_response_data'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\queueing.py", line 432, in process_events
    self.active_jobs[self.active_jobs.index(events)] = None
ValueError: [<gradio.queueing.Event object at 0x00000200E8A878B0>] is not in list
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:43<00:00, 10.94s/it]
2023-07-23 20:09:39,536 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.33<00:00,  9.21s/it]
2023-07-23 20:09:39,537 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-23 20:09:39,538 - AnimateDiff - INFO - Removal finished.
2023-07-23 20:09:39,538 - AnimateDiff - INFO - Merging images into GIF.
2023-07-23 20:09:41,505 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|████████████████████████████████████████████████████████████████████| 4/4 [00:40<00:00, 10.06s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 4/4 [00:40<00:00,  9.21s/it]

Additional information

No response

[Feature]: How do I use the API to call this function? thanks.

Expected behavior

payload = {
    "prompt": aPrompt,
    "negative_prompt": aNprompt,
    "width": 512,
    "height": 768,
    "steps": 25,
    "sampler_index": "Euler a"
}

response = requests.post(url=f'{url}/sdapi/v1/txt2img', json=payload)


This is a request that calls txt2img, but I'm not sure how to invoke AnimateDiff here to generate an animation. Thanks.
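For reference, a hedged sketch of how extension options are normally forwarded to /sdapi/v1/txt2img: AUTOMATIC1111 passes them through the alwayson_scripts field of the payload. The script key and the exact argument fields below are assumptions about the extension's API schema, so verify them against the documentation of the version you have installed.

import requests

url = "http://127.0.0.1:7860"
aPrompt = "a girl walking on the beach"   # placeholder prompt
aNprompt = "low quality"                  # placeholder negative prompt

payload = {
    "prompt": aPrompt,
    "negative_prompt": aNprompt,
    "width": 512,
    "height": 768,
    "steps": 25,
    "sampler_index": "Euler a",
    # Extension options ride along in alwayson_scripts. The script key and the
    # argument fields below are assumptions about the extension's API schema;
    # check the extension docs for the exact structure.
    "alwayson_scripts": {
        "AnimateDiff": {
            "args": [
                {
                    "enable": True,              # turn AnimateDiff on for this request
                    "model": "mm_sd_v15.ckpt",   # motion module file name
                    "video_length": 16,          # number of frames to generate
                    "fps": 8,                    # frame rate of the saved animation
                    "format": ["GIF"],           # requested output format(s)
                }
            ]
        }
    },
}

response = requests.post(url=f"{url}/sdapi/v1/txt2img", json=payload)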

Errors when I try to use it!

Expected behavior

I got all of these errors; any idea how to solve them, or what the cause is?

To create a public link, set share=True in launch().
Startup time: 16.5s (import torch: 3.6s, import gradio: 1.8s, import ldm: 0.8s, other imports: 2.2s, setup codeformer: 0.1s, list SD models: 0.2s, load scripts: 4.2s, create ui: 2.8s, gradio launch: 0.5s, add APIs: 0.2s).
Applying attention optimization: xformers... done.
Textual inversion embeddings loaded(0):
Model loaded in 8.6s (load weights from disk: 1.7s, create model: 1.3s, apply weights to model: 2.8s, apply half(): 1.4s, move model to device: 1.4s, calculate empty prompt: 0.1s).
preload_extensions_git_metadata for 17 extensions took 10.29s
2023-07-18 14:49:28,013 - AnimateDiff - INFO - AnimateDiff process start with video length 16, FPS 8, motion module mm_sd_v15.ckpt.
2023-07-18 14:49:28,014 - AnimateDiff - INFO - Loading motion module mm_sd_v15.ckpt from F:\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
2023-07-18 14:49:33,197 - AnimateDiff - WARNING - Missing keys
2023-07-18 14:49:33,740 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-18 14:49:33,740 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-18 14:49:33,744 - AnimateDiff - INFO - Injection finished.
Data shape for DDIM sampling is (16, 4, 64, 64), eta 0.0
Running DDIM Sampling with 20 timesteps
DDIM Sampler: 0%| | 0/20 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(xs9uqqavgtd1wd5)', 'a girl portrait with the hair in the wind.', '', [], 20, 19, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', True, 16, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000027DBA219000>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000027DBA218670>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000027DCA1B0340>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 55, in f
    res = list(func(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\call_queue.py", line 35, in f
    res = func(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "F:\stable-diffusion-webui\modules\processing.py", line 620, in process_images
    res = process_images_inner(p)
  File "F:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "F:\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "F:\stable-diffusion-webui\modules\processing.py", line 992, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "F:\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 222, in sample
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "F:\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 51, in launch_sampling
    return func()
  File "F:\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 222, in <lambda>
    samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "F:\stable-diffusion-webui\modules\sd_samplers_compvis.py", line 58, in p_sample_ddim_hook
    res = self.orig_p_sample_ddim(x_dec, cond, ts, *args, unconditional_conditioning=unconditional_conditioning, **kwargs)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 212, in p_sample_ddim
    model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "F:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "F:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "F:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
    h = module(h, emb, context)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
    x = layer(x, context)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 79, in forward
    hidden_states = self.temporal_transformer(hidden_states, encoder_hidden_states, attention_mask)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 151, in forward
    hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 215, in forward
    hidden_states = attention_block(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 539, in forward
    hidden_states = self._memory_efficient_attention_xformers(query, key, value, attention_mask)
  File "F:\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 468, in _memory_efficient_attention_xformers
    hidden_states = xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask,
  File "F:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
    out, *_ = op.apply(inp, needs_gradient=False)
  File "F:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
    out, lse, rng_seed, rng_offset = cls.OPERATOR(
  File "F:\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
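The crash happens inside xformers' memory-efficient attention when it is applied to the motion module's tensors; "invalid configuration argument" generally means the CUDA kernel rejected the launch configuration for those shapes. Below is a hedged sketch of a defensive fallback, not the extension's actual code, assuming 3-D (batch*heads, seq_len, head_dim) query/key/value tensors (a layout both APIs accept): try xformers first, and fall back to PyTorch's scaled_dot_product_attention if it raises.

import torch
import torch.nn.functional as F

def attention_with_fallback(query, key, value, attention_mask=None):
    # Sketch only: try xformers' fused kernel first; if the CUDA kernel rejects
    # the launch configuration for these shapes, fall back to PyTorch SDP attention.
    try:
        import xformers.ops
        return xformers.ops.memory_efficient_attention(query, key, value, attn_bias=attention_mask)
    except RuntimeError:
        return F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)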

[Bug]: Strange blocky artefacts

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

The images produced have strange artefacts with blocks and lines. The prompt does seem to affect the results, and AnimateDiff appears to be trying to keep them stable.

GIF (AnimateDiff enabled): 00007-2105789079

Just a single image (AnimateDiff disabled)

Prompt

4 frames, 8 fps, 0 loop

kermit
Negative prompt: NSFW, Cleavage, Pubic Hair, Nudity, Naked, Au naturel, Watermark, Text, censored, deformed, bad anatomy, disfigured, poorly drawn face, mutated, extra limb, ugly, poorly drawn hands, missing limb, floating limbs, disconnected limbs, disconnected head, malformed hands, long neck, mutated hands and fingers, bad hands, missing fingers, cropped, worst quality, low quality, mutation, poorly drawn, huge calf, bad hands, fused hand, missing hand, disappearing arms, disappearing thigh, disappearing calf, disappearing legs, missing fingers, fused fingers, abnormal eye proportion, Abnormal hands, abnormal legs, abnormal feet, abnormal fingers
Steps: 50, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2105789079, Face restoration: CodeFormer, Size: 512x512, Model hash: 9aba26abdf, Model: deliberate_v2, Version: v1.5.1

Steps to reproduce the problem

  1. Update webui to latest
  2. Update AnimateDiff extension to latest
  3. Download model
  4. Use prompt from What happened?
  5. Enable AnimateDiff
  6. Strange GIF with artefacts

What should have happened?

  1. A stable GIF of Kermit without artefacts

Commit where the problem happens

webui: 68f336bd994bed5442ad95bad6b6ad5564a5409a
extension: bcca007

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome

Command Line Arguments

.\webui.bat --listen --enable-insecure-extension-access --api

Console logs

XHRGET
http://192.168.1.116:7860/openpose_editor_index
[HTTP/1.1 404 Not Found 1ms]

gradio_ver:3.23.0 civitai_helper.js:337:13
found active tab: undefined civitai_helper.js:438:21

(Those are emitted when the UI starts up; no logs emitted during GIF generation)

Additional information

Command line logs

PS C:\Users\USERNAME\stable-diffusion-webui> .\webui.bat --listen --enable-insecure-extension-access --api
venv "C:\Users\USERNAME\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.5.1
Commit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a

Launching Web UI with arguments: --listen --enable-insecure-extension-access --api
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\Users\USERNAME\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
2023-07-29 10:27:46,303 - ControlNet - INFO - ControlNet v1.1.233
ControlNet preprocessor location: C:\Users\USERNAME\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-07-29 10:27:46,492 - ControlNet - INFO - ControlNet v1.1.233
Loading weights [9aba26abdf] from C:\Users\USERNAME\stable-diffusion-webui\models\Stable-diffusion\deliberate_v2.safetensors
Creating model from config: C:\Users\USERNAME\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying attention optimization: Doggettx... done.
Model loaded in 3.6s (load weights from disk: 0.7s, create model: 0.5s, apply weights to model: 0.6s, apply half(): 0.7s, move model to device: 1.1s).
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 14.6s (launcher: 2.3s, import torch: 2.6s, import gradio: 0.9s, setup paths: 0.6s, other imports: 0.6s, opts onchange: 1.3s, load scripts: 1.4s, create ui: 0.5s, gradio launch: 4.2s).
2023-07-29 10:27:54,467 - AnimateDiff - INFO - AnimateDiff process start with video Max frames 4, FPS 8, duration 0.5,  motion module mm_sd_v15.ckpt.
2023-07-29 10:27:54,468 - AnimateDiff - INFO - Loading motion module mm_sd_v15.ckpt from C:\Users\USERNAME\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
2023-07-29 10:27:57,632 - AnimateDiff - WARNING - Missing keys <All keys matched successfully>
2023-07-29 10:27:57,942 - AnimateDiff - INFO - Hacking GroupNorm32 forward function.
2023-07-29 10:27:57,942 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet input blocks.
2023-07-29 10:27:57,943 - AnimateDiff - INFO - Injecting motion module mm_sd_v15.ckpt into SD1.5 UNet output blocks.
2023-07-29 10:27:57,943 - AnimateDiff - INFO - Injection finished.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:21<00:00,  2.37it/s]
2023-07-29 10:28:20,974 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet input blocks.████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:20<00:00,  2.41it/s]
2023-07-29 10:28:20,974 - AnimateDiff - INFO - Removing motion module from SD1.5 UNet output blocks.
2023-07-29 10:28:20,974 - AnimateDiff - INFO - Restoring GroupNorm32 forward function.
2023-07-29 10:28:20,974 - AnimateDiff - INFO - Removal finished.
2023-07-29 10:28:20,974 - AnimateDiff - INFO - Merging images into GIF.
2023-07-29 10:28:21,285 - AnimateDiff - INFO - AnimateDiff process end.
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:22<00:00,  2.23it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:22<00:00,  2.41it/s]

[Bug]:

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Just reporting some bugs I found. Love your work, keep going!

  1. Character limit bug: if you use a medium-sized or longer prompt, it will generate two different images, and they won't move much either.

  2. Using the seed of a statically generated photo does not reproduce that image when the extension is enabled. The results are very different, almost as if the seed is ignored.

  3. The muted colors, but I think you're already fixing that.

thank you! <3

Steps to reproduce the problem

  1. Make a decent-sized prompt, with at least 25 words
  2. Try replicating a static generation in AnimateDiff using the same seed

What should have happened?

It should have worked.

Commit where the problem happens

webui:
extension:

What browsers do you use to access the UI ?

No response

Command Line Arguments

idk

Console logs

idk

Additional information

No response

[Bug]: too many tokens in negative causes weird behavior

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Under 75 tokens in the negative prompt, it seems to work as intended.

If we double the negative prompt, it starts to produce two different sets of images.

The behavior holds at batch 24 (12 frames of one scene, 12 of another), even with only slightly over 75 tokens in the negative prompt.

Going down to batch 14, we start to see one half not follow the prompt well.

This deteriorates further at batch 12.

SD starts to collapse at batch 10.

Going down to 73 tokens in the negative prompt, we recover the expected behavior.

Alternatively, switching the sampler to DDIM with 77 tokens in the negative prompt seems more resistant to collapse, but something is still wrong (noisier, more washed-out color than before).

Also of note: with 73 tokens in the negative prompt, batch 15 works fine.

But go to 77 tokens and it throws an error

*** Error completing request
*** Arguments: ('task(1iahtw4e5tw20iv)', '(masterpiece), (best quality), (ultra-detailed), photorealistic, (best illustration), (an extremely delicate and beautiful), 1girl, solo, upper body, hiryuuchan, brown hair, brown eyes, (one side up), wind, orange kimono, blue sky, detailed scenery, finely detailed iris, <lora:hiryuu_nai_11-24:1:OUTD>', 'easynegativev2, (bad-hands-5:1), (verybadimagenegative:0.9), error, blurry, jpeg artifacts, cropped, worst quality, low quality, normal quality, (worst quality, low quality:1.4), bad anatomy, (extra hand), extra digits, extra fingers, extra limb, extra arm, bad quality', [], 50, 6, False, False, 1, 1, 6.5, 1728598878.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, 0, 0, 0, 0, 0.25, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000001448A1D9140>}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'inpaint_global_harmonious', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': <object object at 0x000001448A1D8550>}, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', True, 15, 8, 'mm_sd_v15.ckpt', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001448AC69990>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001448AC6B7F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001448AC68610>, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nIND_PLUS:1,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0\nIND_PLUS_a:1,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0\nIND_PLUS_b:1,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0\nIND_PLUS_c:1,0,0,0,1,1,0,1,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD:1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_a:1,0,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_b:1,1,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_c:1,1,1,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_d:1,1,1,1,0,1,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_e:1,1,1,1,1,0,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_de:1,1,1,1,0,0,1,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_f:1,1,1,1,1,1,0,1,1,1,1,1,0,0,0,0,0\nINS_MIDD_g:1,1,1,1,1,1,1,0,1,1,1,1,0,0,0,0,0\nINS_MIDD_h:1,1,1,1,1,1,1,1,0,1,1,1,0,0,0,0,0\nINS_MIDD_i:1,1,1,1,1,1,1,1,1,0,1,1,0,0,0,0,0\nINS_MIDD_j:1,1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0\nINS_MIDD_k:1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTD_1:1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0\nOUTD_2:1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0\nOUTD_3:1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0\nOUTD_4:1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0\nOUTD_12:1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0\nOUTD_23:1,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0\nOUTD_34:1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0\nOUTD_13:1,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0\nOUTD_14:1,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0\nOUTD_24:1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0\nOUTD_234:1,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0\nOUTD_134:1,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0\nOUTD_124:1,0,0,0,0,0,0,0,1,1,0,1,0,0,0,0,0\nOUTD_123:1,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nOUTALL_a:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,0\nOUTALL_b:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,0,1\nOUTALL_c:1,0,0,0,0,0,0,0,1,1,1,1,1,1,0,1,1\nOUTALL_d:1,0,0,0,0,0,0,0,1,1,1,1,1,0,1,1,1\nOUTALL_e:1,0,0,0,0,0,0,0,1,1,1,1,0,1,1,1,1\nMIDD_OUTS:1,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS_OUTD:1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,0\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5\nLNONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nLALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nLINS:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nLIND:1,0,0,0,0,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0\nLINALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0\nLMIDD:1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0\nLOUTD:1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0\nLOUTS:1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1\nLOUTALL:1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, '\n            <h3><strong>Combinations</strong></h3>\n            Choose a number of terms from a list, in this case we choose two artists\n            <code>{2$$artist1|artist2|artist3}</code>\n            If $$ is not provided, then 1$$ is assumed.\n            <br>\n            A range can be provided:\n            <code>{1-3$$artist1|artist2|artist3}</code>\n            In this case, a random number of artists between 1 and 3 is chosen.\n            <br/><br/>\n\n            
<h3><strong>Wildcards</strong></h3>\n            <p>Available wildcards</p>\n            <ul>\n        <li>__angle__</li><li>__background__</li><li>__bra_colors__</li><li>__bra_patterns__</li><li>__bra_type__</li><li>__clothing__</li><li>__footwear__</li><li>__limbwear__</li><li>__location__</li><li>__underwear__</li><li>__view__</li></ul>\n            <br/>\n            <code>WILDCARD_DIR: scripts/wildcards</code><br/>\n            <small>You can add more wildcards by creating a text file with one term per line and name is mywildcards.txt. Place it in scripts/wildcards. <code>__mywildcards__</code> will then become available.</small>\n        ', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\call_queue.py", line 55, in f
        res = list(func(*args, **kwargs))
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\call_queue.py", line 35, in f
        res = func(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
        processed = processing.process_images(p)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\processing.py", line 620, in process_images
        res = process_images_inner(p)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\processing.py", line 739, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\processing.py", line 992, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 439, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
        return func()
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 439, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 177, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
        h = module(h, emb, context)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 21, in mm_tes_forward
        x = layer(x, context)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Novel AI Diffusion\Stable Diffusion git\stable-diffusion-webui\extensions\sd-webui-animatediff\motion_module.py", line 76, in forward
        hidden_states = torch.stack([input_cond, input_uncond], dim=0)
    RuntimeError: stack expects each tensor to be equal size, but got [8, 320, 64, 64] at entry 0 and [7, 320, 64, 64] at entry 1
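A likely explanation for the 8-vs-7 mismatch: WebUI splits any prompt longer than 75 tokens into multiple 77-token chunks and batches the cond/uncond passes accordingly, so the batch that reaches the motion module is no longer an even [cond, uncond] pair, and its torch.stack([input_cond, input_uncond], dim=0) call receives halves of different sizes. A minimal sketch of the failing assumption, with shapes taken from the log above and names hypothetical:

import torch

# With a 15-entry batch and mismatched cond/uncond chunking, a naive halving
# yields unequal pieces, which torch.stack cannot combine.
hidden_states = torch.randn(15, 320, 64, 64)
input_cond, input_uncond = hidden_states.chunk(2)   # shapes (8, ...) and (7, ...)

try:
    torch.stack([input_cond, input_uncond], dim=0)
except RuntimeError as err:
    print(err)  # "stack expects each tensor to be equal size ..."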

Steps to reproduce the problem

See attached screenshots

What should have happened?

It should apply consistent inputs to all frames

Commit where the problem happens

webui:
version: v1.4.1  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: N/A  •  gradio: 3.32.0  •  checkpoint: e9a14f558d

extension:
sd-webui-animatediff https://github.com/continue-revolution/sd-webui-animatediff master [e8c88a4]

What browsers do you use to access the UI ?

No response

Command Line Arguments

--opt-sdp-attention --no-half-vae

Console logs

See above

Additional information

No response

[Bug]: Possibly unable to find animation models? (DirectML WebUI)

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

Whenever I try to generate anything, it produces an error message, then creates 16 totally different images and puts them together in an 8 fps slideshow. It does not animate them. I believe that is because it cannot find the motion modules.

Steps to reproduce the problem

  1. Launch WebUI
  2. Enable AnimateDiff
  3. Make sure the modules are in the right path (D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt)
  4. Generate animation
  5. Failure

What should have happened?

Even though it did not work this time, it should have animated it, and not produced the error message.

Commit where the problem happens

webui: commit 089a002
extension: 287b30af

What browsers do you use to access the UI ?

Mozilla Firefox, Google Chrome

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --theme dark
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
set OPTIMIZED_TURBO=true

call webui.bat

Console logs

2023-07-24 13:13:32,058 - AnimateDiff - INFO - Loading motion module mm_sd_v15.ckpt from D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\extensions\sd-webui-animatediff\model\mm_sd_v15.ckpt
*** Error running before_process: D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\extensions\sd-webui-animatediff\scripts\animatediff.py
    Traceback (most recent call last):
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\scripts.py", line 466, in before_process
        script.before_process(p, *script_args)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\extensions\sd-webui-animatediff\scripts\animatediff.py", line 128, in before_process
        self.inject_motion_modules(p, model)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\extensions\sd-webui-animatediff\scripts\animatediff.py", line 79, in inject_motion_modules
        mm_state_dict = torch.load(model_path, map_location=device)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\safe.py", line 108, in load
        return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\safe.py", line 156, in load_with_extra
        return unsafe_torch_load(filename, *args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 809, in load
        return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1172, in _load
        result = unpickler.load()
      File "C:\Users\fungus\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1213, in load
        dispatch[key[0]](self)
      File "C:\Users\fungus\AppData\Local\Programs\Python\Python310\lib\pickle.py", line 1254, in load_binpersid
        self.append(self.persistent_load(pid))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1142, in persistent_load
        typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1116, in load_tensor
        wrap_storage=restore_location(storage, location),
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 1086, in restore_location
        return default_restore_location(storage, str(map_location))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\serialization.py", line 220, in default_restore_location
        raise RuntimeError("don't know how to restore data location of "
    RuntimeError: don't know how to restore data location of torch.storage.UntypedStorage (tagged with privateuseone:0)
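The RuntimeError comes from torch.load(model_path, map_location=device) being asked to restore tensors directly onto the DirectML device ("privateuseone:0"), which PyTorch's serialization does not know how to do. A hedged workaround sketch, not the extension's actual code: deserialize the motion module state dict on CPU first and move it to the device afterwards.

import torch

def load_motion_module_state_dict(model_path: str, device: torch.device) -> dict:
    # torch.load cannot map tensors straight onto a DirectML ("privateuseone") device,
    # so load onto CPU first and move each tensor to the target device afterwards.
    state_dict = torch.load(model_path, map_location="cpu")
    return {name: tensor.to(device) for name, tensor in state_dict.items()}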

---
  0%|                                                                                           | 0/20 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(5ngtapseuwup5ra)', 'by zackary911, (salazzle, pokemon, anthro), scalie, scales, [female|ambiguous gender], (girly:1.3), (muscular:0.3), [musclegut, slightly chubby, overweight:8], flat chest, moobs, blowjob', 'hair,deformed,ugly,blurry,bad anatomy,disfigured,extra limb,deformed hands,deformed feet,face out of frame,multiple tails,((bad anatomy)),disfigured,deformed,malformed,mutant,monstrous,ugly,gross,disgusting,blurry,poorly drawn,extra limbs,extra fingers,missing limbs,amputee,malformed hands,multi balls,multi penis,floating penis,dialogue,text,deformed face,hair,cub,child, (((3 toes, 5 toes)))', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 256, 256, False, 0.75, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, 0, False, 0, False, False, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', True, 0, 8, 4, 'mm_sd_v15.ckpt', False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
    Traceback (most recent call last):
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\call_queue.py", line 55, in f
        res = list(func(*args, **kwargs))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\call_queue.py", line 35, in f
        res = func(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\txt2img.py", line 64, in txt2img
        processed = processing.process_images(p)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\processing.py", line 623, in process_images
        res = process_images_inner(p)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\processing.py", line 742, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\processing.py", line 995, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 439, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 278, in launch_sampling
        return func()
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 439, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 177, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
        h = module(h, emb, context)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\extensions\sd-webui-animatediff\scripts\animatediff.py", line 25, in mm_tes_forward
        x = layer(x)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\AI shit\A1111 Web UI Autoinstaller\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 109, in forward
        assert x.shape[1] == self.channels
    AssertionError

---

Additional information

The second error, the AssertionError, appears when generation is attempted a second time after the first attempt fails.
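For context, the assertion that fires in openaimodel.py checks that the tensor entering a resampling block still has the channel count the block was built with. The following is a purely illustrative stand-in (not the extension's or ldm's actual code) showing how that check trips when a wrapped forward hands a layer a tensor with the wrong channel count:

    import torch
    import torch.nn as nn

    # Illustrative stand-in for the block whose forward asserts
    # x.shape[1] == self.channels; the real class differs, the check is the same.
    class DownsampleLike(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.channels = channels
            self.op = nn.Conv2d(channels, channels, 3, stride=2, padding=1)

        def forward(self, x):
            assert x.shape[1] == self.channels  # the check that raises above
            return self.op(x)

    block = DownsampleLike(320)
    block(torch.randn(1, 320, 64, 64))  # channel count matches: passes

    try:
        block(torch.randn(1, 640, 64, 64))  # wrong channel count
    except AssertionError:
        print("channel mismatch: x.shape[1] != self.channels, as in the log")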

[Feature]: The body in the animation doesn't move.

Expected behavior

Hello, the animation looks good, but something is missing: the body doesn't move properly. When I tried to make an animation from this picture, the body would not move. It would be nice if this could be fixed so that bodies can move freely. Thanks.
