
animatediff-cli's Introduction

animatediff


An animatediff refactor, because I can, with significantly lower VRAM usage.

Also, infinite generation length support! yay!

LoRA loading is ABSOLUTELY NOT IMPLEMENTED YET!

PRs welcome! 😆😅

This can theoretically run on CPU, but it's not recommended. Should work fine on a GPU, nVidia or otherwise, but I haven't tested on non-CUDA hardware. Uses PyTorch 2.0 Scaled-Dot-Product Attention (aka builtin xformers) by default, but you can pass --xformers to force using xformers if you really want.

How to use

I should write some more detailed steps, but here's the gist of it:

git clone https://github.com/neggles/animatediff-cli
cd animatediff-cli
python3.10 -m venv .venv
source .venv/bin/activate
# install Torch. Use whatever your favourite torch version >= 2.0.0 is, but, good luck on non-nVidia...
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# install the rest of all the things (probably! I may have missed some deps.)
python -m pip install -e '.[dev]'
# you should now be able to
animatediff --help
# There's a nice pretty help screen with a bunch of info that'll print here.

From here you'll need to put whatever checkpoint you want to use into data/models/sd, copy one of the prompt configs in config/prompts, edit it with your choices of prompt and model (model paths in prompt .json files are relative to data/, e.g. models/sd/vanilla.safetensors), and off you go.
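If you'd rather script that config tweak than edit it by hand, something like the sketch below works. It is illustrative only: the only field names assumed here are "path" and "scheduler" (both visible in the shipped configs and in the errors further down this page); copy an existing config from config/prompts and keep the rest of its structure as-is.

# Sketch only: copy a shipped prompt config and point it at your model.
# Only the "path" and "scheduler" keys are assumed; keep everything else
# from the copied config unchanged.
import json
import shutil
from pathlib import Path

src = Path("config/prompts/01-ToonYou.json")   # one of the shipped configs
dst = Path("config/prompts/waifu.json")        # hypothetical copy, used in the example below
shutil.copy(src, dst)

cfg = json.loads(dst.read_text())
cfg["path"] = "models/sd/vanilla.safetensors"  # relative to data/
cfg["scheduler"] = "k_dpmpp_2m_sde"            # must be one of the supported scheduler names
dst.write_text(json.dumps(cfg, indent=2))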

Then it's something like (for an 8GB card):

animatediff generate -c 'config/prompts/waifu.json' -W 576 -H 576 -L 128 -C 16

You may have to drop -C down to 8 on cards with less than 8GB VRAM, and you can raise it to 20-24 on cards with more. 24 is max.

N.B. generating 128 frames is slow...
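For intuition: -L is the total frame count and -C is the sliding context window the motion module sees at once, so long animations are stitched together from overlapping windows (the repo also has overlap and stride settings). The sketch below is illustrative only and is not the repo's actual scheduling code.

# Illustrative only: how a long animation can be covered by fixed-size,
# overlapping context windows. Not the repo's actual implementation.
def context_windows(length: int, context: int, overlap: int = 4):
    """Yield overlapping frame-index windows covering `length` frames."""
    stride = context - overlap
    start = 0
    while start < length:
        end = min(start + context, length)
        yield list(range(start, end))
        if end == length:
            break
        start += stride

for window in context_windows(length=32, context=16):
    print(window[0], "->", window[-1])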

RiFE!

I have added experimental support for rife-ncnn-vulkan using the animatediff rife interpolate command. It has fairly self-explanatory help, and it has been tested on Linux, but I've no idea if it'll work on Windows.

Either way, you'll need ffmpeg installed on your system and present in PATH, and you'll need to download the rife-ncnn-vulkan release for your OS of choice from the GitHub repo (above). Unzip it, and place the extracted folder at data/rife/. You should have a data/rife/rife-ncnn-vulkan executable, or data\rife\rife-ncnn-vulkan.exe on Windows.

You'll also need to reinstall the repo/package with:

python -m pip install -e '.[rife]'

or just install ffmpeg-python manually yourself.

The default is to multiply the frame count by 8, turning an 8fps animation into a 64fps one, then encode that to a 60fps WebM. (If you pick GIF mode, it'll be 50fps, because GIFs are cursed and encode frame durations in 1/100ths of a second.)
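The arithmetic behind that note, as a quick sanity check (illustrative only):

# Rough numbers behind the framerates above (illustrative only).
in_fps = 8
multiplier = 8                          # RIFE multiplies the frame count by 8
interpolated_fps = in_fps * multiplier  # 64 fps worth of frames
webm_fps = 60                           # WebM output is encoded at 60 fps

# GIF frame delays are stored in centiseconds (1/100 s), so 64 fps isn't
# representable; the nearest clean delay is 2 cs per frame, i.e. 50 fps.
gif_fps = 100 // 2
print(interpolated_fps, webm_fps, gif_fps)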

Seems to work pretty well...

TODO:

In no particular order:

  • Infinite generation length support
  • RIFE support for motion interpolation (rife-ncnn-vulkan isn't the greatest implementation)
  • Export RIFE interpolated frames to a video file (webm, mp4, animated webp, hevc mp4, gif, etc.)
  • Generate infinite length animations on a 6-8GB card (at 512x512 with 8-frame context, but hey it'll do)
  • Torch SDP Attention (makes xformers optional)
  • Support for clip_skip in prompt config
  • Experimental support for torch.compile() (upstream Diffusers bugs slow this down a little but it's still zippy)
  • Batch your generations with --repeat! (e.g. --repeat 10 will repeat all your prompts 10 times)
  • Call the animatediff.cli.generate() function from another Python program without reloading the model every time (see the sketch after this list)
  • Drag remaining old Diffusers code up to latest (mostly)
  • Add a webUI (maybe, there are people wrapping this already so maybe not?)
  • img2img support (start from an existing image and continue)
  • Stop using custom modules where possible (should be able to use Diffusers for almost all of it)
  • Automatic generate-then-interpolate-with-RIFE mode
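Until generate() is directly callable as noted above, one way to drive this from another Python program is simply to shell out to the CLI. This is only a sketch, using the flags documented earlier in this README.

# Sketch only: invoke the CLI from Python with the documented flags.
import subprocess

subprocess.run(
    [
        "animatediff", "generate",
        "-c", "config/prompts/waifu.json",
        "-W", "576", "-H", "576",
        "-L", "128", "-C", "16",
    ],
    check=True,
)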

Credits:

see guoyww/AnimateDiff (very little of this is my work)

n.b. the copyright notice in COPYING is missing the original authors' names, solely because the original repo (as of this writing) has no name attached to the license. I have, however, used the same license they did (Apache 2.0).

animatediff-cli's People

Contributors

neggles, pre-commit-ci[bot], skquark, threeal


animatediff-cli's Issues

[Enhancement][Bugfix] CPU inferencing does not work

Doing animatediff generate --device cpu does not work. Generating with cuda does, and the program works fine otherwise, but cpu support does not appear to be functional.

Traceback (most recent call last):

/animatediff-cli/src/animatediff/cli.py:154 in generate

  151   set_diffusers_verbosity_error()
  152
  153   device = torch.device(device)
❱ 154   device_info = torch.cuda.get_device_properties(device)
  155
  156   logger.info(device_info_str(device_info))
  157   has_bf16 = torch.cuda.is_bf16_supported()

/animatediff-cli/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:396 in get_device_properties

  393       _CudaDeviceProperties: the properties of the device
  394   """
  395   _lazy_init()  # will define _get_device_properties
❱ 396   device = _get_device_index(device, optional=True)
  397   if device < 0 or device >= device_count():
  398       raise AssertionError("Invalid device id")
  399   return _get_device_properties(device)  # type: ignore[name-defined]

/animatediff-cli/venv/lib/python3.10/site-packages/torch/cuda/_utils.py:32 in _get_device_index

  29           if device.type not in ['cuda', 'cpu']:
  30               raise ValueError('Expected a cuda or cpu device, but got: {}'.format(
  31       elif device.type != 'cuda':
❱ 32           raise ValueError('Expected a cuda device, but got: {}'.format(device))
  33   if not torch.jit.is_scripting():
  34       if isinstance(device, torch.cuda.device):
  35           return device.idx

ValueError: Expected a cuda device, but got: cpu

If you follow the error further down the line, it turns into an issue with fp16 not being possible on cpu. It might be extremely slow, but proper cpu support may be worth adding.
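A minimal sketch of the kind of guard that could avoid the crash above (illustrative only, not a tested patch against cli.py): skip the CUDA-only device-property and bf16 queries when the selected device isn't CUDA, and fall back to fp32.

# Illustrative only: guard the CUDA-only calls that fail when --device cpu is used.
import torch

device = torch.device("cpu")
if device.type == "cuda":
    device_info = torch.cuda.get_device_properties(device)
    has_bf16 = torch.cuda.is_bf16_supported()
else:
    device_info = None
    has_bf16 = False  # CPU path: stick to fp32, since fp16 inference on CPU is not supported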

I fixed Cpu Mode

Hey, when you enable CPU mode it still fails at the CUDA check in certain parts. I fixed it, but it took two files: one is src/animatediff/generate.py and the other is src/animatediff/pipeline/animation.py. All you have to do is change the "cuda" strings in brackets to "cpu" in both of those files. You don't need to mess with the one by the iGPU or the torch.cuda.empty_cache() lines, just the ones in brackets.

How to use rife?

Hi, can you show in the README, in detail, how to use RIFE? I don't know how to use it.

The generated images and GIFs are pure black

Using scheduler "k_dpmpp_2m_sde" (DPMSolverMultistepScheduler)  generate.py:68
INFO  Loading weights from E:\animatediff-cli\data\models\sd\majicmixRealistic_v7.safetensors  generate.py:73
08:13:58 INFO  Merging weights into UNet...  generate.py:90
08:13:59 INFO  Creating AnimationPipeline...  generate.py:110
INFO  No TI embeddings found  generate.py:131
INFO  Sending pipeline to device "cuda"  pipeline.py:22
INFO  Selected data types: unet_dtype=torch.float16, tenc_dtype=torch.float16, vae_dtype=torch.float32  device.py:90
INFO  Using channels_last memory format for UNet and VAE  device.py:109
08:14:02 INFO  Saving prompt config to output directory  cli.py:290
INFO  Initialization complete!  cli.py:299
INFO  Generating 1 animations from 1 prompts  cli.py:300
INFO  Running generation 1 of 1 (prompt 1)  cli.py:309
INFO  Generation seed: 10925512164  cli.py:319
100% ━━━━━━━━━━ 20/20 [ 0:11:50 < 0:00:00, ? it/s ]
08:26:00 INFO  Generation complete, saving...  generate.py:175
INFO  Saved sample to output\2024-03-18T08-13-43-girl-majicmixrealistic_v7\00_10925561214_1girl_solo_best-quality_masterpiece_looking-at-viewer_purple-hair.gif  generate.py:188
Saving frames to 00-10925564  100% ━━━━━━━━━━ 8/8 [ 0:00:00 < 0:00:00, ? it/s ]
INFO  Generation complete!  cli.py:345
INFO  Done, exiting...

The generated images and GIFs are pure black.

Error caught was: No module named 'triton'

(.venv) PS D:\python\animatediff-cli> animatediff generate
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.2+cu121 with CUDA 1201 (you have 2.1.2+cu118)
Python 3.10.11 (you have 3.10.7)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Traceback (most recent call last):
  File "C:\Users\ro612\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\ro612\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\python\animatediff-cli\.venv\Scripts\animatediff.exe\__main__.py", line 4, in <module>
  File "D:\python\animatediff-cli\src\animatediff\cli.py", line 12, in <module>
    from animatediff.generate import create_pipeline, run_inference
  File "D:\python\animatediff-cli\src\animatediff\generate.py", line 12, in <module>
    from animatediff.models.clip import CLIPSkipTextModel
  File "D:\python\animatediff-cli\src\animatediff\models\clip.py", line 7, in <module>
    from transformers.models.clip.modeling_clip import (
ImportError: cannot import name '_expand_mask' from 'transformers.models.clip.modeling_clip' (D:\python\animatediff-cli\.venv\lib\site-packages\transformers\models\clip\modeling_clip.py)
(.venv) PS D:\python\animatediff-cli> pip install triton
ERROR: Could not find a version that satisfies the requirement triton (from versions: none)
ERROR: No matching distribution found for triton
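For what it's worth, the hard failure here is the transformers import, not Triton: _expand_mask was removed from transformers.models.clip.modeling_clip in newer transformers releases, so the installed transformers is likely newer than what the repo's CLIPSkipTextModel was written against. Triton has no Windows wheels, so the pip error is expected noise, and the xformers warning just means the wheel was built against a different torch/CUDA combination than the one in the venv. A quick way to see what is actually installed (sketch only):

# Sketch only: print the torch/CUDA build and the installed transformers and
# xformers versions to compare against the warning above.
import torch
import transformers

print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
print("transformers", transformers.__version__)
try:
    import xformers
    print("xformers", xformers.__version__)
except ImportError:
    print("xformers not installed (fine; SDP attention is the default)")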

Performance, in comparison with original AnimateDiff

I'm using the same settings (number of steps, frames, resolution, scheduler) and both are running on CUDA.

I've tried two different GPUs, a GTX 1080 and a Tesla P40,

and I'm consistently getting results about 2-3 times slower with your code (although the animation also looks better).

Did you notice the same difference in performance? If not, maybe some ideas about what I could be overlooking in the settings?

Also, your code uses less VRAM, so I suspect some computation that the original AnimateDiff does on the GPU is done on the CPU here. (I had a quick look at the pipeline code and saw that sequential_mode, which runs on the CPU, is used for long videos, but I've been testing on 8-16 frame animations, so that shouldn't be the case.)

CAN'T LOAD MODEL WEIGHTS

My terminal crashes after loading 50% of the model weights. Can you please support a smaller resolution? I think it's the 512x512 limit, but I need 320x320. I'm trying to load the AbsoluteReality 1.6 model, and transformers 4.33.0 is required to run animatediff-cli. I'm on Linux, CPU only, BTW. It works in ComfyUI, but only at 320x320 resolution. I even set -C to 1 and it still crashed! Maybe the model is corrupt, I don't know. I was hoping to use this because it's supposed to use less RAM.

This uses up all my RAM. I thought it was supposed to use less; please optimize it, or add command-line args like split attention, force-fp16, --cpu, etc.

Error!!! animatediff generate -c 'config/prompts/01-ToonYou.json' -W 576 -H 576 -L 128 -C 16

I have configured the environment as required and downloaded the relevant models. When I run the above command, the following error is reported. How should I solve it?

animatediff-cli/src/animatediff/cli.py:252 in generate

  249
  250   config_path = config_path.absolute()
  251   logger.info(f"Using generation config: {relative_path(config_path)}")
❱ 252   model_config: ModelConfig = get_model_config(config_path)
  253   infer_config: InferenceConfig = get_infer_config()
  254
  255   # set sane defaults for context, overlap, and stride if not supplied

animatediff-cli/src/animatediff/settings.py:126 in get_model_config

  123
  124   @lru_cache(maxsize=2)
  125   def get_model_config(config_path: Path) -> ModelConfig:
❱ 126       settings = ModelConfig(json_config_path=config_path)
  127       return settings
  128

in pydantic.env_settings.BaseSettings.__init__:39
in pydantic.main.BaseModel.__init__:342

ValidationError: 1 validation error for ModelConfig
scheduler
  value is not a valid enumeration member; permitted: 'ddim', 'pndm', 'heun', 'unipc', 'euler', 'euler_a', 'lms', 'k_lms', 'dpm_2', 'k_dpm_2', 'dpm_2_a', 'k_dpm_2_a', 'dpmpp_2m', 'k_dpmpp_2m', 'dpmpp_sde', 'k_dpmpp_sde', 'dpmpp_2m_sde', 'k_dpmpp_2m_sde'
  (type=type_error.enum; enum_values=[<DiffusionScheduler.ddim: 'ddim'>, <DiffusionScheduler.pndm: 'pndm'>, <DiffusionScheduler.heun: 'heun'>, <DiffusionScheduler.unipc: 'unipc'>, <DiffusionScheduler.euler: 'euler'>, <DiffusionScheduler.euler_a: 'euler_a'>, <DiffusionScheduler.lms: 'lms'>, <DiffusionScheduler.k_lms: 'k_lms'>, <DiffusionScheduler.dpm_2: 'dpm_2'>, <DiffusionScheduler.k_dpm_2: 'k_dpm_2'>, <DiffusionScheduler.dpm_2_a: 'dpm_2_a'>, <DiffusionScheduler.k_dpm_2_a: 'k_dpm_2_a'>, <DiffusionScheduler.dpmpp_2m: 'dpmpp_2m'>, <DiffusionScheduler.k_dpmpp_2m: 'k_dpmpp_2m'>, <DiffusionScheduler.dpmpp_sde: 'dpmpp_sde'>, <DiffusionScheduler.k_dpmpp_sde: 'k_dpmpp_sde'>, <DiffusionScheduler.dpmpp_2m_sde: 'dpmpp_2m_sde'>, <DiffusionScheduler.k_dpmpp_2m_sde: 'k_dpmpp_2m_sde'>])


RIFE questions

Is there an extra command-line option or something to kick RIFE off? I've installed it as per the instructions, but it doesn't seem to be doing anything. Also, how do you specify movie output instead of GIF?

pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000

When I install the dependencies with python -m pip install -e '.[dev]', this error occurs. How do I solve it?

Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
ERROR: Exception:
Traceback (most recent call last):
  File "/opt/anaconda3/envs/ldm/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
    status = run_func(*args)
  File "/opt/anaconda3/envs/ldm/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 248, in wrapper
    return func(self, options, args)
  File "/opt/anaconda3/envs/ldm/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 377, in run
    requirement_set = resolver.resolve(
  File "/opt/anaconda3/envs/ldm/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
    result = self._result = resolver.resolve(
  File "/opt/anaconda3/envs/ldm/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
    state = resolution.resolve(requirements, max_rounds=max_rounds)
  File "/opt/anaconda3/envs/ldm/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 457, in resolve
    raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000

linux problem

(.venv) root@autodl-container-a1c3118008-79be1975:~/autodl-tmp/cli/animatediff-cli# animatediff generate -c 'config/prompts/A1.json' -W 576 -H 576 -L 128 -C 16
02:41:48 INFO Using generation config: config/prompts/A1.json cli.py:247
INFO Using base model: runwayml/stable-diffusion-v1-5 cli.py:258
INFO Will save outputs to ./output/2023-08-11T02-41-48-a1-realisticvisionv40_v40vae cli.py:266
INFO Checking motion module... generate.py:39
INFO Loading tokenizer... generate.py:51
INFO Loading text encoder... generate.py:53
02:41:50 INFO Loading VAE... generate.py:55
INFO Loading UNet... generate.py:57
02:42:03 INFO Loaded 417.1376M-parameter motion module unet.py:559
INFO Using scheduler "euler" (EulerDiscreteScheduler) generate.py:69
INFO Loading weights from /root/autodl-tmp/models/ckpt/realisticVisionV40_v40VAE.safetensors generate.py:74
Traceback (most recent call last):

/root/autodl-tmp/cli/animatediff-cli/src/animatediff/cli.py:273 in generate

  270   global last_model_path
  271   if pipeline is None or last_model_path != base_model_path.resolve():
  272
❱ 273       pipeline = create_pipeline(
  274           base_model=base_model_path,
  275           model_config=model_config,
  276           infer_config=infer_config,

/root/autodl-tmp/cli/animatediff-cli/src/animatediff/generate.py:77 in create_pipeline

  74       logger.info(f"Loading weights from {model_path}")
  75       if model_path.is_file():
  76           logger.debug("Loading from single checkpoint file")
❱ 77           unet_state_dict, tenc_state_dict, vae_state_dict = get_checkpoint_weights(mo
  78       elif model_path.is_dir():
  79           logger.debug("Loading from Diffusers model directory")
  80           temp_pipeline = StableDiffusionPipeline.from_pretrained(model_path)

/root/autodl-tmp/cli/animatediff-cli/src/animatediff/utils/model.py:73 in get_checkpoint_weights

  70
  71   def get_checkpoint_weights(checkpoint: Path):
  72       temp_pipeline: StableDiffusionPipeline
❱ 73       temp_pipeline, _ = checkpoint_to_pipeline(checkpoint, save=False)
  74       unet_state_dict = temp_pipeline.unet.state_dict()
  75       tenc_state_dict = temp_pipeline.text_encoder.state_dict()
  76       vae_state_dict = temp_pipeline.vae.state_dict()

/root/autodl-tmp/cli/animatediff-cli/src/animatediff/utils/model.py:54 in checkpoint_to_pipeline

  51       target_dir: Optional[Path] = None,
  52       save: bool = True,
  53   ) -> StableDiffusionPipeline:
❱ 54       logger.debug(f"Converting checkpoint {path_from_cwd(checkpoint)}")
  55       if target_dir is None:
  56           target_dir = pipeline_dir.joinpath(checkpoint.stem)
  57

/root/autodl-tmp/cli/animatediff-cli/src/animatediff/utils/util.py:44 in path_from_cwd

  41
  42   def path_from_cwd(path: PathLike) -> str:
  43       path = Path(path)
❱ 44       return str(path.absolute().relative_to(Path.cwd()))
  45

/root/miniconda3/lib/python3.10/pathlib.py:818 in relative_to

  815       cf = self._flavour.casefold_parts
  816       if (root or drv) if n == 0 else cf(abs_parts[:n]) != cf(to_abs_parts):
  817           formatted = self._format_parsed_parts(to_drv, to_root, to_parts)
❱ 818           raise ValueError("{!r} is not in the subpath of {!r}"
  819                            " OR one path is relative and the other is absolute."
  820                            .format(str(self), str(formatted)))
  821       return self._from_parsed_parts('', root if n == 1 else '',
ValueError: '/root/autodl-tmp/models/ckpt/realisticVisionV40_v40VAE.safetensors' is not in the subpath of '/root/autodl-tmp/cli/animatediff-cli' OR one path is
relative and the other is absolute.
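The failure comes from path_from_cwd() (shown in the traceback above), which assumes the checkpoint lives under the repo's working directory. A hedged sketch of a more forgiving version, as an illustration of the fix rather than the repo's actual code:

# Sketch only: fall back to the absolute path when the file lives outside the
# current working tree, instead of letting relative_to() raise.
from os import PathLike
from pathlib import Path

def path_from_cwd(path: PathLike) -> str:
    path = Path(path).absolute()
    try:
        return str(path.relative_to(Path.cwd()))
    except ValueError:
        return str(path)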

[Feature Request] img2img support for animatediff-cli

Would be great to transform an already existing image and animate it instead of first generating using prompts and then animating.

Maybe also batch support so it can take multiple images from a folder and then transform all of them into gifs automatically without manual prompting.

ImportError: cannot import name 'maybe_allow_in_graph' from 'diffusers.utils'

I got this error when I tried to run it in Colab. Not sure how to resolve it.

Traceback (most recent call last):
  File "/usr/local/bin/animatediff", line 5, in <module>
    from animatediff.cli import cli
  File "/content/animatediff-cli/src/animatediff/cli.py", line 12, in <module>
    from animatediff.generate import create_pipeline, run_inference
  File "/content/animatediff-cli/src/animatediff/generate.py", line 13, in <module>
    from animatediff.models.unet import UNet3DConditionModel
  File "/content/animatediff-cli/src/animatediff/models/unet.py", line 18, in <module>
    from .unet_blocks import (
  File "/content/animatediff-cli/src/animatediff/models/unet_blocks.py", line 9, in <module>
    from animatediff.models.attention import Transformer3DModel
  File "/content/animatediff-cli/src/animatediff/models/attention.py", line 10, in <module>
    from diffusers.utils import BaseOutput, maybe_allow_in_graph
ImportError: cannot import name 'maybe_allow_in_graph' from 'diffusers.utils' (/usr/local/lib/python3.10/dist-packages/diffusers/utils/__init__.py)

[Test] Clip_skip test!

I tested the newly added clip_skip. For anime checkpoints, 2 definitely generates cleaner and more natural images than 1.
Apart from clip_skip, the seed and prompt are identical.

(attached test videos: a.mp4, b.mp4, c.mp4, d.mp4)

[Feature Request] Batch generation with one prompt + random seed!

Thanks for the great work! Right now I have to re-run generate every time I want to try a different seed with the same prompt. It would be cool to be able to specify a number of videos to generate in sequence when a random seed (-1) is entered for a prompt, like a batch count of 10.

Did I use the embeddings correctly๏ผŸ

(screenshot attached)
Thank you for your project, it's very impressive. But I seem to have run into a problem: it looks like the embeddings were just loaded, but not actually used. Is that the case? If so, how should I use them?

Anyway to cache the loading of stable diffusion between runs?

Hi @neggles,

Glad to see active development on AnimateDiff; you've got some cool ideas going forward. I'm also working on an AnimateDiff repo focusing on the UI side. I think it would be good if I could just call your CLI from the UI to consolidate some development effort; however, I'm wondering if there's a way to cache the loading of Stable Diffusion, as that would save about a minute between each generation.
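For reference, the CLI already keeps the pipeline alive within a single process (the cli.py snippet in the "linux problem" traceback earlier on this page checks last_model_path before rebuilding), so the saving mostly applies to external long-running callers. A sketch of the same idea; create_pipeline's keyword arguments are inferred from that traceback and may not match the current code exactly.

# Sketch only: cache the heavy pipeline object in a long-running process and
# reuse it while the model path stays the same.
from animatediff.generate import create_pipeline

_pipeline_cache = {}

def get_cached_pipeline(base_model_path, model_config, infer_config):
    key = str(base_model_path.resolve())  # base_model_path is a pathlib.Path
    if key not in _pipeline_cache:
        _pipeline_cache[key] = create_pipeline(
            base_model=base_model_path,
            model_config=model_config,
            infer_config=infer_config,
        )
    return _pipeline_cache[key]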

Crash on loading model

Every time I get to the start of loading models, it stops at 50% and crashes Termux. I don't know why this happens; I can't get past this part. It works fine on my laptop, which has 16 GB of RAM, but my phone has 12 GB and only 10602 MB of it is readable.

Can we use mm_sd_v15_v2?

I managed to get it set up and running on Colab. I did change the motion module to mm_sd_v15_v2.ckpt.
But once I try to execute the ToonYou script, the process terminates at "Using generation config".
This does not happen when I am using mm_sd_v15.ckpt.

IP Adapter Support

Hello, this animatediff implementation is great. Does it also support IP Adapter?

Absolute paths aren't working

In the prompt JSON I have:
"path": "c:/StableDiffusion/stable-diffusion-webui/models/Stable-diffusion/realisticVisionV50_v40VAE.safetensors",
After running I get:

ValueError:
'c:\\StableDiffusion\\stable-diffusion-webui\\models\\Stable-diffusion\\realisticVisionV50_v40VAE.safetensors' is not in
the subpath of 'C:\\sd\\animatediff-cli' OR one path is relative and the other is absolute.

I can't for the life of me figure out how to correctly specify an absolute path to a model. It used to work in the previous versions.

Another note: it would be great if, in this case, backslashes could be treated as part of the path rather than as escape characters. Copying paths in Windows gives you backslashes, and you have to change them to forward slashes all the time.
