
Comments (6)

feiyangsuo commented on July 4, 2024

I believe they are talking about the default gradio setup. It's capped at 128 frames.
I'm assuming you can bypass that by editing the gradio .py file?

Yes, you are right. In the gradio setup, the max video length is 128. You can edit the code for a longer generation.

Is it fixed at 128 with pose2vid too? I seem to get errors when I increase it past that.
Edit: I was setting it longer than the pose video.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [498,0,0], thread: [95,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.
RuntimeError: CUDA error: device-side assert triggered

Replacing every args.L with min(len(pose_images), args.L) in pose2vid.py could fix this problem.
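
For anyone trying this, a minimal sketch of the idea, assuming pose2vid.py holds the pose frames in a list called pose_images and the requested length in args.L (names taken from the suggestion above; the surrounding script is elided):

```python
def clamp_length(requested_length: int, pose_images: list) -> int:
    """Clamp the requested frame count (args.L) to the number of pose
    frames actually available, so indexing never runs out of bounds."""
    return min(len(pose_images), requested_length)

# Usage sketch: wherever pose2vid.py uses args.L, use the clamped value instead.
# video_length = clamp_length(args.L, pose_images)
# pose_images = pose_images[:video_length]
```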


lixunsong commented on July 4, 2024

Hello @michaeltran33, what error message do you get when exceeding a length of 128? We do not limit the generation length, as long as the pose video has enough frames.


inferno46n2 commented on July 4, 2024

I believe they are talking about the default gradio setup. It's capped at 128 frames.

I'm assuming you can bypass that by editing the gradio .py file?


lixunsong commented on July 4, 2024

I believe they are talking about the default gradio setup. It's capped at 128 frames.

I'm assuming you can bypass that by editing the gradio .py file?

Yes, you are right. In the gradio setup, the max video length is 128. You can edit the code for a longer generation.
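
A hedged sketch of what that edit might look like, assuming the demo's length control is a Gradio slider capped at 128 (the actual variable and label names in the demo script are assumptions):

```python
import gradio as gr

# Hypothetical length control for the demo UI; the real script may differ.
# Raising `maximum` lifts the 128-frame cap in the interface, but the pose
# video still needs at least as many frames as you request.
length_slider = gr.Slider(
    minimum=1,
    maximum=300,   # default setup caps this at 128
    value=128,
    step=1,
    label="Video length (frames)",
)
```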


G-force78 commented on July 4, 2024

I believe they are talking about the default gradio setup. It's capped at 128 frames.
I'm assuming you can bypass that by editing the gradio .py file?

Yes, you are right. In the gradio setup, the max video length is 128. You can edit the code for a longer generation.

Is it fixed at 128 with pose2vid too? I seem to get errors when I increase it past that.
Edit: I was setting it longer than the pose video.
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [498,0,0], thread: [95,0,0] Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.

Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/content/Moore-AnimateAnyone/scripts/pose2vid.py", line 167, in
main()
File "/content/Moore-AnimateAnyone/scripts/pose2vid.py", line 146, in main
video = pipe(
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Moore-AnimateAnyone/src/pipelines/pipeline_pose2vid_long.py", line 525, in call
pred = self.denoising_unet(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Moore-AnimateAnyone/src/models/unet_3d.py", line 493, in forward
sample, res_samples = downsample_block(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Moore-AnimateAnyone/src/models/unet_3d_blocks.py", line 442, in forward
hidden_states = attn(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Moore-AnimateAnyone/src/models/transformer_3d.py", line 140, in forward
hidden_states = block(
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Moore-AnimateAnyone/src/models/mutual_self_attention.py", line 180, in hacked_basic_transformer_inner_forward
norm_hidden_states[_uc_mask],
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.


lixunsong commented on July 4, 2024

If your pose video has fewer than 128 frames, the error will also be raised.
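
One way to avoid that, sketched below with OpenCV (an assumption; any frame counter works), is to check how many frames the pose video provides before choosing the generation length:

```python
import cv2

def pose_frame_count(pose_video_path: str) -> int:
    """Count the frames in the driving pose video so the requested
    generation length can be kept at or below it."""
    cap = cv2.VideoCapture(pose_video_path)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    return n_frames

# Usage sketch: never request more frames than the pose video has.
# length = min(128, pose_frame_count("pose.mp4"))
```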

