
Comments (4)

pixeli99 commented on July 23, 2024

Thank you again for your explanation~


ExponentialML commented on July 23, 2024

Hi, @ExponentialML

This is a really useful repo, but I have a question. In the VideoFusion paper, it seems they decouple the denoising process into base noise and residual noise. However, I couldn't find this in the diffusers pipeline, which confuses me. Is this a completely new version?

Hey! In all honesty, I can't verify that the pipeline from ModelScope's repository matches the paper verbatim. I only implemented this repository after the initial release of the ModelScope video diffusion model (referencing showlab's Tune-A-Video repository for training).

Also, I haven't yet referenced any of the paper's implementations, but I loosely follow others in the same field. When I was asked which paper this was referencing, I found the paper (assuming you found it on paperswithcode) and linked it as the closest candidate.

In terms of base noise along with a residual, I don't think this would be too difficult to implement. I'll give it a go as a side experiment (it's not on my to-do list at the moment), but in the meantime, if you'd like to, it would be great to open a PR and attempt to implement it.
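
For context, a minimal sketch of what that decomposition could look like (this is my reading of the VideoFusion idea, not code from this repo; the function name, the `alpha` weighting, and the tensor shapes are all assumptions):

```python
import torch

def decomposed_noise(batch, channels, frames, height, width, alpha=0.5):
    """Sample diffusion noise as a shared base component plus per-frame residuals.

    Rough reading of the VideoFusion idea: every frame shares one base noise
    map, and each frame also gets its own residual noise. `alpha` (assumed
    here) weights the two; the sqrt weights keep the result unit-variance.
    """
    # Base noise: a single map broadcast across all frames.
    base = torch.randn(batch, channels, 1, height, width)
    # Residual noise: sampled independently per frame.
    residual = torch.randn(batch, channels, frames, height, width)
    return (alpha ** 0.5) * base + ((1 - alpha) ** 0.5) * residual

noise = decomposed_noise(batch=1, channels=4, frames=16, height=32, width=32)
print(noise.shape)  # torch.Size([1, 4, 16, 32, 32])
```

The shared base component is what gives frames their common structure, while the residual carries the per-frame variation; as I understand the paper, the two components are then predicted by separate networks.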


pixeli99 commented on July 23, 2024

So for this repo, to achieve video generation, it just adds temporal layers in the UNet, right? (Perhaps that's not a rigorous way to put it, but it's roughly what I see from the code.) The reason I raised this issue is just to confirm whether I missed any code details.

Regarding the implementation of base noise and residual noise denoising you mentioned, I'd be happy to submit a PR for it. I'll implement it soon.


ExponentialML commented on July 23, 2024

That's the correct assumption. Loosely, there are temporal layers in the form of:

`(b h w) f c` — batch, height, width, frames, channels

Then, a second self-attention is used in place of the cross-attention layer (which takes the text input in the majority of cases for CrossAttention) before the feed-forward. 3D temporal convolution layers are also added.
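
A rough sketch of that reshaping plus temporal self-attention (illustrative only; the class name and dims are mine, not the repo's actual modules, and I'm using `einops` with a stock `nn.MultiheadAttention` rather than diffusers' attention blocks):

```python
import torch
import torch.nn as nn
from einops import rearrange

class TemporalSelfAttention(nn.Module):
    """Illustrative temporal attention: fold the spatial dims into the
    batch so each spatial position attends across its own frames."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        b, c, f, h, w = x.shape
        # (b h w) f c -- spatial positions move into the batch, attention runs over frames
        x = rearrange(x, "b c f h w -> (b h w) f c")
        out, _ = self.attn(x, x, x)
        return rearrange(out, "(b h w) f c -> b c f h w", b=b, h=h, w=w)

x = torch.randn(1, 64, 16, 8, 8)           # batch, channels, frames, height, width
print(TemporalSelfAttention(64)(x).shape)  # torch.Size([1, 64, 16, 8, 8])
```

The 3D temporal convolutions follow a similar pattern, except they mix adjacent frames with an `nn.Conv3d` kernel instead of attention.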

Thanks for being willing to submit the PR! Looking forward to it.

