Comments (14)

ken-ouyang commented on July 23, 2024

As noted in the Discussion section of our paper, the current method may not perform optimally for long sequences that involve significant deformation. This is because such sequences might require multiple canonical images, a feature that has not been implemented in the current version of our method. The default parameters are also designed for around 100 video frames.

For shorter video clips, however, our method should produce satisfactory results with proper parameters (e.g., annealed step, MLP size, and so on). This is demonstrated as follows:

issue_translated.mp4

We are actively working on enhancing the method to handle longer sequences and larger deformations, especially for humans. Please stay tuned.
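For concreteness, the "annealed step" mentioned above refers to a coarse-to-fine schedule that gradually enables higher positional-encoding frequency bands during training. Below is a minimal PyTorch sketch of such a schedule; the names (`annealed_pe_weights`, `n_freqs`, `annealed_steps`) are illustrative, not the repository's actual API.

```python
import math
import torch

def annealed_pe_weights(step: int, n_freqs: int, annealed_steps: int) -> torch.Tensor:
    """Coarse-to-fine weights for positional-encoding frequency bands.

    Early in training only low frequencies are active; higher bands are
    faded in smoothly as `step` approaches `annealed_steps`.
    """
    alpha = n_freqs * min(step, annealed_steps) / annealed_steps
    bands = torch.arange(n_freqs, dtype=torch.float32)
    # w_j = 0.5 * (1 - cos(pi * clamp(alpha - j, 0, 1)))
    return 0.5 * (1.0 - torch.cos(math.pi * torch.clamp(alpha - bands, 0.0, 1.0)))

def annealed_encode(x: torch.Tensor, step: int, n_freqs: int = 8,
                    annealed_steps: int = 4000) -> torch.Tensor:
    """Sinusoidal positional encoding with annealed bands. x: (..., D) in [-1, 1]."""
    w = annealed_pe_weights(step, n_freqs, annealed_steps).to(x.device)   # (n_freqs,)
    freqs = 2.0 ** torch.arange(n_freqs, dtype=x.dtype, device=x.device)  # (n_freqs,)
    xb = x[..., None, :] * freqs[:, None]                                 # (..., n_freqs, D)
    enc = torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1)               # (..., n_freqs, 2D)
    return (w[:, None] * enc).flatten(-2)                                 # (..., n_freqs * 2D)
```

Increasing `annealed_steps` keeps the high-frequency bands switched off for longer, which tends to stabilize fitting when motion is large.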

ken-ouyang commented on July 23, 2024

It appears that the reconstruction of the foreground object is not as expected (which is strange). I would like to provide a few suggestions that could potentially address this issue:

  1. Consider using grouped deformation fields, such as the approach used in Sam-Track, to initially segment the object. This might lead to better isolation and, therefore, improved reconstruction of the foreground object (see the sketch after this list).
  2. Another option could be to increase the annealing step. This might allow for a more accurate and detailed reconstruction by gradually refining the model's approximation.
  3. For validation purposes, starting with a shorter video clip might be beneficial.

It's also worth considering that the motion in the video clip may be too rapid for the temporal grid to accurately capture it.
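To make suggestion 1 concrete, here is a minimal sketch of the grouped idea: per-frame masks (e.g. exported from Sam-Track) split each frame into a foreground and a background layer, so each group can be fitted with its own canonical image and deformation field. The function name and array layout below are assumptions for illustration, not the repository's actual pipeline.

```python
import numpy as np

def split_into_groups(frames: np.ndarray, masks: np.ndarray):
    """Split a video into foreground/background layers using per-frame masks.

    frames: (T, H, W, 3) floats in [0, 1]
    masks:  (T, H, W)    floats in [0, 1], e.g. exported from Sam-Track
    Returns two (T, H, W, 4) RGBA stacks; the alpha channel tells each
    group's deformation field which pixels it is responsible for.
    """
    alpha_fg = masks[..., None]        # (T, H, W, 1)
    alpha_bg = 1.0 - alpha_fg
    fg = np.concatenate([frames * alpha_fg, alpha_fg], axis=-1)
    bg = np.concatenate([frames * alpha_bg, alpha_bg], axis=-1)
    return fg, bg

# Usage sketch: fit one canonical image + deformation field per group,
# then composite the per-group reconstructions back with the same alphas.
```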

ken-ouyang commented on July 23, 2024

@LyazS Yes. There are different designs for using multiple canonical spaces, such as HyperNeRF.
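As a rough illustration of one such design (simpler than HyperNeRF's learned ambient coordinates), each frame could be assigned to one of K canonical images by uniform temporal chunking. The helper below is purely hypothetical and not part of the released code.

```python
def canonical_index(frame_idx: int, num_frames: int, num_canonicals: int) -> int:
    """Map a frame to one of `num_canonicals` canonical spaces by uniform
    temporal chunking, so each chunk gets its own canonical image and
    deformation field. HyperNeRF instead learns a per-frame ambient code,
    which lets the canonical content vary smoothly rather than in chunks."""
    chunk = num_frames / num_canonicals
    return min(int(frame_idx / chunk), num_canonicals - 1)
```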

ken-ouyang commented on July 23, 2024

How is the reconstruction quality? Could you share a link to the original videos?

AbyssBadger0 commented on July 23, 2024

This is the original video:
https://github.com/qiuyu96/CoDeF/assets/125934639/e3349b8b-2551-4273-8def-2dc8479ba589
This is the reconstruction:
https://github.com/qiuyu96/CoDeF/assets/125934639/a4f94bbf-2ddc-4b5a-9197-2a33a9844fc0

AbyssBadger0 commented on July 23, 2024

I tried a video with slightly smaller character movements, and the result was better than before, but there are always these floating textures that I cannot identify.
ControlNet uses lineart and openpose.
(attached image: 04844-3224275959-bcxyzw (style), lora_bcxyzw_0 5)
I compressed the video in order to upload it.

8.23.mp4

LyazS commented on July 23, 2024

Is there any way to add multiple canonical images?

xpeng commented on July 23, 2024

If I use flow, must I add flow_dir to the config file?
I found the reconstruction result is not much different whether I run python train.py with or without '--flow_dir'.

ken-ouyang commented on July 23, 2024

@xpeng The flow is optional for training. The image quality with or without flow is quite similar. But the video reconstructed with flow contains less flickering.
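For intuition on why flow mainly helps temporal stability: a flow-guided term warps the reconstruction of frame t+1 back to frame t and penalizes the difference, pulling corresponding pixels toward the same canonical content. The sketch below is a generic PyTorch version of such a loss, assuming backward warping with per-pixel flow and a validity mask; it is not necessarily the exact loss implemented in the repository.

```python
import torch
import torch.nn.functional as F

def flow_warp(img: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp img (B, C, H, W) using flow (B, 2, H, W) given in pixels."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid_x = xs[None] + flow[:, 0]   # (B, H, W) sample positions in the source frame
    grid_y = ys[None] + flow[:, 1]
    # Normalize sample positions to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        [2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0], dim=-1
    )
    return F.grid_sample(img, grid, align_corners=True)

def flow_consistency_loss(recon_t, recon_t1, flow_t_to_t1, valid_mask):
    """Penalize disagreement between frame t and frame t+1 warped back to t.
    valid_mask masks out occluded pixels (e.g. from a forward-backward check)."""
    warped = flow_warp(recon_t1, flow_t_to_t1)
    return ((warped - recon_t).abs() * valid_mask).mean()
```

This matches the observation above: per-frame image quality barely changes, but the temporal term is what reduces flickering in the reconstructed video.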

xpeng commented on July 23, 2024

@xpeng The flow is optional for training. The image quality with or without flow is quite similar. But the video reconstructed with flow contains less flickering.

Thanks for your attention; I will experiment more.

AbyssBadger0 commented on July 23, 2024

Thank you for your reply. I will keep following the project and trying!

zhanghongyong123456 commented on July 23, 2024

As noted in the Discussion section of our paper, the current method may not perform optimally for long sequences that involve significant deformation. This is because such sequences might require multiple canonical images, a feature that has not been implemented in the current version of our method. The default parameters are also designed for around 100 video frames.

For shorter video clips, however, our method should produce satisfactory results with proper parameters (e.g., annealed step, MLP size, and so on). This is demonstrated as follows:

issue_translated.mp4
We are actively working on enhancing the method to handle longer sequences and larger deformations, especially for humans. Please stay tuned.

About MLP size: where is the MLP size set? I could not find it. Is it set here, and which parameter should be set?
(screenshot attached)

ken-ouyang commented on July 23, 2024

@zhanghongyong123456 The MLP size for the hash encoding is in config.json. The hyperparameter here is for positional encoding (used in case hash encoding is not adopted).
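For readers looking for the exact knobs: assuming a tiny-cuda-nn-style config.json, which is common for hash-grid setups, the MLP size is typically controlled by keys such as "n_neurons" and "n_hidden_layers" under the network section. The key names below are an assumption about that style of config, not a guarantee of this repository's schema.

```python
import json

# Hypothetical inspection of a tiny-cuda-nn-style config; the key names
# ("network", "n_neurons", "n_hidden_layers") are common in such configs
# but may differ in this repository's config.json.
with open("config.json") as f:
    cfg = json.load(f)

network = cfg.get("network", {})
print("MLP width :", network.get("n_neurons"))        # e.g. 64
print("MLP depth :", network.get("n_hidden_layers"))  # e.g. 2

# Widening or deepening this MLP increases capacity (and training time);
# the positional-encoding hyperparameter in the training script only
# matters when the hash encoding is not used.
```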

zhanghongyong123456 commented on July 23, 2024

@zhanghongyong123456 The MLP size for the hash encoding is in config.json. The hyperparameter here is for positional encoding (used in case hash encoding is not adopted).

How should I modify the MLP parameters to improve the reconstruction quality?
(screenshot attached)
