
ndr-code's People

Contributors

rainbowrui, ustc3dv


ndr-code's Issues

Self-collected plant dataset

Thanks for your great work.
I wonder if you can also share the self-collected plant dataset? I am considering doing some 4D reconstruction experiments for agricultural applications. I think this plant dataset can be a good starting point for prototyping.

Found a bug

Line 90 of NDR-code/pose_initialization/registrate.py should be if not os.path.exists(data_path+'intrinsics.txt'): (note the not); without this, the file fails to load.
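A minimal sketch of the corrected guard (load_intrinsics and the error message here are illustrative, not the repository's actual code):

```python
import os

def load_intrinsics(data_path):
    """Illustrative version of the corrected check. Without the `not`,
    the error branch fires exactly when the file *does* exist, so the
    intrinsics are never loaded."""
    path = data_path + 'intrinsics.txt'
    if not os.path.exists(path):  # the corrected condition from line 90
        raise RuntimeError('missing camera intrinsics: ' + path)
    # parse a whitespace-separated matrix, one row per line
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]
```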

About the result of seq008 in the DeepDeform test set

Thank you for sharing this wonderful work. I am trying to reproduce the results in Figure 1, but I failed and am stuck on what to do next. I want to ask about the config used for the Figure 1 sample in your paper.

I think the sample is from seq008. However, using the config provided in this repo (120,000 iterations with a 5e-4 learning rate) produced underwhelming results, not as good as Figure 1. Note that I strictly followed the described preprocessing steps (deleting invalid points, saving as xyz, and scaling with step3.py to keep the valid points inside the unit ball), but I don't know whether the problem is in preprocessing. Maybe the config can be tweaked, but I have no idea how.

Below I show the meshes at the (0, 200, 400, 600, 800)-th frames and the validation RGB and normal renderings.
https://user-images.githubusercontent.com/17172232/203296534-238b4cc6-fac8-4763-abbf-2c9366953b7e.mp4
https://user-images.githubusercontent.com/17172232/203297124-489f5358-2b4c-451e-befa-ba97ccea1e09.mp4

The link to tensorboard log: https://drive.google.com/drive/folders/1yQ7WlEi8q0ULSRnFyP58dPBgf_3g-gpE?usp=sharing
Please help me. Thank you very much.

The extract_canonical_geometry function is missing an argument

I'm trying to extract and visualize the canonical mesh. I saw that there is a function called validate_canonical_mesh which calls extract_canonical_geometry.
The query function passed to extract_canonical_geometry is lambda pts: -self.sdf_network.sdf(pts, alpha_ratio), and it is missing the topo_coord argument.

Is there a way to extract the canonical mesh? Are there values of topo_coord I can insert to the sdf which won't change the canonical mesh?

Thanks!
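For what it's worth, the missing argument can be supplied by baking a fixed topology code into the query closure. Below is a minimal sketch with a stub network (StubSDF, topo_dim, and the argument order of sdf are assumptions; the real signature lives in models/fields.py):

```python
import torch

# Stub standing in for NDR's topology-aware SDF network; the class,
# topo_dim, and argument order are assumptions, not the repo's code.
class StubSDF(torch.nn.Module):
    def __init__(self, topo_dim=8):
        super().__init__()
        self.topo_dim = topo_dim
        self.lin = torch.nn.Linear(3 + topo_dim, 1)

    def sdf(self, pts, topo_coord, alpha_ratio):
        return self.lin(torch.cat([pts, topo_coord], dim=-1))

sdf_network = StubSDF()
alpha_ratio = 1.0

# Bake a constant topology code (here zeros) into the closure so the
# marching-cubes query only varies the spatial coordinates. Zeros is just
# a guess: any fixed code extracts one slice of the learned hyper-space,
# and which code matches the "canonical" mesh depends on training.
query = lambda pts: -sdf_network.sdf(
    pts, torch.zeros(pts.shape[0], sdf_network.topo_dim), alpha_ratio)

vals = query(torch.rand(16, 3))
```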

Goal of pose_initialization

Hi, thank you for the excellent work. I want to know the goal of pose_initialization. Does it have an alternative?
Thank you very much.

Implementation difference from the paper

Hi, thanks for sharing your inspiring work.

I found a slight difference between the implementation and the paper description.

In the paper, the color_network takes the gradient at the canonical points as the input normal.

However, it seems that in the code, the color_network takes the gradient at the observation points as input.

NDR-code/models/renderer.py

Lines 221 to 227 in f842e41

gradient_o = torch.autograd.grad(
    outputs=y,
    inputs=x,
    grad_outputs=d_output,
    create_graph=True,
    retain_graph=True,
    only_inputs=True)[0]

NDR-code/models/renderer.py

Lines 265 to 266 in f842e41

sampled_color = color_network(appearance_code, pts_canonical, gradients_o,
                              dirs_c, feature_vector, alpha_ratio).reshape(batch_size, n_samples, 3)

Any hint on this?
Thanks
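For reference, the two gradient choices can be reproduced side by side with tiny stand-in networks (deform and sdf_head below are hypothetical linear placeholders, not NDR's real modules):

```python
import torch

torch.manual_seed(0)
deform = torch.nn.Linear(3, 3)    # stand-in: observation -> canonical
sdf_head = torch.nn.Linear(3, 1)  # stand-in: canonical -> SDF value

x = torch.rand(8, 3, requires_grad=True)  # observation-space points
pts_canonical = deform(x)
y = sdf_head(pts_canonical)
d_output = torch.ones_like(y)

# What renderer.py (lines 221-227) computes: gradient w.r.t. observation points.
gradient_o = torch.autograd.grad(
    outputs=y, inputs=x, grad_outputs=d_output,
    create_graph=True, retain_graph=True, only_inputs=True)[0]

# What the paper describes: gradient w.r.t. the canonical points.
gradient_c = torch.autograd.grad(
    outputs=y, inputs=pts_canonical, grad_outputs=d_output,
    retain_graph=True, only_inputs=True)[0]

# By the chain rule the two differ by the deformation Jacobian,
# gradient_o = gradient_c @ J_deform, so they coincide only when the
# Jacobian of the deformation is the identity.
```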

About DeepDeform dataset preprocessing

Thank you for sharing your wonderful work. I was able to run the KillingFusion sample with your instructions and am starting to move on to the DeepDeform dataset. However, as you know, the DeepDeform dataset does not provide masks for all frames (train and val have masks for only a few main frames, and test provides none). So how did you create enough masks for the config file (./confs/ddeform_human.conf)? And what is the id of the provided config sample (since DeepDeform follows the seq<id> format)?

Quality of the camera poses

Thank you for sharing the implementation. I have a question regarding the quality of the given camera poses.
It seems that the authors manually calculate the camera poses.

Then, the authors refine the camera poses by making them trainable parameters.

camera_trainable = True

Is there any reason behind this kind of system setup?

When I naively visualize the point cloud from the original dataset, the result does not look object-centric.

I personally guess that the authors intentionally fit the camera poses in an object-centric manner.
Any comment will be welcomed.

ply file not found

Thanks for the great work.

I get the following error when I run this command:
python geo_render.py ./datasets/kfusion_frog/ ./exp/kfusion_frog/result/ 120000

RuntimeError: File could not be read: ./exp/kfusion_frog/result/validations_meshes/00120000_0.ply

I'd like to know what the solution is.

Thanks.
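A quick sanity check before digging further is to list what was actually exported (list_exported_meshes is a hypothetical helper; the directory layout is taken from the error message above). If the list is empty or stops before 00120000_0.ply, training either stopped early or validated meshes at a different interval:

```python
import glob
import os

def list_exported_meshes(result_dir):
    """Return the .ply files actually written under validations_meshes,
    sorted by name (which sorts by iteration for zero-padded filenames)."""
    pattern = os.path.join(result_dir, 'validations_meshes', '*.ply')
    return sorted(glob.glob(pattern))

meshes = list_exported_meshes('./exp/kfusion_frog/result/')
print(meshes[-5:] if meshes else 'no meshes exported yet')
```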

Question about the range of sampled points

I printed out the range of sampled points in the forward function of SDFNetwork. Why do the coordinates of the input canonical points have values smaller than -4 and larger than 4?

Evaluation on the training scenes?

Thank you for sharing the implementation.

I tried to evaluate NDR on the duck scene. However, it seems that the training data and evaluation data are identical.
How did you split train/eval?
