ustc3dv / NDR-code
【NeurIPS 2022 Spotlight】Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
License: MIT License
License: MIT License
Thanks for your great work.
I wonder if you can also share the self-collected plant dataset? I am considering doing some 4D reconstruction experiments for agricultural applications. I think this plant dataset can be a good starting point for prototyping.
In NDR-code/pose_initialization/registrate.py, line 90 should be `if not os.path.exists(data_path+'intrinsics.txt'):`; otherwise the file fails to load.
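For reference, a minimal sketch of the corrected guard (`load_intrinsics` is a hypothetical helper, not the repo's actual function; the repo concatenates `data_path` with the filename directly, while the sketch uses `os.path.join`):

```python
import os

def load_intrinsics(data_path):
    # Hypothetical helper illustrating the fix: the guard must be
    # `if not os.path.exists(...)` so the error branch fires only
    # when the intrinsics file is actually absent.
    intr_file = os.path.join(data_path, 'intrinsics.txt')
    if not os.path.exists(intr_file):
        raise FileNotFoundError(intr_file)
    with open(intr_file) as f:
        # One row of the camera matrix per line, whitespace-separated.
        return [[float(v) for v in line.split()] for line in f if line.strip()]
```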
Thank you for sharing this wonderful work. I am trying to reproduce the results in Figure 1, but I have failed and am stuck on what to do next. Could you share the config used for the sample shown in Figure 1 of your paper?
I think the sample is from seq008. However, the provided config in this repo (120,000 iterations with a 5e-4 learning rate) produced underwhelming results, not as good as Figure 1. Note that I strictly followed the described preprocessing steps (deleting invalid points, saving as .xyz, and running step3.py to scale all valid points inside the unit ball), but I don't know whether the problem is in the preprocessing or whether the config needs tweaking, and I have no idea how to adjust it.
Below I show the meshes at the (0, 200, 400, 600, 800)-th frames and the RGB and normal validations.
https://user-images.githubusercontent.com/17172232/203296534-238b4cc6-fac8-4763-abbf-2c9366953b7e.mp4
https://user-images.githubusercontent.com/17172232/203297124-489f5358-2b4c-451e-befa-ba97ccea1e09.mp4
The link to tensorboard log: https://drive.google.com/drive/folders/1yQ7WlEi8q0ULSRnFyP58dPBgf_3g-gpE?usp=sharing
Please help me. Thank you very much.
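For comparison, this is roughly how I scale the cleaned points into the unit ball (a sketch of my understanding of the step3.py scaling step; `normalize_to_unit_ball` and the `margin` value are my assumptions, not the repo's exact code):

```python
import numpy as np

def normalize_to_unit_ball(points, margin=0.95):
    """Center a point cloud and scale it so every point lies strictly
    inside the unit ball. `margin` leaves headroom at the boundary
    (an assumption; the actual step3.py scaling may differ)."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    shifted = points - center
    radius = np.linalg.norm(shifted, axis=1).max()
    scaled = shifted * (margin / radius)
    return scaled, center, radius
```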
I'm trying to extract and visualize the canonical mesh. I saw that there is a function called validate_canonical_mesh which calls extract_canonical_geometry.
The query function for extract_canonical_geometry is `lambda pts: -self.sdf_network.sdf(pts, alpha_ratio)`, which is missing the topo_coord argument.
Is there a way to extract the canonical mesh? Is there a value of topo_coord I can pass to the SDF that won't change the canonical mesh?
Thanks!
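A possible workaround, assuming the SDF actually takes `(pts, topo_coord, alpha_ratio)`: wrap the query with a constant topology code so it can be called with points alone. The zero code is purely an assumption here; one could also reuse the topo code predicted for a reference frame.

```python
import torch

def make_canonical_query(sdf_network, topo_dim, alpha_ratio):
    """Hypothetical wrapper: hold the topology coordinate fixed so the
    canonical SDF can be queried by marching cubes with points alone."""
    def query(pts):
        # Assumed signature sdf(pts, topo_coord, alpha_ratio); a
        # constant topo code keeps the queried shape consistent.
        topo = torch.zeros(pts.shape[0], topo_dim, device=pts.device)
        return -sdf_network.sdf(pts, topo, alpha_ratio)
    return query
```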
Is it possible to also upload the preprocessed data for the remaining sequences of the KillingFusion dataset, e.g., "Alex", "Hat"?
I would really appreciate it.
Hi, thank you for the excellent work. I want to know the goal of pose_initialization. Is there an alternative to it?
Thank you very much.
Hi, thanks for sharing your inspiring work.
I found a slight difference between the implementation and the paper description. In the paper, the color_network takes the gradient at the canonical points as the input normal, but in the code the color_network seems to take the gradient at the observation points instead.
Lines 221 to 227 in f842e41
Lines 265 to 266 in f842e41
Any hint on this?
Thanks
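For context, a minimal sketch of how such a normal can be computed with autograd at either set of points (`sdf_fn` stands for any differentiable SDF; this is not the repo's exact code, just the generic pattern):

```python
import torch

def sdf_gradient(sdf_fn, pts):
    """Gradient of an SDF w.r.t. its own input points via autograd.
    Evaluating at canonical points would match the paper's description;
    evaluating at observation points matches the code as questioned."""
    pts = pts.detach().requires_grad_(True)
    sdf = sdf_fn(pts)
    grad = torch.autograd.grad(sdf.sum(), pts, create_graph=False)[0]
    return grad
```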
Thank you for sharing your wonderful work. I was able to run the KillingFusion sample with your instructions and am starting to move on to the DeepDeform dataset. However, as you know, DeepDeform does not provide masks for all frames (train and val have masks only for a few main frames, and test provides none). How did you create enough masks for the config file (./confs/ddeform_human.conf)? And what is the sequence id of the provided config sample (since DeepDeform follows the seq<id> format)?
Thank you for sharing the implementation. I have a question regarding the quality of the given camera poses.
It seems like the authors manually calculate the camera poses as below,
Then, the authors refine the camera poses by making them trainable parameters.
NDR-code/confs/kfusion_toy.conf
Line 23 in f842e41
When I naively visualize the point cloud from the original dataset, the result looks as below.
I personally guess that the authors intentionally fit the camera poses in an object-centric manner.
Any comment will be welcomed.
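For readers unfamiliar with the refinement step, here is a generic sketch of trainable per-frame pose deltas applied on top of initial camera-to-world matrices. `PoseRefiner`, `so3_exp`, and the axis-angle parameterization are my assumptions for illustration, not necessarily how NDR implements it:

```python
import torch

def skew(v):
    """3-vector -> 3x3 skew-symmetric matrix, built differentiably."""
    zero = torch.zeros((), dtype=v.dtype)
    return torch.stack([
        torch.stack([zero, -v[2], v[1]]),
        torch.stack([v[2], zero, -v[0]]),
        torch.stack([-v[1], v[0], zero]),
    ])

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3,3)."""
    theta = w.norm()
    if theta < 1e-8:
        return torch.eye(3) + skew(w)  # first-order approximation
    k = w / theta
    K = skew(k)
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

class PoseRefiner(torch.nn.Module):
    """Hypothetical sketch: learn a small correction (rotation + translation)
    per frame on top of a fixed initial camera-to-world pose."""
    def __init__(self, init_c2w):
        super().__init__()
        self.register_buffer('init_c2w', init_c2w)       # (N, 4, 4)
        n = init_c2w.shape[0]
        self.dw = torch.nn.Parameter(torch.zeros(n, 3))  # rotation delta
        self.dt = torch.nn.Parameter(torch.zeros(n, 3))  # translation delta

    def forward(self, i):
        delta = torch.eye(4)
        delta[:3, :3] = so3_exp(self.dw[i])
        delta[:3, 3] = self.dt[i]
        return delta @ self.init_c2w[i]
```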
Thanks for the great work.
I am having trouble generating a .ply file when I run this command:
python geo_render.py ./datasets/kfusion_frog/ ./exp/kfusion_frog/result/ 120000
RuntimeError: File could not be read: ./exp/kfusion_frog/result/validations_meshes/00120000_0.ply
I'd like to know what the solution is.
Thanks.
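While waiting for an answer, a quick way to check whether the expected mesh was ever written before rendering (`find_latest_mesh` is a hypothetical helper; the filename pattern is taken from the error message):

```python
import glob
import os

def find_latest_mesh(result_dir, iteration):
    """Look for validation meshes matching the pattern geo_render.py
    expects; a clearer error than the raw RuntimeError when the
    mesh-extraction step never produced the file."""
    pattern = os.path.join(result_dir, 'validations_meshes',
                           '{:0>8d}_*.ply'.format(iteration))
    matches = sorted(glob.glob(pattern))
    if not matches:
        raise FileNotFoundError('no mesh matching ' + pattern)
    return matches[-1]
```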
In the forward function of SDFNetwork, I printed out the range of the sampled points. Why do the coordinates of the input canonical points have values smaller than -4 and larger than 4?
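To make the check concrete, a small sketch of how the range can be summarized (`report_point_range` is a hypothetical helper; `bound=1.0` assumes the unit-ball convention from the preprocessing):

```python
import numpy as np

def report_point_range(pts, bound=1.0):
    """Debugging sketch: return the coordinate min/max of sampled points
    and how many coordinates leave the expected bound. Note deformed
    canonical points need not stay inside the observation-space ball."""
    lo, hi = pts.min(), pts.max()
    outside = np.count_nonzero(np.abs(pts) > bound)
    return float(lo), float(hi), int(outside)
```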
Thank you for sharing the implementation.
I tried to evaluate NDR on the duck scene. However, it seems that the training data and evaluation data are identical.
How did you split train/eval?