
adanerf's People

Contributors

thomasneff


adanerf's Issues

Potential for an AMD ROCm port?

Hi! Just stumbled upon this and I'm incredibly impressed! I only have an AMD GPU at the moment, and I'm curious how feasible it would be to support ROCm as a backend. Torch has had support for it since March 2021: https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/

I notice that there's only one file that actually requires CUDA: https://github.com/thomasneff/AdaNeRF/blob/c6a64b8433d11684eb6b397cbc4666653d8018ae/src/native/disc_depth_multiclass_cuda.cu. Do you know how easy it would be to port this to generic PyTorch code so that it could be used across different backends?
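
To illustrate what I mean by generic code, something along these lines would run on any backend torch supports. This is only a guess at the kind of computation involved, not the actual logic of the CUDA file:

```python
import torch
import torch.nn.functional as F

def discretize_depth_multiclass(depth, z_vals):
    """Hypothetical backend-agnostic stand-in for a custom CUDA op.

    Turns a per-ray ground-truth depth into a one-hot multi-class target
    over depth bins using only standard torch ops, so it runs on CUDA,
    ROCm, or CPU alike. (Illustrative only; it may not match what
    disc_depth_multiclass_cuda.cu actually computes.)

    depth:  (n_rays,)          ground-truth depth per ray
    z_vals: (n_rays, n_bins)   depth bin centers along each ray
    """
    # index of the bin whose center is closest to the ground-truth depth
    bin_idx = torch.argmin((z_vals - depth.unsqueeze(-1)).abs(), dim=-1)
    # one-hot multi-class target, built from generic tensor ops
    return F.one_hot(bin_idx, num_classes=z_vals.shape[-1]).float()
```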

Thanks!

About epochs

Hi! I appreciate your awesome work.

However, I have a question about epochs, since you create a new data loader.

Does an epoch in your code mean the same thing as an iteration in traditional NeRF code? For example, do 300,000 epochs in your code equal 300,000 iterations in traditional NeRF code?

Or does one epoch correspond to a full pass over the entire dataset, as in traditional deep learning code?
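
Just to make the distinction explicit, here is how I understand the two conventions (placeholder code only, not from your repository):

```python
# Placeholder helpers, only to illustrate the two conventions.
def sample_random_rays(dataset, batch_size):
    return None

def train_step(batch):
    pass

dataset, data_loader = [], []

# (a) "Iteration" counting, common in NeRF codebases:
#     every step draws one random batch of rays from the whole dataset.
for iteration in range(300_000):
    train_step(sample_random_rays(dataset, batch_size=1024))

# (b) Classic "epoch" counting:
#     one epoch is a full pass over the entire dataset.
for epoch in range(300):
    for batch in data_loader:
        train_step(batch)
```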

X Error of failed request: GLXBadFBConfig

Hello, thanks for your work and selfless sharing. It fails when I run ./adanerf:
X Error of failed request: GLXBadFBConfig
Major opcode of failed request: 146 (GLX)
Minor opcode of failed request: 0 ()
Serial number of failed request: 30
Current serial number in output stream: 30
Do you know how to solve it? Thanks so much.

Generate more images!

Thanks for your wonderful work! Since the scenes in the DONeRF dataset are pretty large, I wonder if you could share the generation code for the DONeRF dataset so that I can apply AdaNeRF to more positions.

Fine-tuning stage process and adaptive sample filtering

Hello,

I'm implementing your method in pure PyTorch, and it works up to the fine-tuning stage, including the sample importance learning.

However, I have some additional questions about adaptive sampling and the fine-tuning stage.

Could you let me know where exactly the adaptive sampling happens in the fine-tuning stage?

I implemented the adaptive sampling based on the learned sample importance using a top-k algorithm, after masking the importance values that exceed the adaptive threshold.

Because of the batch-wise data format, the algorithm I designed sets the rest of the importance values to zero in the following cases you mentioned in the paper:
[image: excerpt from the paper]

In addition, I'm confused about the actual meaning of this sentence in the paper (Section 3.2, Fine-tuning):
Note that this phase results in separate shading networks for each maximum sample count, while all rely on the same sampling network.

However, it does not work, and I'm still having a hard time fixing it.
Could you explain this point in detail?

(I'm attaching my implementation code to help explain my implementation.)
[image: screenshot of my implementation code]
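
In text form, the thresholded top-k selection I mean looks roughly like this (a simplified sketch; tensor names and shapes are placeholders, not the actual AdaNeRF code):

```python
import torch

def select_adaptive_samples(importance, threshold, max_samples):
    """Sketch of thresholded top-k sample selection per ray.

    importance:  (n_rays, n_samples) sample importance predicted by the
                 sampling network (names/shapes are assumptions).
    threshold:   adaptive importance threshold.
    max_samples: maximum sample count (k), assumed <= n_samples.

    Returns a boolean mask of the kept samples per ray; everything below
    the threshold or beyond the top-k budget is discarded (i.e. its
    importance is effectively set to zero).
    """
    # zero out samples whose importance does not exceed the threshold
    gated = torch.where(importance > threshold, importance,
                        torch.zeros_like(importance))
    # keep at most `max_samples` samples per ray, picked by importance
    topk_vals, topk_idx = torch.topk(gated, k=max_samples, dim=-1)
    mask = torch.zeros_like(importance, dtype=torch.bool)
    # discard padded zero entries when fewer than k samples pass the threshold
    mask.scatter_(-1, topk_idx, topk_vals > 0)
    return mask
```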

Image output quality when using Plenoxel with the DONeRF Dataset

I have been closely following your work, especially the experimental section and Supplementary Material of your paper. In your research, I noticed that you have applied the Plenoxel method to the DONeRF dataset. Inspired by your work, I recently attempted to use the Plenoxel method to train on the DONeRF dataset as well. However, I encountered an issue where the output images were incomplete and exhibited some displacement.

Through some experimentation, I found that adjusting the scene_center and scene_radius parameters in the Plenoxel code helped mitigate the image displacement issue. However, this requires setting a unique center and radius for each scene, which is not ideal. Moreover, I am struggling to determine the accurate values for the scene_center and scene_radius for each scene.
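
To make the question concrete, this is the kind of per-scene estimate I mean. It is just an illustrative guess based on the camera poses, not something from the paper or the Plenoxel code, and I don't know whether it is the right approach:

```python
import numpy as np

def estimate_center_and_radius(c2w_poses):
    """Rough heuristic (an assumption, not the authors' procedure):
    derive scene_center / scene_radius from the camera positions,
    i.e. the translation column of each camera-to-world matrix.

    c2w_poses: (n_views, 4, 4) camera-to-world matrices.
    """
    cam_positions = c2w_poses[:, :3, 3]        # (n_views, 3) camera origins
    center = cam_positions.mean(axis=0)        # centroid of the cameras
    radius = np.linalg.norm(cam_positions - center, axis=1).max()
    return center, radius
```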

Could you please share how you successfully trained the DONeRF dataset using the Plenoxel method? Did you adjust the scene_center and scene_radius, or did you employ a different strategy? Any insights or suggestions you could provide would be greatly appreciated.

Thank you for your time; I'm looking forward to your response.

Failed to train LLFF dataset

Hi! Thanks for your great work!
I'm having a hard time training AdaNeRF with the LLFF dataset (fern).

At first it gave me pretty nice RGB and depth images, but from epoch 50k it loses the fern's nearest leaves and produces bulky images.
I ran convert_llff.py with factor=8 and used the same config file, 'dense_training.ini'.

If you used a different configuration for the LLFF dataset, could you share your config file?
Or could you share any tips for training?
