kilonerf's Issues

How to avoid running out of GPU resources

Hi,

When I run bash render_to_screen.sh, I get the following error:
GPUassert: too many resources requested for launch network_eval.cu 292
It looks like my GPU memory is not enough, but is there a config option to reduce the GPU memory usage?

--- full log ---

auto log path: logs/paper/finetune/Synthetic_NeRF_Lego
{'checkpoint_interval': 50000, 'chunk_size': 4000, 'distilled_cfg_path': 'cfgs/paper/distill/Synthetic_NeRF_Lego.yaml', 'distilled_checkpoint_path': 'logs/paper/distill/Synthetic_NeRF_Lego/checkpoint.pth', 'initial_learning_rate': 0.001, 'iterations': 1000000, 'l2_regularization_lambda': 1e-06, 'learing_rate_decay_rate': 500, 'no_batching': True, 'num_rays_per_batch': 8192, 'num_samples_per_ray': 384, 'occupancy_cfg_path': 'cfgs/paper/pretrain_occupancy/Synthetic_NeRF_Lego.yaml', 'occupancy_log_path': 'logs/paper/pretrain_occupancy/Synthetic_NeRF_Lego/occupancy.pth', 'perturb': 1.0, 'precrop_fraction': 0.5, 'precrop_iterations': 0, 'raw_noise_std': 0.0, 'render_only': False, 'no_color_sigmoid': False, 'render_test': True, 'render_factor': 0, 'testskip': 8, 'deepvoxels_shape': 'greek', 'blender_white_background': True, 'blender_half_res': False, 'llff_factor': 8, 'llff_no_ndc': False, 'llff_lindisp': False, 'llff_spherify': False, 'llff_hold': False, 'print_interval': 100, 'render_testset_interval': 10000, 'render_video_interval': 100000000, 'network_chunk_size': 65536, 'rng_seed': 0, 'use_same_initialization_for_all_networks': False, 'use_initialization_fix': False, 'num_importance_samples_per_ray': 0, 'model_type': 'multi_network', 'random_direction_probability': -1, 'von_mises_kappa': -1, 'view_dependent_dropout_probability': -1}
Using GPU: NVIDIA GeForce GTX 1660
/home/kevin/Documents/kilonerf-master/utils.py:254: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  return np.array([[float(w) for w in line.strip().split()] for line in open(path)]).astype(np.float32)
Loaded a NSVF-style dataset (138, 800, 800, 4) (138, 4, 4) (0,) data/nsvf/Synthetic_NeRF/Lego
(100,) (13,) (25,)
Converting alpha to white.
global_domain_min: [-0.67 -1.2  -0.37], global_domain_max: [0.67 1.2  1.03], near: 2.0, far: 6.0, background_color: tensor([1., 1., 1.])
Loading logs/paper/finetune/Synthetic_NeRF_Lego/checkpoint_1000000.pth
Loading occupancy grid from logs/paper/pretrain_occupancy/Synthetic_NeRF_Lego/occupancy.pth
GPUassert: too many resources requested for launch network_eval.cu 292

Thanks
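For context: this GPUassert ("too many resources requested for launch") is a kernel launch-configuration failure — threads per block times registers per thread exceeding the SM's register file — not an out-of-global-memory condition, so shrinking chunk_size alone may not help. A quick check of the device the kernels run on, using only standard PyTorch calls (not part of the repo):

import torch

props = torch.cuda.get_device_properties(0)
print(props.name, f'(compute capability {props.major}.{props.minor})')
print(f'{props.total_memory / 2**30:.1f} GiB global memory,',
      props.multi_processor_count, 'SMs')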

How to decide resolution for tiny MLPs in each scene

Thanks a lot for your great work!

I noticed that the resolution (fixed_resolution) for the tiny MLPs defined in cfgs/paper/distill/$SEQ.yaml is not the same across scenes:
for lego it is [9, 16, 10]; for hotdog it is [16, 16, 6].

So I wonder how you determined the resolution for each scene.

Here are my naive approaches:
a. just optimize for each scene using grid search
b. define a voxel size for each scene, then derive the resolution from global_{max, min} and the voxel size (see the sketch below)

  • e.g. ceil((global_max - global_min) / voxel_size)

I would appreciate any suggestions you might have.
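A minimal NumPy sketch of approach (b); the voxel size of 0.15 is an illustrative guess, though with the Lego bounds printed in the logs above it happens to reproduce the published [9, 16, 10]:

import numpy as np

def resolution_from_voxel_size(global_min, global_max, voxel_size):
    # approach (b): per-axis resolution = ceil(extent / voxel size)
    extent = np.asarray(global_max) - np.asarray(global_min)
    return np.ceil(extent / voxel_size).astype(int).tolist()

# Lego: global_domain_min/max as printed by the training scripts
print(resolution_from_voxel_size([-0.67, -1.2, -0.37], [0.67, 1.2, 1.03], 0.15))
# -> [9, 16, 10]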

The reason why distillation is necessary

From the results I can see that the model trained without distillation has artifacts, but I don't understand why.
Did you dig into the w/o-distillation model to see where the artifacts come from?
Is it a problem occurring at grid boundaries? Since the MLPs are all independent, adjacent MLPs may produce very different outputs at the boundaries, which might cause artifacts.

Global domain size and meaning

Any suggestions on how to estimate the global domain size?

  • It is defined at this line.
  • Could it be understood as the spatial region over which the ray tracing (volume rendering) integral of the color values is computed?
    • If so, do we just need to guess a 3D bounding box for the region of interest (see the sketch below)?
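If the bounding-box reading is correct, here is a generic sketch (not from the repo) for estimating the domain from known scene geometry, e.g. a sparse SfM point cloud; the 5% margin is a hypothetical choice:

import numpy as np

def estimate_global_domain(points, margin=0.05):
    # axis-aligned bounding box of (N, 3) scene points, slightly padded
    lo, hi = points.min(axis=0), points.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad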

KiloNeRF CUDA Extension Documentation or Usage Quickstart

Hi! You have obviously produced some awesome work. For my application I need to render many MLPs with variable-sized inputs, as you did here. Do you have a quickstart or documentation for the kilonerf_cuda extension's usage?

AttributeError: 'Node' object has no attribute 'leq_child'

Hi, thanks for your detailed code.

I followed the steps in 'train.sh': training a vanilla teacher, extracting occupancy, and distilling the student all go smoothly, but when I run the finetune command, the error in the title appears. Do you have any ideas?

{'checkpoint_interval': 50000, 'chunk_size': 40000, 'distilled_cfg_path': 'cfgs/paper/distill/Synthetic_NeRF_Lego.yaml', 'distilled_checkpoint_path': 'logs/paper/distill/Synthetic_NeRF_Lego/checkpoint.pth', 'initial_learning_rate': 0.001, 'iterations': 1000000, 'l2_regularization_lambda': 1e-06, 'learing_rate_decay_rate': 500, 'no_batching': True, 'num_rays_per_batch': 8192, 'num_samples_per_ray': 384, 'occupancy_cfg_path': 'cfgs/paper/pretrain_occupancy/Synthetic_NeRF_Lego.yaml', 'occupancy_log_path': 'logs/paper/pretrain_occupancy/Synthetic_NeRF_Lego/occupancy.pth', 'perturb': 1.0, 'precrop_fraction': 0.5, 'precrop_iterations': 0, 'raw_noise_std': 0.0, 'render_only': True, 'render_test': True, 'no_color_sigmoid': False, 'render_factor': 0, 'testskip': 8, 'deepvoxels_shape': 'greek', 'blender_white_background': True, 'blender_half_res': False, 'llff_factor': 8, 'llff_no_ndc': False, 'llff_lindisp': False, 'llff_spherify': False, 'llff_hold': False, 'print_interval': 100, 'render_testset_interval': 10000, 'render_video_interval': 100000000, 'network_chunk_size': 65536, 'rng_seed': 0, 'use_same_initialization_for_all_networks': False, 'use_initialization_fix': False, 'num_importance_samples_per_ray': 0, 'model_type': 'multi_network', 'random_direction_probability': -1, 'von_mises_kappa': -1, 'view_dependent_dropout_probability': -1}
Using GPU: GeForce RTX 2080 Ti
Loaded a NSVF-style dataset (138, 800, 800, 4) (138, 4, 4) (0,) data/nsvf/Synthetic_NeRF/Lego
(100,) (13,) (25,)
Converting alpha to white.
global_domain_min: [-0.67 -1.2  -0.37], global_domain_max: [0.67 1.2  1.03], near: 2.0, far: 6.0, background_color: tensor([1., 1., 1.])
Traceback (most recent call last):
  File "run_nerf.py", line 1087, in <module>
    main()
  File "run_nerf.py", line 1083, in main
    restarting_job = train(cfg, log_path, render_cfg_path)
  File "run_nerf.py", line 711, in train
    nodes_to_process.append(node.leq_child)
AttributeError: 'Node' object has no attribute 'leq_child'
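For what it's worth, the traceback suggests the tree traversal reaches a node without children (a leaf). A hedged sketch of the kind of guard that would avoid this; gt_child and the leaf handling are assumptions about the repo's Node class, not confirmed by the source:

def collect_leaves(root):
    # descend only where the child attributes actually exist
    nodes_to_process, leaves = [root], []
    while nodes_to_process:
        node = nodes_to_process.pop()
        if hasattr(node, 'leq_child'):               # internal node: descend
            nodes_to_process.append(node.leq_child)
            nodes_to_process.append(node.gt_child)   # assumed sibling attribute
        else:                                        # leaf: collect, don't descend
            leaves.append(node)
    return leaves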

Low PSNR on custom dataset

Hello, first of all thanks for your very interesting work.

I'm trying to train a very simple custom scene, but the PSNR remains low even after many iterations of the vanilla NeRF.
The Lego dataset reaches a mean PSNR of 16 on the test set after only 500 iterations, while mine has a mean PSNR of 12 after 10k iterations.
I generated my dataset with BlenderProc using the same poses as the Lego dataset, and I checked that the intrinsics, bbox, and poses are read correctly.

These are two images of the train set:

And these are the results after 10k iterations on the test set:

And if I try to render images of the train set, the results are slightly better, but still not acceptable.

I can see during training that the loss is decreasing and the PSNR is increasing, but not as fast as with the Lego dataset, and both are oscillating a lot.

[screenshot of the training loss and PSNR curves]

Do you have any suggestions for fixing this behavior?
Thanks a lot!
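For reference, a minimal sketch of the standard PSNR computation these numbers presumably refer to, for images scaled to [0, 1]:

import torch

def psnr(pred, gt):
    # peak signal-to-noise ratio in dB for images in [0, 1]
    mse = torch.mean((pred - gt) ** 2)
    return -10.0 * torch.log10(mse)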

Best way to train model for LLFF dataset

Thank you for your contribution.

I want to train KiloNeRF on an LLFF dataset.

How do I choose the best configuration for training on LLFF data,
e.g. resolution, global_domain_min, global_domain_max, fixed_resolution?

Error when running cuda code

Thanks for your great work. I followed your instructions, but I get an error when importing kilonerf_cuda:

Traceback (most recent call last):
  File "run_nerf.py", line 21, in <module>
    from run_nerf_helpers import *
  File "/vol/datastore/xxx/kilonerf/run_nerf_helpers.py", line 6, in <module>
    import kilonerf_cuda
ImportError: /media/sda1/xxx/anaconda3/envs/py3-mink/lib/python3.8/site-packages/kilonerf_cuda.cpython-38-x86_64-linux-gnu.so: undefined symbol: MKL_Get_Version

But the CUDA library itself installed successfully with:

$ pip install $KILONERF_HOME/cuda/dist/kilonerf_cuda-0.0.0-cp38-cp38-linux_x86_64.whl
Processing ./cuda/dist/kilonerf_cuda-0.0.0-cp38-cp38-linux_x86_64.whl
kilonerf-cuda is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.

Do you have any suggestions?

bmm vs cuda implementation

Hi, thanks for opening the source code!

I'm curious about the speedup of the CUDA implementation compared to the torch.bmm operation when the inputs to all MLPs have equal size. The test() part of the code in multi_module.py cannot run successfully due to some flags, and I have no idea how to measure the speed of the CUDA implementation against bmm. Could you please give me some guidance? Thanks!
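In the meantime, here is a minimal bmm baseline one could time; the sizes are illustrative, and this is a generic sketch rather than the repo's test() path:

import time
import torch

# evaluate N tiny two-layer MLPs on equally sized batches with torch.bmm
N, B, D_in, D_hid = 512, 1024, 3, 32
x  = torch.randn(N, B, D_in, device='cuda')
W1 = torch.randn(N, D_in, D_hid, device='cuda'); b1 = torch.randn(N, 1, D_hid, device='cuda')
W2 = torch.randn(N, D_hid, 4, device='cuda');    b2 = torch.randn(N, 1, 4, device='cuda')

torch.cuda.synchronize(); t0 = time.time()
h = torch.relu(torch.bmm(x, W1) + b1)   # one batched matmul across all networks
out = torch.bmm(h, W2) + b2
torch.cuda.synchronize()
print(f'bmm forward: {(time.time() - t0) * 1e3:.2f} ms')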

Custom dataset training

Hey,

I really appreciate this work. Can you tell me how I can train on my custom dataset?

Thank you.

Using a multi-network model in the distillation procedure

Hi! Firstly, I would like to thank you for your outstanding work speeding up NeRF.

I have trained a multi-network model in the pretraining procedure (with 27 small MLPs). I would like to know whether I can use this model as the pretrained model in your code during the distillation procedure.

Segmentation fault

Is there a good solution for the segmentation fault that occurs randomly when training the pretraining model?

About inference

Hi! I am currently encountering a bug. Training works fine, but when I run inference it says:
GPUassert: too many resources requested for launch network_eval.cu 292

My GPU is RTX Titan
My CUDA version is 11.1
My CuDNN version is 8
My OS is Ubuntu 18.04

The number of layers and hidden dim are fixed in the CUDA implementation

Hi Christian @creiser !

Thank you very much for the great work and the open-sourced code! It is very helpful to the NeRF community!

I noticed that the number of layers and the hidden dim seem to be fixed in the CUDA implementation, regardless of the number of layers and hidden dim we set in the .yaml file.

If I understand it correctly:

I was wondering:

  • The reason why you fixed them here (to pre-allocate less memory?)
  • Do you have any plans to extend it to support an arbitrary number of layers and hidden dim?

Thank you!
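A plausible reason (an assumption, not confirmed by the repo): fixing the architecture at compile time lets the fused kernels keep the weights in registers or shared memory and fully unroll their loops. A hypothetical Python-side guard for such a setup; the constants and the config key names are illustrative:

COMPILED_NUM_HIDDEN_LAYERS = 2  # assumed compile-time constants of the fused
COMPILED_HIDDEN_DIM = 32        # kernels (the paper's tiny-MLP sizes)

def check_architecture(cfg):
    # fail loudly instead of silently evaluating the wrong network shape
    if (cfg.get('num_hidden_layers') != COMPILED_NUM_HIDDEN_LAYERS
            or cfg.get('hidden_layer_size') != COMPILED_HIDDEN_DIM):
        raise ValueError('kilonerf_cuda kernels are compiled for a fixed '
                         'architecture; rebuild the extension to change it')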

Add Licence?

Hi, nice implementation! Is it possible for you to add a license for this?

Error when training a model

Hi,
I am trying to train a model. I saw that there is the file train.sh with the lines of code to do it.
However, when I want to run it I get an error.

image

I am running everything in a Docker container where I installed everything following the steps on the main page.
On the other hand, if I run benchmark.sh with the trained model I don't get any problems, but with train.sh I do.
I would be very grateful for help if anyone else has had this problem.

Running out of RAM during distillation

Hi! Firstly, I would like to thank you for your work on KiloNeRF and the benefits it provides over the basic NeRF.

I added another output layer to the base NeRF, and now local_distill.py is always killed during execution. My system reports that it is running out of RAM. Did you experience anything similar, and do you know how to fix it? Is the network to be distilled perhaps too big?
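A generic mitigation sketch, assuming the blow-up comes from materializing too many teacher outputs at once; the function and parameter names are illustrative, not the repo's API:

import torch

def eval_in_chunks(model, points, chunk=65536):
    # bound peak memory by querying the teacher in fixed-size chunks
    # and keeping the distillation targets on the CPU
    outs = []
    with torch.no_grad():
        for i in range(0, points.shape[0], chunk):
            outs.append(model(points[i:i + chunk]).cpu())
    return torch.cat(outs)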

3D model from real images

Hello!
Is it possible to extract a 3D mesh model from this project?
I want to generate a 3D model from a model trained with a custom dataset.
Could you tell me how to create the 3D model from real images?
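As far as I know, the repo does not include a mesh exporter, but a common generic approach is to sample the density field on a regular grid and run marching cubes. A hedged sketch with scikit-image, where the grid sampling step and the threshold of 10.0 are assumptions:

import numpy as np
from skimage import measure

def density_grid_to_mesh(sigma_grid, threshold=10.0):
    # sigma_grid: (X, Y, Z) densities queried from the trained model on a
    # regular grid; level selects the density iso-surface to extract
    verts, faces, normals, _ = measure.marching_cubes(sigma_grid, level=threshold)
    return verts, faces, normals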

How to execute model without CUDA extension installation

Thank you for your contribution.

The custom CUDA-based implementation delivers extremely fast NeRF execution.
However, is there any way to run KiloNeRF without installing the CUDA extension?
Since my GPU environment does not seem suitable for installing your custom CUDA extension (kilonerf_cuda), I want to run the code without the custom CUDA-based acceleration. I just need the same functionality that the kilonerf_cuda library provides.

Is there any way to run your code without installing the custom CUDA extension? I want to reproduce the same functionality even if it is much slower than the current custom CUDA-based implementation.

(It seems single-NeRF training is possible without the CUDA extension. However, early ray termination, empty space skipping, and multi-network training are not, I think.)
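A hedged sketch of what a pure-PyTorch fallback for the multi-network evaluation could look like (a loop over networks instead of the fused kernel); all names here are illustrative, not the repo's API:

import torch

def eval_multi_network(networks, points, assignments):
    # route each (N, 3) sample to the tiny MLP that owns its grid cell;
    # functionally like the fused kernel, just much slower
    out = torch.empty(points.shape[0], 4, device=points.device)
    for idx, net in enumerate(networks):
        mask = assignments == idx
        if mask.any():
            out[mask] = net(points[mask])
    return out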

Question on the activation for density in student NN for distillation

Thanks a lot for your great work.

While debugging the distillation, I found that the alpha from the student network sometimes takes negative values (though very close to 0).
This comes from using different activations for the density in the teacher and the student networks:

I guess the difference may be minor, but is there any reason for using leaky ReLU in the student network?
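A tiny illustration of the effect described above (which activation belongs to the teacher and which to the student follows the issue's description, not the source code):

import torch
import torch.nn.functional as F

raw_sigma = torch.tensor([-0.02, 0.5, 3.0])
print(F.relu(raw_sigma))              # ReLU clamps negative densities to 0
print(F.leaky_relu(raw_sigma, 0.01))  # leaky ReLU lets small negatives through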

Resolution in pretrain_occupancy cfg file

Hi! First of all, I would like to thank you for your incredible work speeding up NeRF.

I'm trying to train a new model, but I'm struggling with the configuration. I've read the paper, but I can't seem to understand what 'resolution' in the cfg files stands for, so I'm not sure which values I should use for my model. If you could guide me a little with this, I would be immensely grateful.

Thank you.

Inconsistent LPIPS implementation?

Really great work!

I have a question about the current LPIPS implementation.
According to the lpips documentation, the input RGB images should be scaled to [-1, +1].
But it seems that the current implementation feeds RGB in [0, 1] to lpips.

I have also traced the baseline NSVF's LPIPS evaluation, and they scale the RGB from [0, 1] to [-1, 1] (see their code).

I would appreciate clarification on this.
Thanks!
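For reference, the two conventions the lpips package supports:

import torch
import lpips

loss_fn = lpips.LPIPS(net='alex')
img0 = torch.rand(1, 3, 64, 64)  # images stored in [0, 1]
img1 = torch.rand(1, 3, 64, 64)

d_scaled = loss_fn(img0 * 2 - 1, img1 * 2 - 1)  # default path expects [-1, 1]
d_norm   = loss_fn(img0, img1, normalize=True)  # or let lpips rescale [0, 1] inputs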

CUDA error at /home/chris/anti/cuda/render_to_screen.cpp:113 code=999(cudaErrorUnknown)

Hi! I met this CUDA error while running render_to_screen.sh:
CUDA error at /home/chris/anti/cuda/render_to_screen.cpp:113 code=999(cudaErrorUnknown) "cudaGraphicsGLRegisterBuffer(&cuda_pbo_resource, pbo, cudaGraphicsMapFlagsWriteDiscard)"
render_to_screen.sh: line 3: 28062 Segmentation fault (core dumped) python run_nerf.py cfgs/paper/finetune/$DATASET.yaml -rcfg cfgs/render/render_to_screen.yaml

I'm running KiloNeRF on Ubuntu 18.04 with CUDA 11.1; the GPU is an A6000.
Could you please help me with this? Thank you very much!

Here's the output:

(kilonerf) nesc525@nesc525:~/drivers/5/kilonerf$ bash render_to_screen.sh
auto log path: logs/paper/finetune/Synthetic_NeRF_Lego
{'checkpoint_interval': 50000, 'chunk_size': 40000, 'distilled_cfg_path': 'cfgs/paper/distill/Synthetic_NeRF_Lego.yaml', 'distilled_checkpoint_path': 'logs/paper/distill/Synthetic_NeRF_Lego/checkpoint.pth', 'initial_learning_rate': 0.001, 'iterations': 1000000, 'l2_regularization_lambda': 1e-06, 'learing_rate_decay_rate': 500, 'no_batching': True, 'num_rays_per_batch': 8192, 'num_samples_per_ray': 384, 'occupancy_cfg_path': 'cfgs/paper/pretrain_occupancy/Synthetic_NeRF_Lego.yaml', 'occupancy_log_path': 'logs/paper/pretrain_occupancy/Synthetic_NeRF_Lego/occupancy.pth', 'perturb': 1.0, 'precrop_fraction': 0.5, 'precrop_iterations': 0, 'raw_noise_std': 0.0, 'render_only': False, 'no_color_sigmoid': False, 'render_test': True, 'render_factor': 0, 'testskip': 8, 'deepvoxels_shape': 'greek', 'blender_white_background': True, 'blender_half_res': False, 'llff_factor': 8, 'llff_no_ndc': False, 'llff_lindisp': False, 'llff_spherify': False, 'llff_hold': False, 'print_interval': 100, 'render_testset_interval': 10000, 'render_video_interval': 100000000, 'network_chunk_size': 65536, 'rng_seed': 0, 'use_same_initialization_for_all_networks': False, 'use_initialization_fix': False, 'num_importance_samples_per_ray': 0, 'model_type': 'multi_network', 'random_direction_probability': -1, 'von_mises_kappa': -1, 'view_dependent_dropout_probability': -1}
Using GPU: RTX A6000
/home/nesc525/drivers/5/kilonerf/utils.py:254: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  return np.array([[float(w) for w in line.strip().split()] for line in open(path)]).astype(np.float32)
Loaded a NSVF-style dataset (138, 800, 800, 4) (138, 4, 4) (0,) data/nsvf/Synthetic_NeRF/Lego
(100,) (13,) (25,)
Converting alpha to white.
global_domain_min: [-0.67 -1.2  -0.37], global_domain_max: [0.67 1.2  1.03], near: 2.0, far: 6.0, background_color: tensor([1., 1., 1.])
Loading logs/paper/finetune/Synthetic_NeRF_Lego/checkpoint_1000000.pth
Loading occupancy grid from logs/paper/pretrain_occupancy/Synthetic_NeRF_Lego/occupancy.pth
CUDA error at /home/chris/anti/cuda/render_to_screen.cpp:113 code=999(cudaErrorUnknown) "cudaGraphicsGLRegisterBuffer(&cuda_pbo_resource, pbo, cudaGraphicsMapFlagsWriteDiscard)" 
render_to_screen.sh: line 3: 28062 Segmentation fault      (core dumped) python run_nerf.py cfgs/paper/finetune/$DATASET.yaml -rcfg cfgs/render/render_to_screen.yaml
