
nglod's Introduction

Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes

Official code release for NGLOD. For technical details, please refer to:

Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes
Towaki Takikawa*, Joey Litalien*, Kangxue Yin, Karsten Kreis, Charles Loop, Derek Nowrouzezahrai, Alec Jacobson, Morgan McGuire, and Sanja Fidler
In Computer Vision and Pattern Recognition (CVPR), 2021 (Oral)
[Paper] [Bibtex] [Project Page]

If you find this code useful, please consider citing:

@inproceedings{takikawa2021nglod,
    title = {Neural Geometric Level of Detail: Real-time Rendering with Implicit {3D} Shapes}, 
    author = {Towaki Takikawa and
              Joey Litalien and 
              Kangxue Yin and 
              Karsten Kreis and 
              Charles Loop and 
              Derek Nowrouzezahrai and 
              Alec Jacobson and 
              Morgan McGuire and 
              Sanja Fidler},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021},
}

New: Sparse training code with Kaolin is now available in app/spc! Read more about it here.

Directory Structure

sol-renderer contains our real-time rendering code.

sdf-net contains our training code.

Within sdf-net:

sdf-net/lib contains all of our core codebase.

sdf-net/app contains standalone applications that users can run.

Getting started

Python dependencies

The easiest way to get started is to create a virtual Python 3.8 environment:

conda create -n nglod python=3.8
conda activate nglod
pip install --upgrade pip
pip install -r ./infra/requirements.txt

The code also relies on OpenEXR, which requires a system library:

sudo apt install libopenexr-dev 
pip install pyexr

For the full list of dependencies, see ./infra/requirements.txt.

Building CUDA extensions

To build the corresponding CUDA kernels, run:

cd sdf-net/lib/extensions
chmod +x build_ext.sh && ./build_ext.sh

The above instructions were tested on Ubuntu 18.04/20.04 with CUDA 10.2/11.1.
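
A quick way to sanity-check that the extensions built and installed correctly is to try importing them (the module names here are inferred from the build logs and issues later on this page, so adjust as needed):

python -c "import mesh2sdf; import sol_nglod"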

Training & Rendering

Note. All of the following commands should be run within the sdf-net directory.

Download sample data

To download a cool armadillo:

wget https://raw.githubusercontent.com/alecjacobson/common-3d-test-models/master/data/armadillo.obj -P data/

To download a cool matcap file:

wget https://raw.githubusercontent.com/nidorx/matcaps/master/1024/6E8C48_B8CDA7_344018_A8BC94.png -O data/matcap/green.png
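
If the download fails because the data/matcap directory does not exist yet, create it first and re-run the command:

mkdir -p data/matcap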

Training from scratch

python app/main.py \
    --net OctreeSDF \
    --num-lods 5 \
    --dataset-path data/armadillo.obj \
    --epoch 250 \
    --exp-name armadillo

This will populate _results with TensorBoard logs.
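
To monitor training, you can point TensorBoard at the log directory (the path below is the default logs location, as seen in the configuration dumps later on this page):

tensorboard --logdir _results/logs/runs/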

Rendering the trained model

If you set custom network parameters during training, you also need to pass them to the renderer.

For example, if you set --feature-dim 16 above, you need to set it here too.

python app/sdf_renderer.py \
    --net OctreeSDF \
    --num-lods 5 \
    --pretrained _results/models/armadillo.pth \
    --render-res 1280 720 \
    --shading-mode matcap \
    --lod 4

By default, this will populate _results with the rendered image.

If you want to export a .npz model which can be loaded into the C++ real-time renderer, add the argument --export path/file.npz. Note that the renderer only supports the base Neural LOD configuration (the default parameters with OctreeSDF).
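
For example, to render and export in one go (the output path is just an example):

python app/sdf_renderer.py \
    --net OctreeSDF \
    --num-lods 5 \
    --pretrained _results/models/armadillo.pth \
    --render-res 1280 720 \
    --shading-mode matcap \
    --lod 4 \
    --export _results/armadillo.npz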

Core Library Development Guide

To add new functionality, you will likely want to make edits to the files in lib.

We try to keep the code modular, so that key components such as trainer.py and renderer.py rarely need to be modified when adding new functionality.

For example, to add a new network architecture, simply add a new Python file in lib/models that inherits from a base class of your choice. You will probably only need to implement the sdf method, which implements the forward pass, but you can override other methods if more custom operations are needed.
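
A minimal sketch of what such a file might look like (the file name, class name, and BaseSDF base class below are illustrative placeholders; check lib/models for the actual base classes and their constructor signatures):

# lib/models/TinyMLPSDF.py (hypothetical)
import torch.nn as nn

from .BaseSDF import BaseSDF  # placeholder: use an actual base class from lib/models

class TinyMLPSDF(BaseSDF):
    def __init__(self, args):
        super().__init__(args)
        # A single-hidden-layer MLP mapping 3D coordinates to a scalar distance.
        self.mlp = nn.Sequential(
            nn.Linear(3, args.hidden_dim),
            nn.ReLU(),
            nn.Linear(args.hidden_dim, 1))

    def sdf(self, x):
        # x: (N, 3) query points -> (N, 1) signed distances
        return self.mlp(x)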

By default, the loss functions used are defined through a CLI argument; the code automatically parses it and iterates through each loss function. The network architecture class is similarly selected through a CLI argument: simply use the exact class name, and don't forget to add a line in __init__.py to resolve the namespace.
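
For instance, registering the hypothetical TinyMLPSDF above would look roughly like this (names are illustrative):

# lib/models/__init__.py
from .TinyMLPSDF import TinyMLPSDF

After that, the class name can be passed directly on the command line:

python app/main.py \
    --net TinyMLPSDF \
    --dataset-path data/armadillo.obj \
    --epoch 250 \
    --exp-name tinymlp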

App Development Guide

To make apps that use the core library, add the sdf-net directory to the Python sys.path so the modules can be loaded correctly. You will then likely want to reuse the CLI parser defined in lib/options.py to save time. You can add a new argument group app to the parser for custom CLI arguments used alongside the defaults. See app/sdf_renderer.py for an example, or the sketch below.
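
A minimal sketch of such an app (the file name and custom flag are illustrative; parse_options(return_parser=True) is the pattern used by the scripts in this repo):

# app/my_app.py (hypothetical)
import os
import sys

# Add the sdf-net root to sys.path so lib can be imported.
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from lib.options import parse_options

if __name__ == '__main__':
    parser = parse_options(return_parser=True)
    app_group = parser.add_argument_group('app')
    app_group.add_argument('--my-flag', type=int, default=0,
                           help='Example app-specific argument.')
    args = parser.parse_args()
    # ... app logic using args goes here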

Examples of things that are considered apps include, but are not limited to:

  • visualizers
  • training code
  • downstream applications

Third-Party Libraries

This repository includes code derived from three third-party libraries, all distributed under the MIT License:

https://github.com/zekunhao1995/DualSDF

https://github.com/rogersce/cnpy

https://github.com/krrish94/nerf-pytorch

Acknowledgements

We would like to thank Jean-Francois Lafleche, Peter Shirley, Kevin Xie, Jonathan Granskog, Alex Evans, and Alex Bie at NVIDIA for interesting discussions throughout the project. We also thank Peter Shirley, Alexander Majercik, Jacob Munkberg, David Luebke, Jonah Philion and Jun Gao for their help with paper editing.

We also thank Clement Fuji Tsang for his help with the code release.

The structure of this repo was inspired by PIFu: https://github.com/shunsukesaito/PIFu

nglod's People

Contributors

jason718, joeylitalien, marcelsan, orperel, tovacinni


nglod's Issues

Questions about training for multiple shapes and inference

Hello, thanks for the wonderful work and code!
While trying to reproduce this work on ShapeNet150, I have several questions.

  1. It seems that the released code trains the network on a single mesh. How should training on multiple shapes be done? As far as I understand, each shape has its own sparse features (defined with FeatureVolume in the code). Am I right that I have to train each shape individually with a shared MLP and a per-shape feature volume, or is the model the same across the entire dataset, meaning the feature volume is also shared?

1-1) If the feature volume is shared across all data, how is the minibatch composed? Should shapes be sampled from the dataset so that the batch dimension is (num_shapes) x (num_samples) x (dimension)? Or should points be sampled from the dataset regardless of shape (in which case the minibatch dimension would be (num_samples) x (dimension))?

  2. What is the inference process when we have a fully trained model and an arbitrary point cloud without ground-truth SDF values? In DeepSDF, the trained net is fixed and the feature vector is optimized with a slightly different loss function at inference time.

Sorry for the broken English. If the questions are unclear, I'll explain in more detail.
Thanks in advance!

Can't export .npz files

Hi, when running the example rendering command from the README along with the --export flag:

python app/sdf_renderer.py \
    --net OctreeSDF \
    --num-lods 5 \
    --pretrained _results/models/armadillo.pth \
    --render-res 1280 720 \
    --shading-mode matcap \
    --lod 4 \
    --export file.npz

I get the following error message:

Total number of parameters: 10146213
Traceback (most recent call last):
  File "app/sdf_renderer.py", line 106, in <module>
    net = SOL_NGLOD(net)
  File "/home/luis/nglod/sdf-net/lib/models/SOL_NGLOD.py", line 50, in __init__
    self.vs = voxel_sparsify(2000000, net, self.lod, sol=False)
  File "/home/luis/nglod/sdf-net/lib/renderutils.py", line 63, in voxel_sparsify
    surface = sample_surface(n, net, sol=sol, device=device)[:n]
  File "/home/luis/nglod/sdf-net/lib/renderutils.py", line 33, in sample_surface
    tracer = SphereTracer(device, sol=sol)
TypeError: __init__() got an unexpected keyword argument 'sol'

I cannot figure out how to fix it.

Thank you in advance

ShapeNet150 data

Great work! For the ShapeNet150 data, the airplane, car, and chair categories are used, but how did you select the 50 models within each category?

Training with surface normal / sdf gradient supervision

From the paper it seems the models are trained without surface normal supervision or sdf gradient supervision.

I would like to know if nglod can support these forms of supervision, as they are widely used in recent work on neural implicit modeling.

Regards

Is there an ETA on the code?

Hello @tovacinni, thanks for this great work! The results are really impressive. I'm also very interested since it can be seamlessly integrated into a 3D reconstruction project I'm currently working on.
You said on Twitter that the code will be released soon. Could you please give a more detailed ETA on the code release, for example in how many weeks? It would be really helpful for me to plan my current project accordingly.
Thanks very much!

Selected model list of Thingi-32 and TurboSquid

Hi, I wonder which models were used in the main paper.

I understand that you built the datasets via the following procedure:

  1. Thingi32: selecting 32 models from Thingi10K (10,000 models).
  2. TurboSquid: downloading 16 models from the TurboSquid site.

Could I get the model lists for both datasets?

The accuracy of predicted sdf function

Hi, thank you for your wonderful work. I have tested the performance of your models and ran into some trouble with accuracy.
Here is my test process:
I have a mesh model: a sphere centered at (0.5, 0.5, 0.5) with radius 0.5, so the mesh surface contains points such as [0.5, 1, 0.5], [1, 0.5, 0.5], and [0.5, 0.5, 1].
I used your code to train a model with the following command: python app/main.py --net OctreeSDF --num-lods 5 --dataset-path my.obj --epoch 250 --exp-name test

I then used the model to predict SDF values around the points [0.5, 1, 0.5], [1, 0.5, 0.5], and [0.5, 0.5, 1].
I built three point sets: [0.5, i, 0.5], [i, 0.5, 0.5], [0.5, 0.5, i] for i in numpy.linspace(0.997, 1.003, 1001).
I tested the accuracy of the model on these three point sets and got the following results:
[image: plot of the predicted SDF values omitted]

I find that the SDF values change sign at [0.5, 0.99895, 0.5], [0.999027, 0.5, 0.5], and [0.5, 0.5, 0.999027], which are about 0.001 from the ground truth. And at the points [0.5, 1, 0.5], [1, 0.5, 0.5], and [0.5, 0.5, 1], the predicted SDF values are about 0.002, which also seems too large compared with the 1e-6 training L2 loss.

Could you please give some explanation of this phenomenon? And is there any suggestion for improving the accuracy of the predicted SDF values?

I'm looking forward to your reply. It is quite important for me.

Render OBJ files

Hi

I'm looking to evaluate the work using image-based metrics. Can you share the Mitsuba 2 scene XML file for rendering the mesh objects?

Building sol-renderer: CMAKE_CUDA_COMPILER not set, after EnableLanguage

I'm trying to build the sol-renderer using CMake. I downloaded libtorch using the link here. Then every time I try to build it I get this error:

CMake Error at CMakeLists.txt:23 (project):
  Running

   'nmake' '-?'

  failed with:

   The system cannot find the file specified

CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
CMake Error: CMAKE_CUDA_COMPILER not set, after EnableLanguage
-- Configuring incomplete, errors occurred!
See also "C:/Dev/Tree-SDF/sol-renderer/build/CMakeFiles/CMakeOutput.log".

I have CUDA 11 on Windows; the CUDA path is already in the environment variables. If there is a specific version of libtorch needed, could you provide it?

Question about the loss for training the network

Thanks for releasing your well-organized codes.

When I read and ran the training code according to your instructions, it seemed that only the SDF values predicted by the deepest level were used to calculate the loss.

[screenshot of the loss computation in sdf-net/lib/models/OctreeSDF.py omitted]

And in the paper, I noticed that Formula (4) takes sum of losses calculated for results from each level.

Is there anything I misunderstood?

Question on the environment required to run sol-renderer

Hi @tovacinni, thanks for this great work and the code release. I am trying to run your C++ renderer and hit the following segmentation fault. Can you guide me on how to solve this issue, at your convenience?

The system is Ubuntu 20.04. I've tried both an RTX 3090 and a 1080, and neither works. By the way, the Python part works well -- I can run the training and generate the rendered armadillo. The libtorch is downloaded from https://download.pytorch.org/libtorch/cu111/libtorch-cxx11-abi-shared-with-deps-1.8.1%2Bcu111.zip

Here is the error message:

    (nglod) my@ws:~/nglod/sol-renderer/build$ ./sdfRenderer ../../sdf-net/_results/armadillo.npz
    NLOD Demo starting...
    GPU Device 0: "Ampere" with compute capability 8.6
    
    terminate called after throwing an instance of 'c10::Error'
      what():  CUDA error: an illegal memory access was encountered
    Exception raised from nonzero_cuda_out_impl at /pytorch/aten/src/ATen/native/cuda/Indexing.cu:873 (most recent call first):
    frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f6705badb29 in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libc10.so)
    frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xd2 (0x7f6705baaab2 in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libc10.so)
    frame #2: void at::native::nonzero_cuda_out_impl<bool>(at::Tensor const&, at::Tensor&) + 0xebe (0x7f66a6227c4e in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cuda_cu.so)
    frame #3: at::native::nonzero_out_cuda(at::Tensor&, at::Tensor const&) + 0x1eb (0x7f66a6199c5b in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cuda_cu.so)
    frame #4: at::native::nonzero_cuda(at::Tensor const&) + 0xea (0x7f66a619a09a in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cuda_cu.so)
    frame #5: <unknown function> + 0x2e6a80b (0x7f66a6fd180b in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cuda_cu.so)
    frame #6: <unknown function> + 0x2e6a890 (0x7f66a6fd1890 in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cuda_cu.so)
    frame #7: at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const + 0xe7 (0x7f6692f17c57 in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cpu.so)
    frame #8: at::nonzero(at::Tensor const&) + 0x5e (0x7f6692d5338e in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cpu.so)
    frame #9: <unknown function> + 0x2f15a3e (0x7f6694791a3e in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cpu.so)
    frame #10: <unknown function> + 0x2f15ac0 (0x7f6694791ac0 in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cpu.so)
    frame #11: at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const + 0xe7 (0x7f6692f17c57 in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cpu.so)
    frame #12: at::nonzero(at::Tensor const&) + 0x5e (0x7f6692d5338e in /home/my/nglod/sol-renderer/third-party/libtorch/lib/libtorch_cpu.so)
    frame #13: <unknown function> + 0x4222b (0x555f01cd522b in ./sdfRenderer)
    frame #14: <unknown function> + 0x27750 (0x555f01cba750 in ./sdfRenderer)
    frame #15: <unknown function> + 0x1819a (0x555f01cab19a in ./sdfRenderer)
    frame #16: <unknown function> + 0x20194 (0x7f67060ed194 in /lib/x86_64-linux-gnu/libglut.so.3)
    frame #17: fgEnumWindows + 0x39 (0x7f67060f0c39 in /lib/x86_64-linux-gnu/libglut.so.3)
    frame #18: glutMainLoopEvent + 0x1cd (0x7f67060ed7bd in /lib/x86_64-linux-gnu/libglut.so.3)
    frame #19: glutMainLoop + 0x65 (0x7f67060edff5 in /lib/x86_64-linux-gnu/libglut.so.3)
    frame #20: <unknown function> + 0x18edc (0x555f01cabedc in ./sdfRenderer)
    frame #21: __libc_start_main + 0xf3 (0x7f6617f1a0b3 in /lib/x86_64-linux-gnu/libc.so.6)
    frame #22: <unknown function> + 0x1639e (0x555f01ca939e in ./sdfRenderer)
    
    Aborted (core dumped)

ModuleNotFoundError: No module named 'sol_nglod'

When I'm trying to train or render, I get:

Traceback (most recent call last):
  File "app/main.py", line 32, in <module>
    from lib.trainer import Trainer
  File "/usr/nglod/sdf-net/lib/trainer.py", line 47, in <module>
    from lib.models import *
  File "/usr/nglod/sdf-net/lib/models/__init__.py", line 25, in <module>
    from .SOL_NGLOD import SOL_NGLOD
  File "/usr/nglod/sdf-net/lib/models/SOL_NGLOD.py", line 25, in <module>
    import sol_nglod
ModuleNotFoundError: No module named 'sol_nglod'

There is indeed a sol_nglod dir in /sdf-net/lib/extensions/, and everything went well before this. I'm using a Python 3.7 virtualenv because the conda env always has gcc-5 issues when installing pyexr. Does anyone have an idea about this? Thanks!

mesh2sdf errors

Hi,

I'm trying to build the sdf-net portion of the repository, but I'm having issues with the mesh2sdf library. I'm not entirely sure which library this corresponds to because it's not in the requirements.txt, but I've guessed that it must be the mesh-to-sdf library. I've tried this and changed the imports to match, but the build still breaks in compute_sdf.py, which uses mesh2sdf_gpu, and that doesn't exist in mesh-to-sdf.

So I'm curious which library I'm supposed to use for this, or if this is just legacy code that needs to be reconfigured. I've had other dependency issues, but so far those have been pretty easy to resolve. Happy to help contribute a fix, but I want to make sure I'm not making a glaring mistake.

Thanks.

About rendering 3D models

Hello(^^)
Nice to meet you!

I recently got interested in SDF and started to study it.
I tried to run this code in Google Colab. I succeeded up to "Rendering the trained model" and was able to create armadillo.pth.

However, I do not know how to generate the 3D model from this point as described in the paper.

Please tell me how to do it.

Question about generating parents in create_trinkets

Hi,
Thanks for the great work. I want to use your SPC code and found a function that can generate a point's corners and parents, but the generated parent indices contain NaN values.

In my understanding, the map created in line 8 should go from the Morton codes of level i's parents to the indices of the nodes in spc.points. But here the keys are the Morton codes of the nodes in level i, which mismatches the index used in pd.Series.reindex (i.e., the Morton codes of the parents). Do you know how to fix this? Or is it supposed to behave like this?

        if i == 0:
            parents.append(torch.LongTensor([-1]).cuda())
        else:
            # Dividing by 2 will yield the morton code of the parent
            pc = torch.floor(points / 2.0).short()
            mt_pc = spc_ops.points_to_morton(pc.contiguous())
            mt_pc_dest = spc_ops.points_to_morton(points)
            plut = dict(zip(mt_pc_dest.cpu().numpy(), np.arange(mt_pc_dest.shape[0])))
            pc_idx = pd.Series(plut).reindex(mt_pc.cpu().numpy()).values
            parents.append(torch.LongTensor(pc_idx).cuda())

By the way, I tried to modify the code as follows, and it seems to work:

        if i == 0:
            parents.append(torch.LongTensor([-1]).cuda())
        else:
            # Dividing by 2 will yield the morton code of the parent
            pc = torch.floor(points / 2.0).short()
            mt_pc = spc_ops.points_to_morton(pc.contiguous())
            plut = dict(zip(mt_pc_dest.cpu().numpy(), np.arange(mt_pc_dest.shape[0])))
            pc_idx = pd.Series(plut).reindex(mt_pc.cpu().numpy()).values + pyramid[1, i-1].item()
            parents.append(torch.LongTensor(pc_idx).cuda())
        mt_pc_dest = spc_ops.points_to_morton(points)

Rendering on Mobile

Has anyone tried rendering on a mobile device? While I would assume that sphere casting is slower than a traditional rendering pipeline, it seems to me that there are benefits in how it scales. I deal a fair amount with user-generated 3D models. Unlike media such as images or video, you can't just down-res 3D models (duh).

Could a representation like this be used to ensure that content uses a predictable amount of resources?

C++ Renderer: Failure to render higher LODs

Thank you very much for your work and the provided code!
I tried to run the C++ Renderer but encountered some problems when rendering higher LODs.
I am using Windows 10 / Libtorch 1.8 / CUDA 11.1 with a GeForce RTX 3090.

I had to make some changes to get the code running, especially changing some datatypes (e.g. from long to uint64_t). All the changes can be seen here: coledea@9f2cebf.
Additionally, I had to set g_TargetLevel to something smaller than g_SPC.getLevel() (e.g. 5); otherwise I would get an illegal memory access error.
After that, the code compiled and I was able to run the application.

I trained on the armadillo model and I am able to render the neural representation with the provided python tools. With the C++ renderer, however, I am only able to render the lower LODs. On higher LODs, parts of the model are missing (as the pictures show) or I get the error CUDA Error: invalid configuration argument and nothing is rendered at all.

When activating debug output, it shows, for example, a negative element count for cf level 3:

offset_ on cf level 3 : 5122
# elem in cf level 3 : -1570503101
offset on parent nuggets array 4: 63
# elem in parent nuggets array 4: 156
offset on nuggets array 5 : 219
# elem in nuggets level 5 : 590

[image armadillo1: lowest LOD]
[image armadillo2: lowest LOD + 2]
[image armadillo3: lowest LOD + 3]

Have you ever encountered this sort of problem and do you maybe know how to fix this?

EDIT: Interestingly, I am able to render higher LODs when using the model provided in #5. After repeatedly pressing +, I then again get the CUDA error (invalid configuration argument). I am using this .npz file: link

Question about the details of the decoder

Hi, thanks for your excellent work! Your paper says the decoder occupies 90 KB, but I can't find the corresponding implementation in the code. Can you share the details of the decoder, for example which parts are included? Thank you very much!

Question about the implementation of octree

Hi @tovacinni, thanks for your excellent work! Recently, many sparse octree-based rendering papers have been published, e.g. PlenOctrees. However, in the PlenOctrees code for building the octree, they initialize the octree with a dense buffer, like:

class N3Tree(nn.Module):
    """
    PyTorch :math:`N^3`-tree library with CUDA acceleration.
    By :math:`N^3`-tree we mean a 3D tree with branching factor N at each interior node,
    where :math:`N=2` is the familiar octree.

.. warning::
    `nn.Parameters` can change size, which
    makes current optimizers invalid. If any refine() or
    shrink_to_fit() call returns True,
    please re-make any optimizers
    """
    def __init__(self, N=2, data_dim=4, depth_limit=10,
            init_reserve=1, init_refine=0, geom_resize_fact=1.5,
            radius=0.5, center=[0.5, 0.5, 0.5],
            data_format="RGBA",
            extra_data=None,
            map_location="cpu"):
        """
        Construct N^3 Tree

        :param N: int branching factor N
        :param data_dim: int size of data stored at each leaf
        :param depth_limit: int maximum depth of tree to stop branching/refining
        :param init_reserve: int amount of nodes to reserve initially
        :param init_refine: int number of times to refine entire tree initially
        :param geom_resize_fact: float geometric resizing factor
        :param radius: float or list, 1/2 side length of cube (possibly in each dim)
        :param center: list center of space
        :param data_format: a string to indicate the data format
        :param extra_data: extra data to include with tree
        :param map_location: str device to put data

        """
        super().__init__()
        assert N >= 2
        assert depth_limit >= 0
        self.N : int = N
        self.data_dim : int = data_dim

        if init_refine > 0:
            for i in range(1, init_refine + 1):
                init_reserve += (N ** i) ** 3
        
        # Here N is the voxel size. 
        self.register_parameter("data", nn.Parameter(torch.zeros(init_reserve, N, N, N, data_dim, device=map_location)))
        self.register_buffer("child", torch.zeros(init_reserve, N, N, N, dtype=torch.int32, device=map_location))

How is your octree implemented? Does it stay sparse during construction, and does it support large-scale scenes?
Looking forward to your answer.

Building CUDA extensions failed

The build process did not complete when I ran the command

chmod +x build_ext.sh && ./build_ext.sh

This error was raised:

TypeError: expected string or bytes-like object
./build_ext.sh: line 3: cd: sol_nglod: No such file or directory

The full output is as follows:

No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-12.0'
running clean
'build/lib.linux-x86_64-cpython-38' does not exist -- can't clean it
'build/bdist.linux-x86_64' does not exist -- can't clean it
'build/scripts-3.8' does not exist -- can't clean it
running install
/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
writing mesh2sdf.egg-info/PKG-INFO
writing dependency_links to mesh2sdf.egg-info/dependency_links.txt
writing top-level names to mesh2sdf.egg-info/top_level.txt
reading manifest file 'mesh2sdf.egg-info/SOURCES.txt'
adding license file 'NOTICE'
writing manifest file 'mesh2sdf.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
Traceback (most recent call last):
  File "setup.py", line 13, in <module>
    setup(
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
    return run_commands(dist)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
    dist.run_commands()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
    self.run_command(cmd)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/install.py", line 74, in run
    self.do_egg_install()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/install.py", line 123, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 165, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 151, in call_command
    self.run_command(cmdname)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 112, in build
    self.run_command('build_ext')
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run
    _build_ext.run(self)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
    self.build_extensions()
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 436, in build_extensions
    self._check_cuda_version(compiler_name, compiler_version)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 813, in _check_cuda_version
    torch_cuda_version = packaging.version.parse(torch.version.cuda)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/pkg_resources/_vendor/packaging/version.py", line 49, in parse
    return Version(version)
  File "/home/quantum/anaconda3/envs/nglod/lib/python3.8/site-packages/pkg_resources/_vendor/packaging/version.py", line 264, in __init__
    match = self._regex.search(version)
TypeError: expected string or bytes-like object
./build_ext.sh: line 3: cd: sol_nglod: No such file or directory

Question about modeling a 3D shape using marching cubes

Hi,
Thanks for your great work. I trained an OctreeSDF model with LOD 5 and want to run marching cubes similar to SIREN. Unfortunately, it doesn't work: the output .ply model has no shape, just noise throughout the space.
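
For context, a typical marching-cubes extraction over a trained SDF looks roughly like this sketch (assuming net is a trained model exposing an sdf(points) method, as the models in lib/models do, and that a recent scikit-image is installed; grid resolution and batching are kept simple for clarity):

import torch
from skimage import measure  # scikit-image, with measure.marching_cubes available

N = 128  # grid resolution (evaluate in batches if this exhausts GPU memory)
xs = torch.linspace(-1.0, 1.0, N)
# Build an N x N x N grid of query points; meshgrid uses 'ij' indexing by default here.
grid = torch.stack(torch.meshgrid(xs, xs, xs), dim=-1).reshape(-1, 3).cuda()
with torch.no_grad():
    d = net.sdf(grid).reshape(N, N, N).cpu().numpy()
# Extract the zero level set; note the vertices come back in voxel units.
verts, faces, normals, values = measure.marching_cubes(d, level=0.0)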

Sorry for my broken English.
Looking forward to your answer.

Storage problem of your paper

What is included in the storage (KB) of the different LODs in Table 1 of your paper? Just the octree-related data structures saved in the weight file?
Looking forward to your reply!

Error in Custom extensions

Hello everyone!

I get this warning when running python3 setup.py install --user to build the extension:

  • nvcc warning: The -std=c++14 flag is not supported with the configured host compiler. Flag will be ignored.

Have you ever encountered this sort of problem and do you maybe know how to fix this?

Metrics

Hi @tovacinni, thx for your excellent work!
After reading your paper: you sample 131072 points for the Chamfer distance (in B.2, Reconstruction Metrics). I want to figure out how many points are sampled for gIoU.
Thanks again.

Args to Export .npz File

Does anyone know how to set the args to successfully run with --export PATH/TO/FILE.npz? It looks like there are a bunch of variables in lib/tracer/SphereTracer.py that are None during rendering, such as self.num_steps, self.camera_clamp, self.min_dis, etc.

Renderer app is broken

The sdf_renderer.py example from the README.md is currently broken;
it seems the app is outdated compared to the training example in main.py.

In particular, Renderer() requires a tracer argument (and the from lib.tracer import * needed to construct a SphereTracer is missing at the top).

There are other issues beyond that (Renderer.eval() is called, but the class is not a torch module).

I'd use the C++ renderer, but I'm looking for a non-default setting (3 LODs), which is not currently supported by the real-time renderer as far as I understand.

Are there any workarounds or easy fixes around that?

Thanks in advance!

Error running TrimMesh

Hi, thank you for releasing the code! I've been looking into replicating your results; however, I'm running into an issue when preprocessing the input mesh, specifically while running the trimmesh operation here (it returns only False elements).

I'm using the armadillo mesh as input and running PyTorch 1.8 and CUDA 11.1, and as far as I can tell I managed to build all the extensions successfully. Have you seen this before? And would it be possible to upload the "normalized" version of the mesh so I can check whether the rest of the pipeline works? Thanks a lot for the help and for your work!

Installation Help

Any advice on how to get nglod running on NVIDIA Ampere GPUs? I tried to install and run on an Ubuntu 20.04, CUDA 11.5, A4000 16 GB system, but it doesn't work.
I don't think PyTorch 1.6 supports Ampere. Would it matter if I installed another version of PyTorch?

Crash using Kaolin SPC

Hi there,

First of all congrats on your amazing work and for sharing it!

I am trying to run the Kaolin SPC implementation (the app/spc folder) on the Armadillo, only on the last LOD level (--return-lst).
I am using:

  • Ubuntu 18.04
  • pytorch 1.9.0
  • Kaolin 0.9.1

I had to make a few changes first:

  • change the number of samples in mesh_to_octree() down to 1,000,000 vertices to run it on my little 1080 Ti
  • change the size of the split in the SPCDataset to 10e6 to be compatible with the new sample size (as far as I understood).

Now I am getting a crash in the SPC.interpolate() method.
Unfortunately, it happens in CUDA, so the error log is not very meaningful (see below).
I tracked it down to the following issue:
in line 97 of app/spc/SPC.py:
return self._interpolate(coeffs, self.features[lod][self.trinkets[pidx]])

self.trinkets[pidx] (resulting from the query) contains indices that are higher than the size of self.features[lod] where lod is the last level.

It should be easy to reproduce by downloading the Armadillo obj and running the following command:
python app/spc/main_spc.py --num-lods 5 --epoch 250 --exp-name armadillo --mesh-path data/armadillo.obj --return-lst

Thanks in advance,

Pierre

Here is my log up to the crash:
[23/09 10:35:30] [INFO] Parameters:

           'l2_loss': 1.0,
           'mesh_path': 'data/armadillo.obj',
           'normalize_mesh': False},
  'dataset': { 'analytic': False,
               'block_res': 7,
               'build_dataset': False,
               'dataset_path': None,
               'exclude': None,
               'get_normals': False,
               'glsl_path': '../sdf-viewer/data-files/sdf',
               'include': None,
               'mesh_batch': False,
               'mesh_dataset': 'MeshDataset',
               'mesh_subset_size': -1,
               'num_samples': 100000,
               'raw_obj_path': None,
               'sample_mode': ['rand', 'near', 'near', 'trace', 'trace'],
               'sample_tex': False,
               'samples_per_voxel': 256,
               'train_valid_split': None,
               'trim': False,
               'viewer_path': '../sdf-viewer'},
  'global': { 'debug': False,
              'exp_name': 'armadillo',
              'ngc': False,
              'perf': False,
              'seed': None,
              'valid_every': 1,
              'valid_only': False,
              'validator': None},
  'net': { 'base_lod': 2,
           'feat_sum': False,
           'feature_dim': 32,
           'feature_size': 4,
           'ff_dim': -1,
           'ff_width': 16.0,
           'freeze': -1,
           'hidden_dim': 128,
           'jit': False,
           'joint_decoder': False,
           'joint_feature': False,
           'net': 'OverfitSDF',
           'num_layers': 1,
           'num_lods': 5,
           'periodic': False,
           'pos_enc': False,
           'pos_invariant': False,
           'pretrained': None,
           'skip': None},
  'optimizer': { 'grad_method': 'finitediff',
                 'loss': ['l2_loss'],
                 'lr': 0.001,
                 'optimizer': 'adam'},
  'optional arguments': {'help': None},
  'positional arguments': {},
  'renderer': { 'ao': False,
                'camera_clamp': [-5, 10],
                'camera_fov': 30,
                'camera_lookat': [0, 0, 0],
                'camera_origin': [-2.8, 2.8, -2.8],
                'camera_proj': 'persp',
                'ground_height': None,
                'interpolate': None,
                'lod': None,
                'matcap_path': 'data/matcap/green.png',
                'min_dis': 0.0003,
                'num_steps': 256,
                'render_batch': 0,
                'render_every': 1,
                'render_res': [512, 512],
                'shading_mode': 'matcap',
                'shadow': False,
                'sol': False,
                'step_size': 1.0,
                'tracer': 'SphereTracer'},
  'trainer': { 'batch_size': 512,
               'epochs': 250,
               'grow_every': -1,
               'growth_strategy': 'increase',
               'latent': False,
               'latent_dim': 128,
               'logs': '_results/logs/runs/',
               'loss_sample': -1,
               'model_path': '_results/models',
               'only_last': False,
               'resample_every': 10,
               'return_lst': True,
               'save_all': False,
               'save_as_new': False,
               'save_every': 1}}
[23/09 10:35:30] [INFO] Training on None
[23/09 10:35:31] [INFO] Using GeForce GTX 1080 Ti with CUDA v11.1
[23/09 10:35:31] [INFO] Active LODs: [2, 2, 3, 4, 5]
[23/09 10:35:31] [INFO] Built dual octree and trinkets
[23/09 10:35:31] [INFO] # Feature Vectors: 9243
[23/09 10:35:32] [INFO] Total number of parameters: 317541
[23/09 10:35:32] [INFO] Block Indices: [0]
[23/09 10:35:32] [INFO] Model configured and ready to go
[23/09 10:35:32] [INFO] Initializing dataset...
[23/09 10:36:15] [INFO] Active Block IDX: 0
[23/09 10:36:15] [INFO] Resampling...
[23/09 10:36:15] [INFO] Permuted Samples
[23/09 10:36:15] [INFO] Reset DataLoader
/opt/conda/conda-bld/pytorch_1631630839582/work/aten/src/ATen/native/cuda/IndexKernel.cu:97: operator(): block: [151,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.

Mismatched model size for different LODs

I trained the model using the instructions provided in the README. To validate the model sizes specified in the paper, I split the model into separate models for each LOD using the code below:

import os
import torch
import torch.nn as nn
from lib.trainer import Trainer
from lib.models import OctreeSDF
from lib.options import parse_options
if __name__ == "__main__":
    # TODO: For every feature save model in a folder and check sizes
    # TODO: Find corresponding model
    pre_trained_path = "_results/models/armadillo.pth"
    models_path = '/'.join(pre_trained_path.split('/')[:-1])
    print(models_path)
    parser = parse_options(return_parser=True)
    args = parser.parse_args()
    use_cuda = torch.cuda.is_available()
    device = torch.device('cuda' if use_cuda else 'cpu')
    print(device)
    net = globals()['OctreeSDF'](args)
    # For faster inference
    if args.jit:
        net = torch.jit.script(net)
    net.load_state_dict(torch.load(pre_trained_path))
    net.to(device)
    net.eval()
    # Extract the features and decoders for all levels of detail
    lods = net.features
    decoders = net.louts
    # Here we are using number of lods to save every LOD model separately
    for lod_n in range(args.num_lods):
        print(f'{lod_n} Done!')
        net.features = nn.ModuleList([lods[lod_n]])
        net.louts = nn.ModuleList([decoders[lod_n]])
        # Change number of lods for each model
        net.num_lods = 1
        torch.save(net.state_dict(), os.path.join(models_path, f"test_lod_{lod_n}.pth"))

And got the following model sizes:

-rw-rw-r-- 1 user user  37K тра  2 22:43 test_lod_0.pth
-rw-rw-r-- 1 user user 112K тра  2 22:43 test_lod_1.pth
-rw-rw-r-- 1 user user 635K тра  2 22:43 test_lod_2.pth
-rw-rw-r-- 1 user user 4,5M тра  2 22:43 test_lod_3.pth
-rw-rw-r-- 1 user user 34M тра  2 22:43 test_lod_4.pth

Where could I be wrong?

Modeling multiple shapes & differentiable renderer

Thank you for this great work and the well-written paper and congrats on the talk!

I have two questions I was hoping you could clarify:

  1. I understand that your experiments consist of overfitting individual shapes. Have you tried modeling multiple shapes by means of an extension of or alternative to this idea (that is featured prominently in DeepSDF):
    "An optional input “shape” feature vector z∈Rm can be used to condition the network to fit different shapes with a fixed θ"

  2. You write "As such, we can leverage the same techniques proposed in these works to make our renderer also differentiable." Does that mean that this is possible future work or is it already implemented? I might have missed it in the paper, sorry if that's the case!

Thanks a lot in advance!

Code readability

Hi, thanks for releasing this excellent work. I want to understand the real-time rendering, but I found no hints to help me understand the code, and the paper also lacks discussion of implementation details. For example, what is SPC, and how does it work? Is there a reference document I could consult? Thanks again.
