
tensorf's Introduction

TensoRF

This repository contains a PyTorch implementation of the paper TensoRF: Tensorial Radiance Fields. Our work presents a novel approach to modeling and reconstructing radiance fields that achieves super-fast training, a compact memory footprint, and state-of-the-art rendering quality.

train_process.mp4

Installation

Tested on Ubuntu 20.04 with PyTorch 1.10.1

Install environment:

conda create -n TensoRF python=3.8
conda activate TensoRF
pip install torch torchvision
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard

Dataset

Quick Start

The training script is train.py. To train a TensoRF model:

python train.py --config configs/lego.txt

We provide a few examples in the configs folder; please note:

dataset_name, choices = ['blender', 'llff', 'nsvf', 'tankstemple'];

shadingMode, choices = ['MLP_Fea', 'SH'];

model_name, choices = ['TensorVMSplit', 'TensorCP'], corresponding to the VM and CP decompositions. You need to uncomment the last few lines of the configuration file if you want to train with the TensorCP model;

n_lamb_sigma and n_lamb_sh are string-typed lists that specify the number of basis components for density and appearance along the XYZ dimensions;

N_voxel_init and N_voxel_final control the initial and final resolution of the matrix and vector factors;

N_vis and vis_every control the visualization during training;

You need to set --render_test 1 / --render_path 1 if you want to render testing views or a camera path after training.

For more options, refer to opt.py.
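
For reference, the options above usually appear together in one configuration file. The snippet below is only an illustrative sketch with placeholder values, not the exact contents of configs/lego.txt:

    # illustrative example; adjust paths and values for your setup
    dataset_name = blender
    datadir = ./data/nerf_synthetic/lego
    expname = tensorf_lego_VM
    model_name = TensorVMSplit
    shadingMode = MLP_Fea
    n_lamb_sigma = [16,16,16]
    n_lamb_sh = [48,48,48]
    # roughly 128^3 initial voxels and 300^3 final voxels
    N_voxel_init = 2097152
    N_voxel_final = 27000000
    N_vis = 5
    vis_every = 10000
    render_test = 1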

For pretrained checkpoints and results please see:

https://1drv.ms/u/s!Ard0t_p4QWIMgQ2qSEAs7MUk8hVw?e=dc6hBm

Rendering

python train.py --config configs/lego.txt --ckpt path/to/your/checkpoint --render_only 1 --render_test 1 

Simply pass --render_only 1 and --ckpt path/to/your/checkpoint to render images from a pre-trained checkpoint. You may also need to specify what you want to render, such as --render_test 1, --render_train 1 or --render_path 1. The rendering results are saved in your checkpoint folder.

Extracting mesh

You can also export the mesh by passing --export_mesh 1:

python train.py --config configs/lego.txt --ckpt path/to/your/checkpoint --export_mesh 1

Note: please re-train the model rather than using our pretrained checkpoints for mesh extraction, because some rendering parameters have changed.

Training with your own data

We provide two options for training on your own image set:

  1. Follow the instructions in the NSVF repo, then set dataset_name to 'tankstemple'.
  2. Calibrate the images with the script from NGP: python dataLoader/colmap2nerf.py --colmap_matcher exhaustive --run_colmap, then adjust the datadir in configs/your_own_data.txt. Please check the scene_bbox and near_far if you get abnormal results (see the sketch below).
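
If you go with option 2 and see abnormal results, the two values worth double-checking are the scene bounding box and the near/far range your data loader assigns to the scene. A rough illustration with placeholder values (where exactly these live depends on the data loader you use, e.g. dataLoader/your_own_data.py):

    import torch

    # axis-aligned box that should tightly enclose the object,
    # in the same coordinate frame as the COLMAP/NGP poses
    scene_bbox = torch.tensor([[-1.5, -1.5, -1.5],
                               [ 1.5,  1.5,  1.5]])

    # near/far sampling range along each ray: too large wastes samples,
    # too small clips the scene
    near_far = [0.1, 6.0]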

Citation

If you find our code or paper helpful, please consider citing:

@INPROCEEDINGS{Chen2022ECCV,
  author = {Anpei Chen and Zexiang Xu and Andreas Geiger and Jingyi Yu and Hao Su},
  title = {TensoRF: Tensorial Radiance Fields},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2022}
}

tensorf's People

Contributors

apchenstu


tensorf's Issues

Wonderful work, but a dumb question

TensoRF looks small and trains very fast. Is it possible to merge the final RGBD results into a single point cloud? If so, how? It would be very helpful if a final point cloud could be generated.
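
Not something the repo provides, but one way to get a point cloud is to back-project each rendered RGB + depth pair with its camera intrinsics and pose, then concatenate the points from all views. A minimal NumPy sketch, assuming a pinhole intrinsics matrix K and a 4x4 camera-to-world pose c2w per rendered view (depending on the camera convention you may need to flip the y/z axes):

    import numpy as np

    def rgbd_to_points(rgb, depth, K, c2w):
        """Back-project an H x W RGB image and depth map into world-space points."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        z = depth.reshape(-1)
        x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
        y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
        pts_cam = np.stack([x, y, z], axis=-1)             # camera-space points
        pts_world = pts_cam @ c2w[:3, :3].T + c2w[:3, 3]   # apply camera-to-world transform
        return pts_world, rgb.reshape(-1, 3)

    # merging all views is then just concatenating the per-view points and colors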

Great work!

The work is great; I have benefited a lot from it.

Why SDF to ply?

Hi, thank you for your great work!
I noticed that there is a function named convert_sdf_samples_to_ply in utils.py when I tried to extract a mesh. Could you please tell me how you get an SDF from alpha? I did not find this in the paper.

Thank you for your time.

how to get the mesh with color?


I use export_mesh. I want to get a mesh with color, so what should I do? I notice that the author uses marching_cubes to get verts, so is there any way to assign colors to the verts? @apchenstu

thanks
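
One approach (a rough sketch, not a feature of the repo) is to query the trained radiance field at each marching-cubes vertex with a fixed view direction and store the predicted RGB as a vertex color. This assumes the vertices have already been mapped back to world coordinates inside the scene aabb, and that the model exposes normalize_coord, compute_appfeature and renderModule as in tensorBase.py:

    import torch

    @torch.no_grad()
    def colorize_vertices(tensorf, verts, chunk=8192):
        """Predict an RGB color for each world-space mesh vertex (verts: (N, 3) array)."""
        verts = torch.as_tensor(verts, dtype=torch.float32, device=tensorf.device)
        view_dir = torch.tensor([[0.0, 0.0, -1.0]], device=tensorf.device)  # arbitrary fixed direction
        colors = []
        for pts in torch.split(verts, chunk):
            pts_norm = tensorf.normalize_coord(pts)          # map into the normalized grid coordinates
            feats = tensorf.compute_appfeature(pts_norm)
            rgb = tensorf.renderModule(pts_norm, view_dir.expand(pts.shape[0], -1), feats)
            colors.append(rgb.cpu())
        return torch.cat(colors).clamp(0, 1)  # attach to the PLY as per-vertex colors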

Wrong PSNR value by the in-place operator?

Hi, @apchenstu

Thank you for sharing this really great idea!
It's very helpful for me to develop other ideas related to Radiance Fields.

By the way, I just noticed that you might get a wrong PSNR value because of the in-place operator (+=).
You assign the image loss variable loss to total_loss and then increase it with the += operator.
This modifies the original image loss variable loss, resulting in a wrong PSNR; in fact, smaller than the true value.
You can check this at my colab.

Thank you,
Sangmin Kim.

TensoRF/train.py

Lines 190 to 198 in 17deeed

total_loss = loss
if Ortho_reg_weight > 0:
    loss_reg = tensorf.vector_comp_diffs()
    total_loss += Ortho_reg_weight * loss_reg
    summary_writer.add_scalar('train/reg', loss_reg.detach().item(), global_step=iteration)
if L1_reg_weight > 0:
    loss_reg_L1 = tensorf.density_L1()
    total_loss += L1_reg_weight * loss_reg_L1
    summary_writer.add_scalar('train/reg_l1', loss_reg_L1.detach().item(), global_step=iteration)
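
One possible fix (a minimal sketch using the issue's variable names, not necessarily how the authors resolved it) is to start the accumulator from a fresh tensor, so the image loss used for the PSNR stays unmodified:

    # start from a copy so the in-place += below no longer aliases the image loss
    total_loss = loss.clone()
    if Ortho_reg_weight > 0:
        loss_reg = tensorf.vector_comp_diffs()
        total_loss += Ortho_reg_weight * loss_reg
    # ... other regularizers accumulate into total_loss in the same way ...
    # PSNR can then be computed from the untouched image loss:
    # PSNR = -10.0 * np.log10(loss.detach().item())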

Memory Required for Training

How much GPU memory is required for training? I am using RTX 2080, 11GB. I tried to train on the lego dataset using the config file provided, and I get a memory error.

  • What parameters need to be changed in the config file to train on low memory?

evaluation_path rendering

Hello Chen,
It seems that in renderer.py, the function evaluation_path() does not cover ray sampling in the same way you do separately in the blender.py and llff.py data loaders (in blender.py you do not use ndc_rays_blender, while in llff.py you do).
I think a quick fix to make args.render_path=1 work for both loaders would be to change:
if ndc_ray:
to
if ndc_ray and 'blender' not in str(type(test_dataset)):

Optimizing compute_appfeature?

Hey team,

So I'm building a renderer for TensoRF and am working on optimizing the code in this repo so that it can run in real time.

I wrote a (I think) vectorized implementation of TensorVMSplit.compute_appfeature() that leverages NumPy functions. I tested it on an AWS EC2 instance (a g3s.xlarge), and it seems to run without error.

I'm just looking for feedback - is this a good direction to pursue? Do folks know if there are potentially easier/better ways to run this function without hitting an out-of-memory error?

def compute_appfeature(self, xyz_sampled):
        """
        Returns the appearance feature vectors for a set of XYZ locations.

        Parameters:
            xyz_sampled: multi-dimensional Tensor. Last dim should have a shape of 3.

        Returns: multidimensional tensor. 
            Last dim will have same shape as data_dim_color
        """
        def compute_factors(idx_plane, grid_mode='plane'):
            """
            Helper function used to compute the factors used for 
            vector-matrix decomposition.

            Parameters:
                idx_plane (int): points to either the XY, XZ, or YZ planes
                grid_mode (str): specifies whether we want a 
                                 matrix/vector factor

            Returns: torch.Tensor: the factor needed for VM decomposition
            """
            if grid_mode == 'plane':
                grid = coordinate_plane               # matrix (plane) coordinates, defined below
                source = self.app_plane[idx_plane]    # matrix factor
            else:  # grid_mode == 'line'
                grid = coordinate_line                # vector (line) coordinates, defined below
                source = self.app_line[idx_plane]     # vector factor

            factor = F.grid_sample(
                source.cpu(),
                grid[[idx_plane]],
                align_corners=True,
            ).view(-1, *xyz_sampled.shape[:1])

            return factor

        ### MAIN CODE
        xyz_sampled = xyz_sampled.to(device="cpu")

        ...  # unchanged code

        # figure out the vector-matrix outer products, trying vectorization
        app_plane_indices = np.array(list(range(len(self.app_plane))))
        compute_VM_factors = np.vectorize(compute_factors, otypes=[torch.Tensor])

        plane_coef_point = compute_VM_factors(app_plane_indices, 'plane')  # 1D np.ndarray of 2D Tensors
        plane_coef_point = torch.cat(list(plane_coef_point)).to(device=self.device)  # 2D Tensor

        # same type of object as plane_coef_point
        line_coef_point = compute_VM_factors(app_plane_indices, 'line')  # same as above
        line_coef_point = torch.cat(list(line_coef_point)).to(device=self.device)

        return self.basis_mat((plane_coef_point * line_coef_point).T)
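
A simpler alternative to full vectorization, if the goal is only to stay within memory, might be to evaluate the existing function in fixed-size chunks (a hypothetical helper, assuming compute_appfeature accepts arbitrary batches of points):

    import torch

    def compute_appfeature_chunked(model, xyz_sampled, chunk=65536):
        """Evaluate compute_appfeature chunk by chunk to bound peak memory."""
        outs = [model.compute_appfeature(pts) for pts in torch.split(xyz_sampled, chunk)]
        return torch.cat(outs, dim=0)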

distance_scale 25

Hi thanks a lot for releasing the code for the nice work

distance_scale=25 is used at train and test time for rendering, but not for the alpha mask update or mesh extraction. I can't seem to find any discussion of this hyperparameter in the main text, and I wonder whether you could provide a little more explanation.

It appears that using distance_scale for mesh extraction results in noisier geometry with more floaters, while not applying distance_scale during training would lead to divergence. Thanks again.
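
For context, distance_scale enters the rendering roughly as a multiplier on the per-sample step length before the density is converted to opacity (a paraphrase of the raw2alpha logic in tensorBase.py, not a verbatim copy):

    import torch

    def raw2alpha_sketch(sigma, dists, distance_scale=25.0):
        # a larger distance_scale makes the same density produce higher per-step opacity
        return 1.0 - torch.exp(-sigma * dists * distance_scale)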

Arguments to the model

Thank you for the code. Could you please add some comments explaining what the arguments to the model in TensorBase mean?

How to evaluate TensoRF on a dense RGBA grid?

Hello - I noticed there is a function getDenseAlpha() in tensorBase.py that outputs the alpha values TensoRF would predict on a dense 3D voxel grid.

I am wondering if it would be a good idea to extend this function to also output the RGB values for each cell in the grid (maybe call it getDenseRGBA())? I am thinking we could use such a grid to build a real-time renderer (#7), in addition to having some kind of acceleration structure.

Do you perhaps have an idea on how this could work @apchenstu ?

Sharing pretrained weights on Hugging Face

Hello there! First of all, thank you for open-sourcing your work!

I saw that the pretrained checkpoints are hosted on OneDrive – would you be interested in sharing your models on the Hugging Face Hub?

The Hub makes it easy to freely download and upload models, and it can make models more accessible and visible to the rest of the ML community. It's a good way to share useful metadata and metrics, and we also support features like TensorBoard visualizations and PapersWithCode integrations. Since models are hosted as Git repos, they're also automatically versioned with a commit history and diffs. We could even help you set up an organization (e.g. see the Facebook AI or Stanford NLP organizations).

We have a step-by-step guide that explains the process for uploading the model to the Hub, in case you're interested. We also have a library for programmatic access to uploading and downloading models, which includes features like caching for downloaded models.

Please let us know if you have any questions, and we'd be happy to guide you through the process!

Nima and the Hugging Face team

cc @osanseviero @lhoestq

About the Radiance Field factorization

Hello, this is novel, nice work. I'm very impressed by the results and your idea.

But I have a couple of small questions:

  1. Why did you split the radiance field grid into density and appearance separately? Could we design a single factored grid and regress both density and appearance with an MLP?

  2. What was the insight behind using the appearance vector (vector b_r in the paper)? As far as I know, the original NeRF did not use an additional appearance feature vector at all.

Thanks for sharing great code!

Setting near_far when working with ScanNet scenes

Hi @apchenstu, I am trying to train TensoRF on a ScanNet scene. I calculate the scene bounding box using the scene mesh provided in the dataset. For example, for scene0000_00 in ScanNet, the bounding box comes out to [[-1.0176, -1.0018, -1.0003], [11.3742, 9.7380, 4.0293]]. However, TensoRF's performance on the test set is quite bad. The visualizations look something like this:
[screenshot of a rendered test view]

You can see that it can render the chair legs (see bottom left), but overall rendering is quite bad. I tried different far values (e.g. 5.0, 10.0, 100.0) but none of them seem to work with TensoRF.

But when I trained a NeRF model with Instant-NGP, far value of 10.0 had worked. So, I am not sure what I am missing here. Can you please advise?

Interestingly, training PSNR reaches a high value of 25, but test PSNR is very low around 9 or 10.

Thanks,
Yash

Very low speed on a 3060, Windows 10

I followed the command you provided; the code runs successfully, but the speed is terribly slow...

Iteration 0040: train_psnr = 15.01 test_psnr = 0.00 mse = 0.025878:   0%|      | 42/30000 [08:30<544:56:04,  65.48s/it]

And then I tried using

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

provided by h-OUS-e in the issues, but still got very low speed...

Iteration 00150: train_psnr = 21.94 test_psnr = 0.00 mse = 0.006579:   1%|      | 160/30000 [10:27<25:26:26,  3.07s/it]

Renderer issue

Thank you for your great work!

BTW, when I ran TensoRF on the LLFF fern data, I got an error and solved it.
I want to report this.

The error was like this:

Traceback (most recent call last):
File "train.py", line 301, in
reconstruction(args)
File "train.py", line 225, in reconstruction
prtx=f'{iteration:06d}_', N_samples=nSamples, white_bg = white_bg, ndc_ray=ndc_ray, compute_extra_metrics=False)
File "/home/asc/PycharmProjects/nerf-pytorch/venv/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/home/asc/PycharmProjects/TensoRF/renderer.py", line 38, in evaluation
idxs = list(range(0, test_dataset.all_rays.shape[0], img_eval_interval))
ValueError: range() arg 3 must not be zero

So, I modified renderer.py like this

img_eval_interval = 1 if N_vis < 0 else test_dataset.all_rays.shape[0] // N_vis
if img_eval_interval == 0:
    img_eval_interval = 1                   # add these 2 lines

And it worked.
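
An equivalent one-line version of the same guard (just a suggestion, not the upstream fix):

    img_eval_interval = 1 if N_vis < 0 else max(test_dataset.all_rays.shape[0] // N_vis, 1)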

Low speeds on a 3080ti

Hello,

Firstly thank you for sharing a very clear implementation of your work! It is great!
For some reason, my training is very slow, and I was wondering if I must include something in the training command to achieve speeds as fast as the ones you show. Right now, it has been more than 15 minutes and I am only on iteration 460, with a PSNR of 24.92, and the loading bar says 2%.

I am on Windows 10 with a 3080ti gpu. I followed the instructions for downloading and installing the packages in the conda environment as specified.

Any help is appreciated, thank you!

Missing references

Hi - hardly an issue, more some observations: you need to add kornia, lpips and tensorboard to your list of dependencies to run train.py. Also, the version of PyTorch you install needs to play nicely with the locally installed version of the CUDA Toolkit - if the installation fails, you're pointed to a website with appropriate instructions, so there's no need to reproduce those here. Thanks for sharing this fantastic work!

I can't find appearance matrix B in code

Hi, thanks for making this great code and paper available! I really enjoy it.

In the TensoRF paper (see the figure attached), the appearance values A_c(x) are concatenated and then multiplied by the appearance matrix B, and the result is sent into the decoding function S for RGB color regression.

But in this code ,

def compute_appfeature(self, xyz_sampled):

I can't find the appearance matrix B.
I understand that plane_coef_point is matrix M and line_coef_point is vector v.

From this line,

return self.basis_mat((plane_coef_point * line_coef_point).T)

M and v are multiplied and then go into basis_mat, which is nn.Linear(144, 27).
This 27-dimensional output then goes into the positional encoding block and the feature decoding function S.

During this process, I can't find the appearance matrix B mentioned in the paper.
Is self.basis_mat matrix B?
If it is not, where is matrix B, and what is self.basis_mat?

Example render command in readme gives size mismatch error.

Note that I think this is different to #2 as I'm just trying to render the examples without training them. I might be misunderstanding though.

I've downloaded the dataset and pretrained checkpoints for Synthetic Nerf.

python train.py --config configs/lego.txt --ckpt checkpoints/lego.th --render_only 1 --render_test 1

gives

size mismatch for basis_mat.weight: copying a param with shape torch.Size([27, 288]) from checkpoint, the shape in current model is torch.Size([27, 864]).

Module plyfile missing from the dependencies?

Using the conda and pip setup commands from the readme leaves me with this error:

(TensoRF) snellius paulm@gcn37 11:52 ~/c/TensoRF$ python train.py --config configs/steps_with_stuff.txt 
Traceback (most recent call last):
  File "train.py", line 9, in <module>
    from renderer import *
  File "/gpfs/home4/paulm/c/TensoRF/renderer.py", line 5, in <module>
    from utils import *
  File "/gpfs/home4/paulm/c/TensoRF/utils.py", line 159, in <module>
    import plyfile
ModuleNotFoundError: No module named 'plyfile'

Adding it with pip install plyfile seems to fix the issue.

Error when running with TensorCP

Hello authors, thank you for your great work.

I am trying to train a model on the lego dataset with TensorCP, but it is not working. It trains, but the PSNR does not increase above 9, and then it errors out when updating the alpha mask:

...
initial TV_weight density: 0.0 appearance: 0.0
Iteration 02000: train_psnr = 9.34 test_psnr = 0.00 mse = 0.121434:   7%|█████▎                                                                         | 2000/30000 [00:31<07:25, 62.87it/s]
Traceback (most recent call last):
  File "train.py", line 301, in <module>
    reconstruction(args)
  File "train.py", line 234, in reconstruction
    new_aabb = tensorf.updateAlphaMask(tuple(reso_mask))
  File "/users/lukemk/miniconda3/envs/new/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/tritonhomes/lukemk/projects/experiments/active/TensoRF/models/tensorBase.py", line 330, in updateAlphaMask
    xyz_min = valid_xyz.amin(0)
IndexError: amin(): Expected reduction dim 0 to have non-zero size.

For context, I can run with TensorVMSplit without any issues (and I get the expected PSNR). I probably just missed some parameter that needs to be passed to make TensorCP work.

Thanks again for your work!

Questions about hyperparameters

Hi, I used the NGP way of preparing my own data, but the results are blurry after training. Could you please provide some hints about how to set the scene_bbox and near_far params for a specific dataset? Thanks

Questions about the DTU dataset.

Hi Anpei! Thank you very much for the outstanding work.

I have some problems training on the DTU dataset. For the dtu_83 scene, I mask out the background and then use 58 images for training and 6 for testing. The testing results are poor (two rendered test views attached).

I notice that in opt.py, 'dtu' is one of the choices for the dataset_name argument. Have you trained on the DTU dataset before? Could you give some instructions?

near_far and bbox for DTU

Has anyone tested it on DTU? What values should be used for near_far and bbox for DTU? I have tried several values and got abnormal results every time.

About real-time rendering

Thanks for your great work! I see that you have provided SHRender() in tensorBase.py. Is it possible to cache xyz_features and SH parameters like Plenoxels? Thanks in advance!

Implementation of eval()

Hi,
I cannot find the implementation of the function eval():
eval(args.model_name)

Can you point me to the related code?

Floaters in free space with limited input data

Hi Anpei,

I ran a modified version of TensoRF on my own dataset with only 12~20 images as input. The convergence and quality are really nice, but I see quite a lot of floaters in empty space (most seem to stick to the boundary of the bounding box). In addition, the boundary of the object is quite noisy in some novel views. I wonder if you have encountered these problems before and if you have any suggestions for fixing them? Thanks in advance!

Calculation of CP decomposition and other questions

Hello,

I had a question regarding how you calculate the CP decomposition. Is the way you calculate it in your code similar to the SVD method provided in NumPy or torch?

Another question: does the geometry (density) grid get calculated without any optimization? In the code it doesn't seem to be passed through an MLP, and I was wondering how the density is learned by this model.

Thanks for the amazing work!

What is test_dataset.render_path?

Hello, I wanted to ask about a small implementation detail - what is stored in the test_dataset.render_path variable? And how is it initialized?

Where this becomes an issue: when I pass --render_only 1 --render_test 1 --render_path 1 to train.py (happy to give the full command if needed), it raises an AttributeError on line 84:

c2ws = test_dataset.render_path

I think it is because test_dataset.render_path is None for me.

I can at least see that test_dataset.render_path is referenced in both the reconstruction() and render_test() functions in train.py, but I could not locate where exactly it is initialized in any of the dataLoader classes.

Thank you for any insight on this.

How to decide the value of "scene_bbox" in other datasets?

I want to apply this model to other datasets like ShapeNet, but I don't know the exact size of the scene bounding box or where the box should be placed, because this does not seem to be given in the dataset. How do I choose the value of the scene_bbox parameter for other datasets?
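
One rough heuristic (not from the repo) is to derive the box from the camera centers, assuming the object sits near the origin inside the camera rig; for ShapeNet-style renders where the model is normalized to a unit cube, a fixed box such as [[-1, -1, -1], [1, 1, 1]] is also a reasonable starting point:

    import numpy as np

    # poses: array of 4x4 camera-to-world matrices for the training views (assumed available)
    cam_centers = np.stack([pose[:3, 3] for pose in poses])
    radius = np.linalg.norm(cam_centers, axis=1).mean()    # mean camera distance from the origin
    scene_bbox = np.array([[-radius] * 3, [radius] * 3])   # generous cube; shrink it if much of it is empty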

How to build a real-time renderer with TensoRF?

Hello @apchenstu,

I was wondering if it might be possible to render 3D models using TensoRF in real time? I don't know if this is the direction your team was planning to go with your paper, but I am curious whether it would be possible to build something similar to what Google built for SNeRG: website link

Thanks for sharing this work with the community!

Wrong link to paper in README

Great work and an inspiring read!

Just a small pointer - the link points to the MVSNeRF paper instead of yours...

about noisy results from own dataset

Hi, I have created my own dataset, but the results look like the attached screenshot; do you have any idea why?

The way I prepared my data was to capture ~200 images 360 degrees around a small object, then run colmap2nerf and split them into train and test sets. I probably need to segment out the object itself, as in the Tanks and Temples dataset, but didn't do so due to time constraints; I did use a clean white background, though. The training process takes around 1 hour and reports train psnr = 24, test psnr = 12, mse = 0.003.

thanks!

"Out of memory: Killed process" when I trained the Tank&Template dataset.

Hello! For NeRF and NVSF datasets I can train normally, but when I do Tank&Template training, it keeps reporting killed, and by checking the logs, I found that it is because of "out of memory". Later, by checking the memory usage through "top", I found that the memory is indeed overflowing, but the gpu memory is still largely remaining.

RuntimeError shapes mismatch

Great work!
I just wanted to try it out on the NeRF synthetic Lego dataset, but got a RuntimeError.

Running python train.py --expname lego --datadir ~/data/nerf/nerf_synthetic/lego yields

Traceback (most recent call last):
  File "train.py", line 303, in <module>
    reconstruction(args)
  File "train.py", line 169, in reconstruction
    rgb_map, alphas_map, depth_map, weights, uncertainty = renderer(rays_train, tensorf, chunk=args.batch_size,
  File "/home/ubuntu/workspace/TensoRF/renderer.py", line 16, in OctreeRender_trilinear_fast
    rgb_map, depth_map = tensorf(rays_chunk, is_train=is_train, white_bg=white_bg, ndc_ray=ndc_ray, N_samples=N_samples)
  File "/home/ubuntu/anaconda3/envs/TensoRF/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/workspace/TensoRF/models/tensorBase.py", line 449, in forward
    valid_rgbs = self.renderModule(xyz_sampled[app_mask], viewdirs[app_mask], app_features)
  File "/home/ubuntu/anaconda3/envs/TensoRF/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/workspace/TensoRF/models/tensorBase.py", line 109, in forward
    rgb = self.mlp(mlp_in)
  File "/home/ubuntu/anaconda3/envs/TensoRF/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/TensoRF/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/ubuntu/anaconda3/envs/TensoRF/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/TensoRF/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4x102 and 105x128)

Did I miss something?

Potential error in computing TV loss?

Hi, thanks for the great work! I'm reading the code and found something weird. In these lines, you compute the TV loss. At line 203, loss_tv is first computed on the density components and then added to total_loss at line 204. However, at line 207, you add the appearance-component TV loss to the previous density TV loss:

loss_tv = loss_tv + tensorf.TV_loss_app(tvreg)*TV_weight_app

and then add them together to total_loss at line 208:

total_loss = total_loss + loss_tv

If I understand correctly, you penalize the density TV loss twice by doing so. While I understand we can adjust its loss weight to mitigate this issue, you may want to fix it.

Another question: you decay the TV loss weight via TV_weight_density *= lr_factor. Did you find this works better than, e.g., using a constant TV loss weight?
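
A sketch of one way to keep the two terms separate so the density TV loss is only added once (using the variable names from the issue; adjust to the repo's actual method names if they differ):

    # compute each TV term independently and add each one to total_loss exactly once
    if TV_weight_density > 0:
        loss_tv_density = tensorf.TV_loss_density(tvreg) * TV_weight_density
        total_loss = total_loss + loss_tv_density
    if TV_weight_app > 0:
        loss_tv_app = tensorf.TV_loss_app(tvreg) * TV_weight_app
        total_loss = total_loss + loss_tv_app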

Multiple camera support

Hi,

Great work!

I am wondering, for my own dataset, is it possible to support different intrinsics for each camera?

In instant-ngp, we are able to do it by modifying transforms.json: NVlabs/instant-ngp#797

In your code, I found that by design it supports only one camera; for example, the read_meta function

def read_meta(self):
just assumes that we only have one kind of intrinsics.

Any suggestions? Thanks a lot!
