nerfstudio-project / nerfstudio

A collaboration friendly studio for NeRFs

Home Page: https://docs.nerf.studio

License: Apache License 2.0

Languages: Python 90.76%, Shell 0.40%, JavaScript 7.15%, HTML 0.14%, SCSS 0.37%, TypeScript 0.89%, Dockerfile 0.29%
Topics: nerf, pytorch, 3d, 3d-graphics, 3d-reconstruction, computer-vision, deep-learning, machine-learning, photogrammetry

nerfstudio's Introduction


nerfstudio

A collaboration friendly studio for NeRFs


About

It’s as simple as plug and play with nerfstudio!

Nerfstudio provides a simple API for an end-to-end process of creating, training, and testing NeRFs. The library supports a more interpretable implementation of NeRFs by modularizing each component. With more modular NeRFs, we hope to create a more user-friendly experience in exploring the technology.

This is a contributor-friendly repo with the goal of building a community where users can more easily build upon each other's contributions. Nerfstudio initially launched as an open-source project by Berkeley students in the KAIR lab at Berkeley AI Research (BAIR) in October 2022 as part of a research project (paper). It is currently developed by Berkeley students and community contributors.

We are committed to providing learning resources to help you understand the basics of NeRFs (if you're just getting started) and keep up to date with the latest developments (if you're a seasoned veteran). As researchers, we know just how hard it is to get onboarded with this next-gen technology. So we're here to help with tutorials, documentation, and more!

Have feature requests? Want to add your brand-spankin'-new NeRF model? Have a new dataset? We welcome contributions! Please do not hesitate to reach out to the nerfstudio team with any questions via Discord.

Have feedback? We'd love for you to fill out our Nerfstudio Feedback Form if you want to let us know who you are, why you are interested in Nerfstudio, or provide any feedback!

We hope nerfstudio enables you to build faster 🔨, learn together 📚, and contribute to our NeRF community 💖.

Sponsors

Sponsors of this work include Luma AI and the BAIR Commons.


Quickstart

The quickstart will help you train your first model, nerfacto, on a sample capture. For more complex changes (e.g., running with your own data or setting up a new NeRF graph), please refer to our references.

1. Installation: Set up the environment

Prerequisites

You must have an NVIDIA video card with CUDA installed on the system. This library has been tested with CUDA 11.7 and 11.8. You can find more information about installing CUDA here.

Create environment

Nerfstudio requires Python >= 3.8. We recommend using conda to manage dependencies. Make sure to install Conda before proceeding.

conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
pip install --upgrade pip

Dependencies

Install PyTorch with CUDA (this repo has been tested with CUDA 11.7 and CUDA 11.8) and tiny-cuda-nn. The cuda-toolkit package is required to build tiny-cuda-nn.

For CUDA 11.8:

pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
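
After installing, you can sanity-check that PyTorch sees your GPU before moving on (a quick optional check, not part of the official steps):

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"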

See Dependencies in the Installation documentation for more.

Installing nerfstudio

Easy option:

pip install nerfstudio

OR if you want the latest and greatest:

git clone https://github.com/nerfstudio-project/nerfstudio.git
cd nerfstudio
pip install --upgrade pip setuptools
pip install -e .

OR if you want to skip all installation steps and directly start using nerfstudio, use the docker image:

See Installation - Use docker image.

2. Training your first model!

The following will train a nerfacto model, our recommended model for real-world scenes.

# Download some test data:
ns-download-data nerfstudio --capture-name=poster
# Train model
ns-train nerfacto --data data/nerfstudio/poster

If everything works, you should see training progress reported in the terminal.

Navigating to the link printed at the end of the terminal output will load the web viewer. If you are running on a remote machine, you will need to port forward the websocket port (which defaults to 7007).
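
If you are connected over SSH, a standard port forward works; the user and host below are placeholders:

ssh -L 7007:localhost:7007 {user}@{remote-host}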


Resume from checkpoint / visualize existing run

It is possible to load a pretrained model by running

ns-train nerfacto --data data/nerfstudio/poster --load-dir {outputs/.../nerfstudio_models}

Visualize existing run

Given a pretrained model checkpoint, you can start the viewer by running

ns-viewer --load-config {outputs/.../config.yml}

3. Exporting Results

Once you have a NeRF model, you can either render out a video or export a point cloud.

Render Video

First, create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA"; this sets the first camera keyframe. Continue moving to new viewpoints and adding cameras to build up the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER", which will display a modal containing the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and run the command to generate the video.

Other video export options are available, learn more by running

ns-render --help
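
For reference, the command copied from the viewer's render modal typically looks like the following (all paths are placeholders, and exact flags can vary between versions, so treat the viewer's output and ns-render --help as authoritative):

ns-render camera-path --load-config {outputs/.../config.yml} --camera-path-filename {camera_path.json} --output-path renders/output.mp4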

Generate Point Cloud

While NeRF models are not designed to generate point clouds, it is still possible. Navigate to the "EXPORT" tab in the 3D viewer and select "POINT CLOUD". If the crop option is selected, everything in the yellow square will be exported into a point cloud. Modify the settings as desired, then run the command at the bottom of the panel in your command line.

Alternatively, you can use the CLI without the viewer. Learn about the export options by running

ns-export pointcloud --help
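
As an illustration, a typical CLI export might look like this (paths are placeholders; check --help for options such as the number of points and the crop bounds):

ns-export pointcloud --load-config {outputs/.../config.yml} --output-dir exports/pcd/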

4. Using Custom Data

Using an existing dataset is great, but you likely want to use your own data! We support various methods for using your own data. Before it can be used in nerfstudio, the camera locations and orientations must be determined and then converted into our format using ns-process-data. We rely on external tools for this; instructions and information can be found in the documentation. The table below summarizes the supported pipelines; a worked example follows it.

| Data | Capture Device | Requirements | ns-process-data Speed |
| --- | --- | --- | --- |
| 📷 Images | Any | COLMAP | 🐢 |
| 📹 Video | Any | COLMAP | 🐢 |
| 🌎 360 Data | Any | COLMAP | 🐢 |
| 📱 Polycam | iOS with LiDAR | Polycam App | 🐇 |
| 📱 KIRI Engine | iOS or Android | KIRI Engine App | 🐇 |
| 📱 Record3D | iOS with LiDAR | Record3D app | 🐇 |
| 📱 Spectacular AI | iOS, OAK, others | App / sai-cli | 🐇 |
| 🖥 Metashape | Any | Metashape | 🐇 |
| 🖥 RealityCapture | Any | RealityCapture | 🐇 |
| 🖥 ODM | Any | ODM | 🐇 |
| 👓 Aria | Aria glasses | Project Aria | 🐇 |
| 🛠 Custom | Any | Camera Poses | 🐇 |
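
As referenced above, here is a minimal worked example for the images route, assuming a folder of your own photos (paths are placeholders):

# Compute poses with COLMAP and convert to the nerfstudio format
ns-process-data images --data {path/to/images} --output-dir data/my-capture
# Train on the processed capture
ns-train nerfacto --data data/my-capture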

5. Advanced Options

Training models other than nerfacto

We provide models other than nerfacto; for example, if you want to train the original NeRF model, use the following command

ns-train vanilla-nerf --data DATA_PATH

For a full list of included models run ns-train --help.

Modify Configuration

Each model contains many parameters that can be changed, too many to list here. Use the --help flag to see the full list of configuration options.

ns-train nerfacto --help
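
As an illustration, individual options can be overridden directly on the command line. The flag names below are examples only; confirm them against the --help output for your version:

ns-train nerfacto --data data/nerfstudio/poster --max-num-iterations 30000 --viewer.websocket-port 7008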

Tensorboard / WandB / Viewer

We support four different methods to track training progress: the viewer, Tensorboard, Weights and Biases, and Comet. You can specify which visualizer(s) to use by appending --vis {viewer, tensorboard, wandb, comet, viewer+wandb, viewer+tensorboard, viewer+comet} to the training command. Simultaneously using the viewer alongside wandb or tensorboard may cause stuttering issues during evaluation steps. The viewer only works for methods that are fast (i.e., nerfacto, instant-ngp); for slower methods like vanilla NeRF, use the other loggers.
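
For example, to train while logging to both the viewer and Weights and Biases:

ns-train nerfacto --data data/nerfstudio/poster --vis viewer+wandb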

Learn More

And that's it for getting started with the basics of nerfstudio.

If you're interested in learning more on how to create your own pipelines, develop with the viewer, run benchmarks, and more, please check out some of the quicklinks below or visit our documentation directly.

| Section | Description |
| --- | --- |
| Documentation | Full API documentation and tutorials |
| Viewer | Home page for our web viewer |
| 🎒 Educational | |
| Model Descriptions | Description of all the models supported by nerfstudio and explanations of component parts. |
| Component Descriptions | Interactive notebooks that explain notable/commonly used modules in various models. |
| 🏃 Tutorials | |
| Getting Started | A more in-depth guide on how to get started with nerfstudio, from installation to contributing. |
| Using the Viewer | A quick demo video on how to navigate the viewer. |
| Using Record3D | Demo video on how to run nerfstudio without using COLMAP. |
| 💻 For Developers | |
| Creating pipelines | Learn how to easily build new neural rendering pipelines by using and/or implementing new modules. |
| Creating datasets | Have a new dataset? Learn how to run it with nerfstudio. |
| Contributing | Walk-through for how you can start contributing now. |
| 💖 Community | |
| Discord | Join our community to discuss more. We would love to hear from you! |
| Twitter | Follow us on Twitter @nerfstudioteam to see cool updates and announcements. |
| Feedback Form | We welcome any feedback! This is our chance to learn what you all are using Nerfstudio for. |

Supported Features

We provide the following support structures to make getting started with NeRFs easier.

If you are looking for a feature that is not currently supported, please do not hesitate to contact the Nerfstudio Team on Discord!

  • 🔎 Web-based visualizer that allows you to:
    • Visualize training in real-time + interact with the scene
    • Create and render out scenes with custom camera trajectories
    • View different output types
    • And more!
  • ✏️ Support for multiple logging interfaces (Tensorboard, Wandb), code profiling, and other built-in debugging tools
  • 📈 Easy-to-use benchmarking scripts on the Blender dataset
  • 📱 Full pipeline support (with COLMAP, Polycam, or Record3D) for going from a video on your phone to a full 3D render.

Built On

tyro
  • Easy-to-use config system
  • Developed by Brent Yi

nerfacc
  • Library for accelerating NeRF renders
  • Developed by Ruilong Li

Citation

You can find a paper writeup of the framework on arXiv.

If you use this library or find the documentation useful for your research, please consider citing:

@inproceedings{nerfstudio,
	title        = {Nerfstudio: A Modular Framework for Neural Radiance Field Development},
	author       = {
		Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi, Brent
		and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin,
		Jake and Salahi, Kamyar and Ahuja, Abhik and McAllister, David and Kanazawa,
		Angjoo
	},
	year         = 2023,
	booktitle    = {ACM SIGGRAPH 2023 Conference Proceedings},
	series       = {SIGGRAPH '23}
}

Contributors

akristoffersen, brentyi, chungmin99, cvachha, decrispell, dependabot[bot], ethanweber, evonneng, f-dy, hturki, isach, jake-austin, jb-ye, jkulhanek, kerrj, kevinddchen, liruilong940607, machenmusik, maturk, mcallisterdavid, mxbonn, nikmo33, origamiman72, pablovela5620, ponimatkin, sauravmaheshkar, tancik, terrancewang, the-cob, zunhammer


nerfstudio's Issues

Set min and max resolution from viewer

Rather than setting it in the config, the user should be able to set the minimum and maximum render resolution from the viewer UI.

  • add a dat.GUI slider with min/max values
  • add logic to read the value back to the server
  • update backend-side code to reflect the max value

Refactoring (mostly for dataloader)

  • cache the dataloader with pickle (for now); later, maybe add proper serialization
  • scale and shift the friends dataset so it's centered about the origin
  • add config to sample from only the mask or not
    • ImageDataset classes will return batches of images, masks, etc.
    • PixelSamplers will choose which pixels to use from the image datasets.
    

Auto doc compilation during run_actions

Pushing to git will now fail if there are warnings in the doc compilation. Can we add a doc compilation step in our run_action.py script so that we can catch those warnings before pushing? This likely involves calling make clean and make html in the docs folder.
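
For reference, the equivalent local check, assuming the docs use a standard Sphinx Makefile (SPHINXOPTS="-W" makes Sphinx treat warnings as errors):

cd docs
make clean
make html SPHINXOPTS="-W"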

Vanilla NeRF doesn't work in viewer

Perhaps an issue with the recent changes that allow you to switch outputs?

Traceback (most recent call last):
  File "scripts/run_train.py", line 221, in main
    launch(
  File "scripts/run_train.py", line 161, in launch
    main_func(local_rank=0, world_size=1, config=config)
  File "scripts/run_train.py", line 128, in _train
    trainer.train()
  File "/projects/pyrad/pyrad/engine/trainer.py", line 118, in train
    self.visualizer_state.update_scene(step, self.graph)
  File "/projects/pyrad/pyrad/viewer/server/viewer_utils.py", line 92, in update_scene
    self._render_image_in_viewer(graph)
  File "/projects/pyrad/pyrad/utils/profiler.py", line 34, in wrapper
    ret = func(*args, **kwargs)
  File "/projects/pyrad/pyrad/viewer/server/viewer_utils.py", line 226, in _render_image_in_viewer
    image_output = outputs[output_type].cpu().numpy() * 255
KeyError: 'rgb'

Camera modules documentation

Update the ipynb camera visualization with the following:

  • Move visualization commands into camera class and out of notebook
  • Add description and figure for coordinate system used in pyrad
  • Add more descriptions to visualization
  • Improve ray visualization
  • Maybe visualize frustums

Encoders documentation

  • Add TLDR table for the various encodings
  • Add more descriptions and links for each encoding method

Use three.js OrbitControls

  • change the up direction to be correct
  • update the damping factor and remove settings unrelated to the existing TrackballControls

Implement TensoRF Graph

All of the components needed should already be implemented. We just need to create a graph/config and benchmark against the paper.

Refactor loss into loss and metrics

We want to refactor how computing losses and metrics works. Currently, get_loss_dict returns a dictionary of losses. These losses are then combined in get_aggregated_loss_dict using coefficients defined in the config. This workflow is not the most transparent; i.e., if I add a new loss, I then need to know that I must update the configs accordingly.

Proposal:
  • Change get_loss_dict(outputs, batch) to get_loss(outputs, batch, metrics=None, coefficients=None) -> float and add get_metrics_dict(outputs, batch) -> dict.
  • Remove get_aggregated_loss_dict.
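
A minimal sketch of what the proposed interface might look like; the coefficient handling and metric names here are assumptions, not settled design:

import torch

def get_metrics_dict(outputs: dict, batch: dict) -> dict:
    # Raw, unweighted quantities, useful for logging on their own.
    return {"rgb_mse": torch.mean((outputs["rgb"] - batch["image"]) ** 2)}

def get_loss(outputs: dict, batch: dict, metrics=None, coefficients=None) -> torch.Tensor:
    # Combine metrics into a single scalar loss, weighting each term by its
    # coefficient (defaulting to 1.0), replacing get_aggregated_loss_dict.
    metrics = metrics if metrics is not None else get_metrics_dict(outputs, batch)
    coefficients = coefficients or {}
    return sum(coefficients.get(name, 1.0) * value for name, value in metrics.items())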

Relevant code (would need to update for all models):
https://github.com/plenoptix/pyrad/blob/56661b5d9aa8adfec9cad60bce53036cb0ceca43/pyrad/graphs/vanilla_nerf.py#L143-L148

https://github.com/plenoptix/pyrad/blob/b0594935af747ba5487aee4816e1cbcdfc408967/pyrad/graphs/base.py#L162-L174

Add development documentation

We should outline the steps to set up the development environment and how to run the code checks. This will be useful/necessary when others want to contribute.

Add "start" and "pause" training in viewer

Allow user to pause training for smoother rendering.

  • add start/stop button in viewer + add signaling logic across the server
  • figure out how to stop training but keep rendering (the pause logic should probably keep looping in _is_render_step)

Improving raw data loading

The goals of this PR are the following:

  1. Change raw data loaders to use classes instead of functions (e.g., functions like this one should become classes). This will help with cleanliness and with handling new data types.
  2. "dataset_format" should be an attribute of the new classes (see above).
  3. get_dataset_inputs() should not have if/else checks on dataset_format. Rather, it should know which classes are implemented (see above) and choose appropriately.

Bonus

  • Find a good way to cache dataset inputs to avoid having to read large COLMAP binaries, etc., which can take a while. Some code attempts to do this already, but it's too hacky to be used by users.

Vanilla NeRF CUDA error during ray sampling

When running python scripts/run_train.py on vanilla NeRF, the following error is raised. The gather indices are out of bounds.

  File "/projects/pyrad/pyrad/engine/trainer.py", line 205, in test_image
    outputs = self.graph.get_outputs_for_camera_ray_bundle(camera_ray_bundle)
  File "/home/tancik/miniconda3/envs/pyrad/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/projects/pyrad/pyrad/graphs/base.py", line 188, in get_outputs_for_camera_ray_bundle
    outputs = self.forward_after_ray_generator(ray_bundle)
  File "/projects/pyrad/pyrad/graphs/base.py", line 145, in forward_after_ray_generator
    outputs = self.get_outputs(intersected_ray_bundle)
  File "/projects/pyrad/pyrad/graphs/vanilla_nerf.py", line 121, in get_outputs
    ray_samples_pdf = self.sampler_pdf(ray_bundle, ray_samples_uniform, weights_coarse)
  File "/home/tancik/miniconda3/envs/pyrad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/projects/pyrad/pyrad/graphs/modules/ray_sampler.py", line 47, in forward
    ray_samples = self.generate_ray_samples(*args, **kwargs)
  File "/home/tancik/miniconda3/envs/pyrad/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/projects/pyrad/pyrad/graphs/modules/ray_sampler.py", line 324, in generate_ray_samples
    cdf_g1 = torch.gather(cdf, -1, above)
RuntimeError: CUDA error: device-side assert triggered

This is likely caused by #116
@liruilong940607

Rename "Graph" class to "Renderer"

This is a major refactoring, but we should change "Graph" to a name that is less confusing for first-time users attempting to understand our code. "Graph" has been too often confused with "computation graph".

Tricky thing about F.grid_sample

Issue with F.grid_sample: padding_mode="zeros" means zero voxel values are padded outside the grid. A query point that is slightly outside the grid is therefore interpolated between the voxel values on the border of the grid and the outside zero voxels, so it gives a non-zero value at regions slightly outside the grid.

Toy code to show:

import torch
import torch.nn.functional as F

# A 128^3 grid of ones; the query point is slightly outside the grid in y.
grid = torch.ones((1, 1, 128, 128, 128))
positions = torch.tensor([[0.5, 1.004, 0.5]])
values = F.grid_sample(
    grid,
    positions.view(1, -1, 1, 1, 3),
    align_corners=True,
    padding_mode="zeros",
)
# Non-zero even though the point lies outside the grid.
print(values.flatten())  # >> 0.7460

Relevant code in our code base:

https://github.com/plenoptix/pyrad/blob/91ef54963c43beb34f13301f8496faf2f0de8a2e/pyrad/fields/occupancy_fields/occupancy_grid.py#L131
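
One possible mitigation, an assumption on our part rather than something proposed in the issue, is to explicitly zero out queries that fall outside the grid:

import torch
import torch.nn.functional as F

def sample_grid(grid: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Sample a (1, 1, D, H, W) grid at (N, 3) positions in [-1, 1] coordinates."""
    values = F.grid_sample(
        grid,
        positions.view(1, -1, 1, 1, 3),
        align_corners=True,
        padding_mode="zeros",
    ).view(-1)
    # Mask out queries outside the grid so border interpolation cannot leak.
    inside = ((positions >= -1.0) & (positions <= 1.0)).all(dim=-1)
    return values * inside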

Spatial distortion documentation

  • Create an ipynb that visualizes the spatial distortion method
  • Include a description of when different spatial distortion methods should be used

Need better way to handle adding colormap on output images for Visualizer

I added some functions to handle the colormap logic:
https://github.com/plenoptix/pyrad/blob/069cf2c40fb3ab68c483501f18713992b3c00d8a/pyrad/graphs/instant_ngp.py#L148-L162

And set it as a base class method to get everything to work:
https://github.com/plenoptix/pyrad/blob/069cf2c40fb3ab68c483501f18713992b3c00d8a/pyrad/graphs/base.py#L139-L145

But I don't think this is the best way to handle it, so we need to figure out a more robust way of handling this across implementations.

This is where the function is referenced in visualizer code:
https://github.com/plenoptix/pyrad/blob/069cf2c40fb3ab68c483501f18713992b3c00d8a/pyrad/viewer/server/viewer_utils.py#L79

RFFEncoding initialization

The b_matrix initialized at this line won't be saved together with the model if it is not a buffer or parameter. You might want to consider adding self.register_buffer for it.
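
A minimal sketch of the suggested fix; the RFFEncoding constructor arguments and shapes here are assumptions for illustration, not the actual pyrad implementation:

import torch
from torch import nn

class RFFEncoding(nn.Module):
    def __init__(self, in_dim: int = 3, num_frequencies: int = 64, scale: float = 10.0):
        super().__init__()
        b_matrix = torch.randn(in_dim, num_frequencies) * scale
        # register_buffer puts b_matrix in the state_dict, so it is saved and
        # restored with the model without being treated as a trainable parameter.
        self.register_buffer("b_matrix", b_matrix)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = 2 * torch.pi * x @ self.b_matrix
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)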
