
BioNeRF

This is the official implementation of BioNeRF.

BioNeRF (Biologically Plausible Neural Radiance Fields) extends NeRF by implementing a cognitive-inspired mechanism that fuses inputs from multiple sources into a memory-like structure, thus improving the storing capacity and extracting more intrinsic and correlated information. BioNeRF also mimics a behavior observed in pyramidal cells concerning contextual information, in which the memory is provided as the context and combined with the inputs of two subsequent blocks of dense layers, one responsible for producing the volumetric densities and the other the colors used to render the novel view. The method outperformed recent approaches and achieves state-of-the-art results for synthesizing novel views of complex scenes. Here are some videos generated by this repository (pre-trained models are provided below):

Code release for:

BioNeRF: Biologically Plausible Neural Radiance Fields

Leandro A. Passos, Douglas Rodrigues, Danilo Jodas, Kelton A. P. Costa, Ahsan Adeel, João Paulo Papa

πŸš€ Project page

πŸ“° Paper

🔧 NerfStudio: integration with NerfStudio for easier visualization and development

This code is based on a PyTorch implementation of NeRF.

Installation

git clone https://github.com/Leandropassosjr/BioNeRF.git
cd BioNeRF
pip install -r requirements.txt
Dependencies

  • numpy
  • torch
  • torchvision
  • imageio
  • matplotlib
  • configargparse
  • tensorboard
  • tqdm
  • opencv-python
  • torchmetrics

The LLFF data loader requires ImageMagick.

How To Run?

Download synthetic Blender and LLFF scenes from here. Place the downloaded dataset according to the following directory structure:

├── configs
│   ├── ...
│
├── data
│   ├── nerf_llff_data
│   │   ├── fern
│   │   ├── flower  # downloaded llff dataset
│   │   ├── horns   # downloaded llff dataset
│   │   └── ...
│   ├── nerf_synthetic
│   │   ├── lego
│   │   ├── ship    # downloaded synthetic dataset
│   │   └── ...

To train a 400x400 ship scene on BioNeRF:

python run_bionerf.py --config configs/ship.txt

After training you can find the following video at logs/blender_paper_ship/blender_paper_ship_spiral_400000_rgb.mp4.


To train BioNeRF on different datasets:

python run_bionerf.py --config configs/{DATASET}.txt

replace {DATASET} with trex | horns | flower | fortress | lego | etc.


To test BioNeRF trained on different datasets:

python run_bionerf.py --config configs/{DATASET}.txt --render_only --render_test --generate_samples

replace {DATASET} with trex | horns | flower | fortress | lego | etc.

Pre-trained Models

You can download the pre-trained models here. Place the downloaded directory in ./logs in order to test it later. See the following directory structure for an example:

├── logs
│   ├── fern_test
│   ├── flower_test  # downloaded logs
│   ├── trex_test    # downloaded logs

Method

Here is an overview of the BioNeRF pipeline; each component is walked through below.

Positional Feature Extraction

The first step consists of feeding two neural models simultaneously, namely $M_{\Delta}$ and $M_c$, with the camera positional information. The output of these models encodes the positional information from the input image. Although the input is the same, the neural models do not share weights and follow a different flow in the next steps.
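As a rough sketch of this step (layer widths and encoding frequencies are illustrative assumptions, not the paper's values), the two models can be set up as independent PyTorch MLPs that receive the same positionally encoded input but keep separate weights:

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=4):
    """NeRF-style sinusoidal encoding of the input coordinates."""
    encoded = [x]
    for i in range(num_freqs):
        for fn in (torch.sin, torch.cos):
            encoded.append(fn((2.0 ** i) * x))
    return torch.cat(encoded, dim=-1)

in_dim = 3 + 3 * 2 * 4  # xyz plus sin/cos at 4 frequencies

# Two independent models: same input, no shared weights.
M_delta = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 128))
M_c = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 128))

x = torch.rand(1024, 3)           # a batch of 3D sample positions
gamma_x = positional_encoding(x)  # identical encoded input for both models
h_delta = M_delta(gamma_x)        # density-branch embedding
h_c = M_c(gamma_x)                # color-branch embedding
```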

Cognitive Filtering

This step performs a series of operations, called filters, on the embeddings coming from the previous step. The step derives four filters: density, color, memory, and modulation.
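One minimal way to picture the four filters (the gate form and layer shapes below are assumptions for illustration, not the paper's exact architecture) is as learned sigmoid gates over the concatenated embeddings from the previous step:

```python
import torch
import torch.nn as nn

dim = 128
h_delta = torch.rand(1024, dim)  # density-branch embedding (from Step 1)
h_c = torch.rand(1024, dim)      # color-branch embedding (from Step 1)
h = torch.cat([h_delta, h_c], dim=-1)

# One learned gate per filter: a linear layer followed by a sigmoid,
# so every filter output lies in (0, 1).
filters = nn.ModuleDict({
    name: nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
    for name in ("density", "color", "memory", "modulation")
})

f = {name: gate(h) for name, gate in filters.items()}
```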

Memory Updating

Updating the memory requires a mechanism capable of obliterating trivial information, which is implemented through the memory filter (Step 3.1 in the figure). First, one computes a signal modulation $\mu$; new experiences are then introduced into the memory $\Psi$ through the modulating variable $\mu$ using a $\textit{tanh}$ function (Step 3.2 in the figure).
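A minimal sketch of this update, assuming the gating form the text describes (the element-wise forget-then-write rule below is an assumption, not the paper's exact equation): the memory filter decides what to keep of the old memory $\Psi$, and the tanh-modulated signal $\mu$ writes new information into it.

```python
import torch

n, dim = 1024, 128
psi = torch.zeros(n, dim)   # memory state Psi, initially empty
f_mem = torch.rand(n, dim)  # memory filter output in (0, 1)
f_mod = torch.rand(n, dim)  # modulation filter output in (0, 1)

mu = torch.tanh(f_mod)      # signal modulation mu (Step 3.1)
psi = f_mem * psi + mu      # keep filtered old memory, write mu (Step 3.2)
```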

Contextual Inference

This step is responsible for adding contextual information to BioNeRF. Two new embeddings, ${h}^{\prime}_\Delta$ and ${h}^{\prime}_c$, are generated from the density and color filters, respectively (Step 4 in the figure), and feed two neural models, $M^\prime_\Delta$ and $M^\prime_c$. Subsequently, $M^\prime_\Delta$ outputs the volume density, while $M^{\prime}_c$ predicts the color; both are used to compute the final predicted pixel information and the loss function.
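A hedged sketch of this last step: the memory $\Psi$ acts as context and is combined with the density- and color-filtered embeddings to form ${h}^{\prime}_\Delta$ and ${h}^{\prime}_c$, which feed two further MLPs. Combining by concatenation and the layer widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

n, dim = 1024, 128
psi = torch.rand(n, dim)        # memory Psi, provided as context
f_density = torch.rand(n, dim)  # density filter output
f_color = torch.rand(n, dim)    # color filter output

# Context-augmented embeddings h'_delta and h'_c.
h_prime_delta = torch.cat([psi, f_density], dim=-1)
h_prime_c = torch.cat([psi, f_color], dim=-1)

# M'_delta predicts one volume density per sample; M'_c predicts RGB in [0, 1].
M_prime_delta = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
M_prime_c = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 3), nn.Sigmoid())

sigma = M_prime_delta(h_prime_delta)  # volume density per sample
rgb = M_prime_c(h_prime_c)            # color per sample
```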

Benchmarks

Blender (synthetic)

|       | drums | materials | ficus | ship  | mic   | chair | lego  | hotdog | AVG   |
|-------|-------|-----------|-------|-------|-------|-------|-------|--------|-------|
| PSNR  | 25.66 | 29.74     | 29.56 | 29.57 | 33.38 | 34.63 | 31.82 | 37.23  | 31.45 |
| SSIM  | 0.927 | 0.957     | 0.965 | 0.874 | 0.978 | 0.977 | 0.963 | 0.980  | 0.953 |
| LPIPS | 0.047 | 0.018     | 0.017 | 0.068 | 0.018 | 0.011 | 0.016 | 0.010  | 0.026 |

Ground-truth (top) and synthesized (bottom) images generated by BioNeRF for four scenes from the Realistic Blender dataset.

LLFF (real)

|       | Fern  | Flower | Fortress | Horns | Leaves | Orchids | Room  | T-Rex | AVG   |
|-------|-------|--------|----------|-------|--------|---------|-------|-------|-------|
| PSNR  | 25.17 | 27.89  | 32.34    | 27.99 | 22.23  | 20.80   | 30.75 | 27.56 | 27.01 |
| SSIM  | 0.837 | 0.873  | 0.914    | 0.882 | 0.796  | 0.714   | 0.911 | 0.911 | 0.861 |
| LPIPS | 0.093 | 0.055  | 0.025    | 0.070 | 0.103  | 0.122   | 0.029 | 0.044 | 0.068 |

Ground-truth (top) and synthesized (bottom) images generated by BioNeRF for four scenes from the LLFF dataset.

Citation

@article{passos2024bionerf,
  title={BioNeRF: Biologically Plausible Neural Radiance Fields for View Synthesis},
  author={Passos, Leandro A and Rodrigues, Douglas and Jodas, Danilo and Costa, Kelton A P and Adeel, Ahsan and Papa, Jo{\~a}o Paulo},
  journal={arXiv preprint arXiv:2402.07310},
  year={2024}
}
