y-u-a-n-l-i / climate_nerf

This is the official repo for the PyTorch implementation of the paper "ClimateNeRF: Extreme Weather Synthesis in Neural Radiance Field", ICCV 2023.

License: MIT License
DATA_ROOT=/datasets/TanksAndTempleBG/Playground
SEM_CONF=/mmsegmentation/pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py
SEM_CKPT=/mmsegmentation/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth
CKPT=ckpts/tnt/playground/epoch=79_slim.ckpt
Congratulations on your great work!
I am learning from this project and there is something I don't understand: are the rays sampled in NDC space, or directly in world-coordinate space? Judging from the get_rays() function in ray_utils.py, the rays do not seem to be transformed into NDC space. If everything stays in world coordinates, what is the unit (or meaning) of rays_t? (It ranges approximately from 0.0 to 13.0 on the KITTI-360 dataset.)
Thanks for your reply.
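For context on the question above, here is a minimal sketch of conventional world-space (non-NDC) ray generation as found in Instant-NGP-style codebases. This is an illustration only, not the repo's actual `ray_utils.get_rays`; the pinhole intrinsics and 3x4 camera-to-world convention are assumptions.

```python
import numpy as np

def get_rays(H, W, focal, c2w):
    """Sketch of world-space ray generation (no NDC transform).

    NOT the repo's actual implementation; assumes a pinhole camera
    and a 3x4 camera-to-world matrix c2w.
    """
    i, j = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
    # Camera-space direction through each pixel center.
    dirs = np.stack([(i - W / 2) / focal,
                     (j - H / 2) / focal,
                     np.ones_like(i, dtype=float)], axis=-1)
    rays_d = dirs @ c2w[:3, :3].T                       # rotate to world space
    rays_d /= np.linalg.norm(rays_d, axis=-1, keepdims=True)
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)  # camera origin, repeated
    return rays_o, rays_d
```

Under this convention, sample distances along a ray are expressed in the same (scene-normalized) world units as the camera poses, not in NDC depth.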
Could you please provide the versions of the three mmlab modules (mmengine, mmseg, mmcv)? I ran into problems when initializing the segmentation model.
Thanks for your interesting work. I tried to train on your sample dataset (TanksAndTempleBG), but I got `ModuleNotFoundError: No module named 'vren'`. I could not find a "vren" module to install.
(climatenerf) myuser@dell-3660:~/PycharmProjects/Climate_NeRF/scripts/tanks$ bash train.sh
/datasets/TanksAndTempleBG/Train
/mmsegmentation/ckpts/segformer_mit-b5_8xb1-160k_cityscapes-1024x1024.py
/mmsegmentation/ckpts/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
ckpts/tnt/train/epoch=79_slim.ckpt
Traceback (most recent call last):
File "/home/myuser/PycharmProjects/Climate_NeRF/train.py", line 20, in <module>
from models.networks import NGP
File "/home/myuser/PycharmProjects/Climate_NeRF/models/networks.py", line 6, in <module>
import vren
ModuleNotFoundError: No module named 'vren'
Traceback (most recent call last):
File "/home/myuser/PycharmProjects/Climate_NeRF/render.py", line 10, in <module>
from models.networks import NGP
File "/home/myuser/PycharmProjects/Climate_NeRF/models/networks.py", line 6, in <module>
import vren
ModuleNotFoundError: No module named 'vren'
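`vren` is not a PyPI package, so `pip install vren` will not work. In ngp_pl-derived repos it is a custom CUDA extension compiled from the repo's own sources (often with something like `pip install models/csrc/` from the repo root; that path is an assumption, so check this repo's install instructions). A quick way to verify whether the extension is built:

```python
import importlib.util

def has_vren():
    """True if the compiled 'vren' CUDA extension is importable."""
    return importlib.util.find_spec("vren") is not None

if not has_vren():
    # 'vren' must be compiled from the repo's CUDA extension sources
    # (hypothetical path: `pip install models/csrc/` from the repo root).
    print("vren missing: build the repo's CUDA extension before training")
```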
Thank you very much for sharing this work.
May I ask whether this code can run on an NVIDIA RTX 3080 Ti with 12 GB of memory?
Can I run this project on a Windows 10 PC? Have you tested it on Windows? Thanks for your answer.
I could successfully run your code on your dataset; however, when I try to run it with my own COLMAP dataset I get the error below. It also happens with the Garden data.
GridEncoding for spital: Nmin=16 b=1.58740 F=2 T=2^19 L=16
GridEncoding for RGB: Nmin=16 b=1.25057 F=2 T=2^19 L=32
rgb_input_dim: 80
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Load segmentation model from /home/user/PycharmProjects/Climate_NeRF/mseg_ckps/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
Loads checkpoint by local backend from path: /home/user/PycharmProjects/Climate_NeRF/mseg_ckps/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
scene up tensor([ 0.1646, -0.9864, 0.0019])
scene scale 4.552233925425851
Loading 100 train images ...
100%|████████████████████████████████████████████████████| 100/100 [00:47<00:00, 2.11it/s]
Load segmentation model from /home/user/PycharmProjects/Climate_NeRF/mseg_ckps/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
Loads checkpoint by local backend from path: /home/user/PycharmProjects/Climate_NeRF/mseg_ckps/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
scene up tensor([ 0.1646, -0.9864, 0.0019])
scene scale 4.552233925425851
Loading 15 test images ...
100%|██████████████████████████████████████████████████| 15/15 [00:07<00:00, 2.14it/s]
Loading 115 camera path ...
100%|██████████████████████████████████████████████████| 115/115 [00:00<00:00, 4512.58it/s]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 0%| | 0/1000 [00:00<?, ?it/s]
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
../aten/src/ATen/native/cuda/Indexing.cu:1141: indexSelectLargeIndex: block: [34,0,0],
ref.sh: line 11: 80610 Segmentation fault (core dumped) python $Project_dir/train1.py --config $Project_dir/configs/ref.txt --exp_name building --root_dir $DATA_ROOT --sem_conf_path $SEM_CONF --sem_ckpt_path $SEM_CKPT
/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument.
(Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
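The `indexSelectLargeIndex ... srcIndex < srcSelectDimSize` assertion is a device-side bounds failure: some index tensor contains values outside the range of the tensor it indexes. With custom COLMAP data, a common culprit is semantic label ids exceeding the number of classes the model expects. A hedged diagnostic sketch (the 19-class Cityscapes label space is an assumption based on the Cityscapes checkpoints above):

```python
import numpy as np

NUM_CLASSES = 19  # assumption: Cityscapes label space used by the semantic head

def out_of_range_labels(labels, num_classes=NUM_CLASSES):
    """Return the unique label ids that would trip the CUDA index assertion."""
    labels = np.asarray(labels)
    bad = labels[(labels < 0) | (labels >= num_classes)]
    return np.unique(bad)
```

Running this over the generated semantic maps (e.g. `out_of_range_labels(np.load("semantic.npy"))`) should reveal any offending ids, such as an ignore label like 255.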
Congratulations on the excellent work! I'm very interested in the large-scale scene examples shown in the project page's Rising Sea-level Simulation section. Could you release the dataset and training code for this part? That would be very helpful for comparison. Thanks.
First, thanks for your excellent work! While building this project I ran into a question: how should I "Download checkpoint of MTMT from official repo"? Should I download the [Trained Model](https://github.com/eraserNut/MTMT#trained-model) from the MTMT link? And where should I put the ckpt file? Thanks.
Thanks for your work!
During training, ipdb finds NaN values:
sigmas contains nan
grads contains nan
normals_raw contains nan
normals_pred contains nan
nan in results[sigma]
nan in results[opacity]
nan in results[depth]
nan in results[rgb]
nan in results[normal_pred]
nan in results[semantic]
nan in results[ws]
nan in results[Ro]
nan in results[Rp]
I would like to know how to solve this problem. Thank you very much!
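When debugging NaNs like the ones listed above, it usually helps to locate the first quantity that goes non-finite (often `sigmas` or the normal gradients) and then try the usual mitigations: a lower learning rate, gradient clipping, or `torch.autograd.set_detect_anomaly(True)` to find the producing op. A small helper for the first step, assuming the `results` dict of array-likes printed above:

```python
import numpy as np

def nan_keys(results):
    """Debugging aid: list which entries of a results dict contain NaNs."""
    return [k for k, v in results.items()
            if np.isnan(np.asarray(v, dtype=float)).any()]
```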
Hi, thanks for your great work. I've trained my model on the TNT dataset, focusing on the playground data. After training, I've tested it with smog and snow. Now, I'm eager to apply the same procedure to my custom dataset. Can you help me understand the training process and how to generate the checkpoint?
How do I compute "plane.npy" for my own data?
I trained my COLMAP dataset with the config below, but I got the following error. I used "plane.npy" from the "tnt" dataset for this run. How should I compute plane.npy from my own dataset?
Dataset: 115 images (1080×1920)
root_dir = ./ref
dataset_name = colmap
exp_name = building
batch_size = 2048
scale = 8.0
num_epochs = 80
### render a camera path (through interpolation between poses)
render_traj = True
### render camera poses from training dataset
render_train = False
render_rgb = True
render_depth = False
### render derived normal or not
render_normal = True
### render semantic labels or not, set to False if no g.t. semantic labels
render_semantic = True
sem_conf_path = ../mseg_ckps/segformer_mit-b5_8xb1-160k_cityscapes-1024x1024.py
sem_ckpt_path = ../mseg_ckps/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
styl_img_path = ./styl_img/winter.jpg
### appearance embeddings
embed_a = True
embed_a_len = 8
### mask embeddings
embed_msk = False
random_bg = True
use_skybox = False
## smog
depth_bound = 0.9
sigma = 0.5
rgb_smog = [0.925, 0.906, 0.758]
## flood
depth_path = /home/user/PycharmProjects/Climate_NeRF/scripts/tanks/results/tnt/playground/depth_raw.npy
water_height = 0.0
rgb_water = [0.488, 0.406, 0.32]
refraction_idx = 1.35
gf_r = 5
gf_eps = 0.1
gl_theta = 0.008
gl_sharpness = 500
wave_len = 0.2
wave_ampl = 500000
The error:
File "/home/user/PycharmProjects/Climate_NeRF/models/rendering.py", line 235, in __render_rays_test
mask = simulator.simulate_after_marching(**sim_kwargs)
File "/home/user/PycharmProjects/Climate_NeRF/simulate.py", line 260, in simulate_after_marching
is_water = depth_to_water < depth
RuntimeError: The size of tensor a (600000) must match the size of tensor b (552384) at non-singleton dimension 0
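Regarding computing a plane for a custom scene: a ground/water plane is commonly estimated by least-squares fitting to 3D points known to lie on the ground (e.g. COLMAP sparse points whose projections the segmentation labels as road/terrain). A sketch under those assumptions; the exact format this repo expects in plane.npy (normal plus offset vs. four coefficients) would need to be verified against the provided tnt files:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point cloud.

    Returns (unit_normal, centroid); the plane is the set of x with
    dot(unit_normal, x - centroid) == 0.
    """
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    # The smallest right-singular vector of the centered points is the normal.
    _, _, vh = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vh[-1]
    return normal / np.linalg.norm(normal), centroid
```

Reusing plane.npy from a different scene, as in the report above, puts the plane in the wrong coordinate frame for the new scene, which plausibly explains shape mismatches like the one in `simulate_after_marching`.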
Thank you for your work. When I run the command below, I encounter the following error. Can you tell me what causes it?
python train.py --config /home/alex/Climate_NeRF/configs/Garden.txt --exp_name playground --root_dir $DATA_ROOT --sem_conf_path $SEM_CONF --sem_ckpt_path $SEM_CKPT
GridEncoding for spital: Nmin=16 b=1.58740 F=2 T=2^19 L=16
GridEncoding for RGB: Nmin=16 b=1.25057 F=2 T=2^19 L=32
rgb_input_dim: 80
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Load segmentation model from /home/alex/Climate_NeRF/CHECK/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
Loads checkpoint by local backend from path: /home/alex/Climate_NeRF/CHECK/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
scene up tensor([-0.0017, -0.8823, -0.4706])
scene scale 4.495019486110887
Loading 161 train images ...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 161/161 [00:49<00:00, 3.23it/s]
Load segmentation model from /home/alex/Climate_NeRF/CHECK/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
Loads checkpoint by local backend from path: /home/alex/Climate_NeRF/CHECK/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
scene up tensor([-0.0017, -0.8823, -0.4706])
scene scale 4.495019486110887
Loading 24 test images ...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 24/24 [00:07<00:00, 3.26it/s]
Loading 185 camera path ...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 185/185 [00:00<00:00, 501.26it/s]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Float but found Half
How do I get a custom plane.npy?
Thanks for your help. I could run the training step, but partway through the run I got this error.
Load segmentation model from /home/user/PycharmProjects/Climate_NeRF/mmsegmentation/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
Loads checkpoint by local backend from path: /home/user/PycharmProjects/Climate_NeRF/mmsegmentation/segformer_mit-b5_8x1_1024x1024_160k_cityscapes_20211206_072934-87a052ec.pth
up vector: tensor([ 0.0260, -0.9995, -0.0168])
scene scale 1.2108299343006486
render camera path
Loading 32 test images...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:12<00:00, 2.64it/s]
Loading 200 camera path ...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:00<00:00, 900.58it/s]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/user/PycharmProjects/Climate_NeRF/train.py", line 425, in <module>
trainer.fit(system, ckpt_path=hparams.ckpt_path)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
self._call_and_handle_interrupt(
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
results = self._run_stage()
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
return self._run_train()
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
self.fit_loop.run()
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
batch_output = self.batch_loop.run(kwargs)
File "/home/ab/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
outputs = self.optimizer_loop.run(optimizers, kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
self.trainer._call_lightning_module_hook(
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 153, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/optim/adam.py", line 100, in step
loss = closure()
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 138, in _wrap_closure
closure_result = closure()
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 146, in __call__
self._result = self.closure(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 132, in closure
step_output = self._step_fn()
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 407, in _training_step
training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 358, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/user/PycharmProjects/Climate_NeRF/train.py", line 250, in training_step
self.model.update_density_grid(0.01*MAX_SAMPLES/3**0.5,
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/user/PycharmProjects/Climate_NeRF/models/networks.py", line 335, in update_density_grid
density_grid_tmp[c, indices] = self.density(xyzs_w)
File "/home/user/PycharmProjects/Climate_NeRF/models/networks.py", line 160, in density
h = self.xyz_net(h)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/anaconda3/envs/climatenerf/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Float but found Half
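The "expected scalar type Float but found Half" error in both tracebacks above means a float32 layer received a float16 tensor (or vice versa) outside an autocast region, which is common when mixed-precision training only partially covers the model, e.g. in the `update_density_grid` path. A minimal reproduction and the usual fix, casting the input to the layer's parameter dtype (where to apply this inside Climate_NeRF is an assumption to verify, not a confirmed patch):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)                 # float32 parameters
x = torch.ones(2, 4, dtype=torch.half)  # half-precision input

# layer(x) would raise: expected scalar type Float but found Half.
# Common fix: cast the input to match the layer's parameter dtype.
y = layer(x.to(layer.weight.dtype))
```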