
SpS-NeRF

Lulin Zhang, Ewelina Rupnik

This work was accepted to the ISPRS Annals 2023 and won the Best Workshop Paper Award.

Setup

Compulsory

The following steps are compulsory for running this repository:

  1. Clone the git repository:
git clone https://github.com/LulinZhang/SpS-NeRF.git
  2. Create the virtualenv spsnerf:
conda init
bash -i setup_spsnerf_env.sh

Optional

If you want to prepare the dataset yourself, you'll need to create the virtualenv ba:

conda init
bash -i setup_ba_env.sh

1. Prepare dataset

You can skip this step and directly download the DFC2019 dataset AOI 214.

You need to prepare a directory ProjDir to place the dataset.

1.1. Refine RPC with bundle adjustment

Please use command conda activate ba to get into the ba environment for this step.

BaseDir=/home/LZhang/Documents/CNESPostDoc/SpSNeRFProj/input_prepare_data/
aoi_id=JAX_214
DataDir="$BaseDir"DFC2019/
OutputDir="$BaseDir"
python3 create_satellite_dataset.py --aoi_id "$aoi_id" --dfc_dir "$DataDir" --output_dir "$OutputDir" 

Please replace the values in the first three lines of the above script with your own values.

Your DataDir should contain the RGB images, the ground-truth DSM and other text files providing the necessary metadata. Please refer to our example for the file organization.
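As a rough sketch of that organization (the directory and file names here are illustrative guesses pieced together from the scripts below; the linked example remains the authoritative layout), DataDir could look like this:

```shell
# Illustrative DataDir skeleton; names are guesses, see the linked example
# for the authoritative layout.
DataDir=./DFC2019/
mkdir -p "$DataDir"RGB/JAX_214 "$DataDir"Truth
# train.txt lists the training image file names, one per line
printf 'JAX_214_009_RGB.tif\nJAX_214_010_RGB.tif\n' > "$DataDir"train.txt
# coordinate-transformation file used later by MicMac
touch "$DataDir"WGS84toUTM.xml
# the RGB tif images go under RGB/JAX_214/, the ground-truth DSM under Truth/
ls -R "$DataDir"
```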

1.2. Generate dense depth

You can skip this step and directly download the Dense depth of DFC2019 dataset AOI 214 and put it in your TxtDenseDir.

Option 1: Use software MicMac

In our experiments, this step is done with the free, open-source photogrammetry software MicMac. You need to install MicMac following this website.

MicMac cannot read the original JAX tif format of the training images, so you need to convert them before launching MicMac, for example with QGIS ("Raster -> Conversion -> Convert"), or you can download the images we converted.
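If you prefer the command line to QGIS, a GDAL-based conversion along these lines may also work (this is our suggestion, not the workflow documented above; the file pattern and output names are placeholders):

```shell
# Hypothetical GDAL alternative to the QGIS conversion; requires gdal_translate.
if command -v gdal_translate >/dev/null 2>&1; then
    for f in JAX_214_*_RGB.tif; do
        [ -e "$f" ] || continue                       # skip when the glob matches nothing
        gdal_translate -of GTiff "$f" "converted_$f"  # rewrite as a plain GeoTIFF
    done
else
    echo "gdal_translate not found; install GDAL first"
fi
```

Whichever tool you use, check that MicMac accepts the converted images before running the full pipeline.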

You'll need the WGS84toUTM.xml for coordinate transformation.

BaseDir=/home/LZhang/Documents/CNESPostDoc/SpSNeRFProj/input_prepare_data/
aoi_id=JAX_214
DataDir="$BaseDir"DFC2019/
RootDir="$BaseDir"JAX_214_2_imgs/
TxtDenseDir="$RootDir"dataset"$aoi_id"/root_dir/crops_rpcs_ba_v2/"$aoi_id"/DenseDepth_ZM4/
MicMacDenseDir="$RootDir"DenseDepth/
CodeDir=/home/LZhang/Documents/CNESPostDoc/SpSNeRFProj/code/SpS-NeRF/

mkdir "$MicMacDenseDir"
mkdir "$TxtDenseDir"

#copy the images and refined rpc parameters
for line in `cat "$DataDir"train.txt`
do
	img_name=${line%.*}
	cp "$DataDir"RGB/"$aoi_id"/"$img_name".tif "$MicMacDenseDir""$img_name".tif
	cp "$RootDir"ba_files/rpcs_adj/"$img_name".rpc_adj "$MicMacDenseDir""$img_name".txt
done
cp "$DataDir"WGS84toUTM.xml "$MicMacDenseDir"WGS84toUTM.xml
cd "$MicMacDenseDir"

#convert rpc to the MicMac format
mm3d Convert2GenBundle "(.*).tif" "\$1.txt" RPC-d0-adj ChSys=WGS84toUTM.xml Degre=0

for line in `cat "$DataDir"train.txt`
do
	img_name=${line%.*}
	#generate dense depth in tif format
	mm3d Malt GeomImage ".*tif" RPC-d0-adj Master="$img_name".tif SzW=1 Regul=0.05 NbVI=2 ZoomF=4 ResolTerrain=1 EZA=1 DirMEC=MM-"$img_name"/
	#convert dense depth tif to txt format
	mm3d TestLib GeoreferencedDepthMap MM-"$img_name" "$img_name".tif Ori-RPC-d0-adj OutDir="$TxtDenseDir" Mask=1 Scale=4
done

cd "$CodeDir"
#Transform 3D points from UTM to geocentric coordinates.
python3 utm_to_geocentric.py --file_dir "$TxtDenseDir"

Please replace the values in the first three lines and the sixth line of the above script with your own values.

Option 2: Use other software

You may also use other software if you prefer; just make sure your final result is organized this way:

  • TxtDenseDir
    • ImageName_2DPts.txt: 2D coordinate in image frame for the pixels with valid depth value. The first line is width, and the second line is height.
    • ImageName_3DPts.txt: 3D coordinate in UTM for the pixels with valid depth value.
    • ImageName_3DPts_ecef.txt: 3D coordinate in geocentric coordinates for the pixels with valid depth value.
    • ImageName_Correl.txt: correlation score for the pixels with valid depth value.

Each image ImageName corresponds to the four txt files displayed above.
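Before training, it can be worth sanity-checking this layout. The loop below is our own helper, not part of the repository; it first builds a tiny one-image fixture so the snippet is self-contained, then verifies that every image listed in train.txt has all four txt files:

```shell
# Build a tiny fixture: one image name in train.txt plus its four txt files.
TxtDenseDir=./DenseDepth_ZM4/
mkdir -p "$TxtDenseDir"
printf 'JAX_214_009_RGB.tif\n' > train.txt
for suffix in _2DPts.txt _3DPts.txt _3DPts_ecef.txt _Correl.txt; do
    touch "$TxtDenseDir"JAX_214_009_RGB"$suffix"
done

# Check: every image in train.txt must have its four depth files.
missing=0
for line in $(cat train.txt); do
    img_name=${line%.*}
    for suffix in _2DPts.txt _3DPts.txt _3DPts_ecef.txt _Correl.txt; do
        f="$TxtDenseDir$img_name$suffix"
        [ -f "$f" ] || { echo "missing: $f"; missing=1; }
    done
done
[ "$missing" -eq 0 ] && echo "all depth files present"
```

With a complete TxtDenseDir the check prints "all depth files present"; otherwise it lists each missing file. Point TxtDenseDir and train.txt at your real paths and drop the fixture part when using it for real.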

2. Train SpS-NeRF

Please use command conda activate spsnerf to get into the spsnerf environment for this step.

aoi_id=JAX_214
inputdds=DenseDepth_ZM4
n_importance=0
ds_lambda=1
stdscale=1
ProjDir=/gpfs/users/lzhang/SpS-NeRF_test/
exp_name=SpS_output"$aoi_id"-"$inputdds"-FnMd"$n_importance"-ds"$ds_lambda"-"$stdscale"
Output="$ProjDir"/"$exp_name"
rm -r "$Output"
mkdir "$Output"

python3 main.py --aoi_id "$aoi_id" --model sps-nerf --exp_name "$exp_name" --root_dir "$ProjDir"/dataset"$aoi_id"/root_dir/crops_rpcs_ba_v2/"$aoi_id"/ --img_dir "$ProjDir"/dataset"$aoi_id"/"$aoi_id"/RGB-crops/"$aoi_id"/ --cache_dir "$Output"/cache_dir/crops_rpcs_ba_v2/"$aoi_id" --gt_dir "$ProjDir"/dataset"$aoi_id"/"$aoi_id"/Truth --logs_dir "$Output"/logs --ckpts_dir "$Output"/ckpts --inputdds "$inputdds" --gpu_id 0 --img_downscale 1 --max_train_steps 30000 --lr 0.0005 --sc_lambda 0 --ds_lambda "$ds_lambda" --ds_drop 1 --n_importance "$n_importance" --stdscale "$stdscale" --guidedsample --mapping    

Please replace the value of ProjDir in the above script with your own ProjDir.

3. Test SpS-NeRF

3.1. Render novel views

Please use command conda activate spsnerf to get into the spsnerf environment for this step.

Output=/gpfs/users/lzhang/SpSNeRFProj/DFCDataClean_2imgs/SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1/
logs_dir="$Output"/logs
run_id=SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1
output_dir="$Output"/eval_spsnerf
epoch_number=28

python3 eval.py --run_id "$run_id" --logs_dir "$logs_dir" --output_dir "$output_dir" --epoch_number "$epoch_number" --split val

Please replace the values of Output, run_id, output_dir and epoch_number in the above script with your own settings.

3.2. Generate DSM (Digital Surface Model)

Please use command conda activate spsnerf to get into the spsnerf environment for this step.

Output=/gpfs/users/lzhang/SpSNeRFProj/DFCDataClean_2imgs/SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1/
logs_dir="$Output"/logs
run_id=SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1
output_dir="$Output"/create_spsnerf_dsm
epoch_number=28

python3 ../../code/SpS-NeRF/create_dsm.py --run_id "$run_id" --logs_dir "$logs_dir" --output_dir "$output_dir" --epoch_number "$epoch_number"

Please replace the values of Output, run_id, output_dir and epoch_number in the above script with your own settings.

Acknowledgements

We thank satnerf and dense_depth_priors_nerf, from which this repository borrows code.

Citation

If you find this code or work helpful, please cite:

@article{zhang2023spsnerf,
   author = {Lulin Zhang and Ewelina Rupnik},
   title = {SparseSat-NeRF: Dense Depth Supervised Neural Radiance Fields for Sparse Satellite Images},
   journal = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
   year = {2023}
}


sps-nerf's Issues

About the satnerf eval results

Hello and thank you for your wonderful work!
Here I have a question for you about satnerf. I ran the code published by satnerf and tested it using the provided pretrained model. While PSNR and SSIM matched the results reported in the article, the MAE metric, which measures the accuracy of the DSM, differed.
I raised this issue in centreborelli/satnerf#13, but the suggested solution did not solve the problem: I get MAE = 2.900 m for the Sat-NeRF model on AOI 214, while the article reports 2.125 m. I would like to ask whether you see the same problem in your environment?

About spsnerf train

I would like to express my sincere appreciation for the exceptional work you have done on your GitHub repository. Your contributions to the open-source community are truly commendable.

I am reaching out to you with a few queries that I have encountered while attempting to replicate your code.

  1. During the training process, I came across a line of code that I am unable to execute: cp Sh-SpS-Train-JAX_---_2imgs.sh "$Output"/.. I am unsure about the nature of the file Sh-SpS-Train-JAX_---_2imgs.sh. Could you please provide some insight into what this script is and how it should be used?

  2. Upon running the training, I encountered the following error messages:

LightningDeprecationWarning: `pytorch_lightning.metrics.*` module has been renamed to `torchmetrics.*` and split off to its own package (https://github.com/PyTorchLightning/metrics) since v1.3 and will be removed in v1.5.
RequestsDependencyWarning: urllib3 (1.26.19) or chardet (5.0.0)/charset_normalizer (2.0.12) doesn't match a supported version!

I understand that the pytorch_lightning.metrics module has been deprecated and renamed to torchmetrics. However, I am unsure about the necessary steps to update my code to avoid this deprecation warning. Could you please advise on how to resolve this issue?

I have attached the full log of the error for your reference. Your prompt response will be highly appreciated as it will help me to continue with my project.

Thank you very much for your time and assistance. I look forward to your valuable insights.

/home/harry/anaconda3/envs/spsnerf/lib/python3.6/site-packages/pytorch_lightning/metrics/__init__.py:44: LightningDeprecationWarning: `pytorch_lightning.metrics.*` module has been renamed to `torchmetrics.*` and split off to its own package (https://github.com/PyTorchLightning/metrics)  since v1.3 and will be removed in v1.5
  "`pytorch_lightning.metrics.*` module has been renamed to `torchmetrics.*` and split off to its own package"
/home/harry/anaconda3/envs/spsnerf/lib/python3.6/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.19) or chardet (5.0.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  RequestsDependencyWarning)

Running SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1 - Using gpu 0

--aoi_id:  JAX_214
--beta:  False
--sc_lambda:  0.0
--mapping:  True
--inputdds:  DenseDepth_ZM4
--ds_lambda:  1.0
--ds_drop:  1.0
--GNLL:  False
--usealldepth:  False
--guidedsample:  True
--margin:  0.0001
--stdscale:  1.0
--corrscale:  1
--model:  sps-nerf
--exp_name:  SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1
--lr:  0.0005
--n_samples:  64
--n_importance:  0
------------------------------
--root_dir:  /home/harry/SpS-NeRF/ProjDir/datasetJAX_214/root_dir/crops_rpcs_ba_v2/JAX_214/
--img_dir:  /home/harry/SpS-NeRF/ProjDir/datasetJAX_214/JAX_214/RGB-crops/JAX_214/
--ckpts_dir:  /home/harry/SpS-NeRF/ProjDir/SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1/ckpts
--logs_dir:  /home/harry/SpS-NeRF/ProjDir/SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1/logs
--gt_dir:  /home/harry/SpS-NeRF/ProjDir/datasetJAX_214/JAX_214/Truth
--cache_dir:  /home/harry/SpS-NeRF/ProjDir/SpS_outputJAX_214-DenseDepth_ZM4-FnMd0-ds1-1/cache_dir/crops_rpcs_ba_v2/JAX_214
--ckpt_path:  None
--gpu_id:  0
--batch_size:  1024
--img_downscale:  1.0
--max_train_steps:  30000
--save_every_n_epochs:  4
--fc_units:  512
--fc_layers:  8
--noise_std:  0.0
--chunk:  5120
--ds_noweights:  False
--first_beta_epoch:  2
--t_embbeding_tau:  4
--t_embbeding_vocab:  30
SatNeRF: layers, feat:  8 512
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Load SatelliteRGBDEPDataset with corrscale:  1
/home/harry/SpS-NeRF/ProjDir/datasetJAX_214/root_dir/crops_rpcs_ba_v2/JAX_214//scene.loc already exist, hence skipped scaling
Image JAX_214_009_RGB loaded ( 1 / 2 )
Image JAX_214_010_RGB loaded ( 2 / 2 )
center, range:  tensor([  798962.7500, -5452430.5000,  3200708.5000]) tensor(159.)
Depth JAX_214_009_RGB loaded ( 1 / 2 )
depth range: [0.28295, 0.71207], mean: 0.59907
corr  range: [0.00000, 1.00000], mean: 0.93883
std   range: [0.00010, 1.00010], mean: 0.06127
74.27127 percent of pixels are valid in depth map.
Segmentation fault (core dumped)

MicMac dense generate error

When I followed your instructions for dense depth generation with MicMac, I encountered some problems. They occur in the "#copy the images and refined rpc parameters" section of step 1.2 in the README. After entering the command, an error is reported: Sorry, the following FATAL ERROR happened: cRPC::ReadASCII(const std::string & aFile): ASCII file not found.
I hope you can give me some advice. I'm stuck at this step now. Thank you!
