
satellitesfm's Introduction

Satellite Structure from Motion

Maintained by Kai Zhang.

Why this repo?

I started my computer vision research journey with satellite stereo as my first project. Working on that problem made me feel there is an unnecessary gap between how stereo problems are approached in the computer vision community and in the remote sensing community. Moreover, satellite images seem to attract relatively little attention from the broader computer vision community, perhaps due to limited satellite image availability, which seems to be improving these days. With the increasing availability of satellite datasets, I hope this repo further simplifies access to satellite stereo problems for computer vision researchers and practitioners.

Development roadmaps (Open-source contributions are always welcome!)

  • release SatelliteSfM
  • release SatelliteNeRF as downstream neural rendering applications
  • release scripts to visualize SatelliteSfM output cameras in 3D
  • release TRACK 3: MULTI-VIEW SEMANTIC STEREO data preprocessed by SatelliteSfM
  • re-write ColmapForVisSat as patches to latest Colmap: SfM first, followed by MVS, and finally meshing. You can find the re-written version ColmapForVisSatPatched. Thanks to @SBCV.
  • release SatelliteNeuS that can reconstruct meshes from multi-date satellite images with varying illuminations
  • draw a road map
  • improve documentation of SatellitePlaneSweep, SatelliteNeRF, and SatelliteNeuS
  • port SatelliteSurfaceReconstruction meshing algorithm to the new API
  • release Deep Satellite Stereo as downstream MVS algorithms
  • release code to rectify satellite stereo pairs based on the SatelliteSfM outputs
  • release code to run stereo matching on rectified stereo pairs, including both classical and deep ones

roadmap

Relevant repos for downstream applications

Overview

  • This is a library dedicated to solving the satellite structure from motion problem.
  • It's a wrapper of the VisSatSatelliteStereo repo for easier use.
  • The outputs are png images and OpenCV-compatible pinhole cameras readily deployable to multi-view stereo pipelines targeting ground-level images.

Installation

Assume you are on a Linux machine with at least one GPU, and have conda installed. Then, to install this library, simply run:

. ./env.sh

Inputs

We assume the inputs are a set of .tif images encoding 3-channel uint8 RGB colors, plus metadata such as RPC cameras. This data format aligns with the public satellite benchmark TRACK 3: MULTI-VIEW SEMANTIC STEREO. Download an example dataset from this Google Drive; the folder structure looks like the one below:

- examples/inputs
    - images/
        - *.tif
        - *.tif
        - *.tif
        - ...
    - latlonalt_bbx.json

Here, latlonalt_bbx.json specifies the bounding box for the site of interest in the global (latitude, longitude, altitude) coordinate system.

If you are not sure what a reasonably good altitude range is, you can put placeholder numbers in the json file, but then you have to enable the --use_srtm4 option below.
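For reference, such a bounding-box file can be generated programmatically. The field names below are illustrative guesses, not the library's confirmed schema; check an actual latlonalt_bbx.json from the example download for the exact keys:

```python
import json

# Hypothetical bounding box in global (latitude, longitude, altitude) coordinates.
# Field names here are assumptions -- verify against the example data.
bbx = {
    "lat_min": 32.74, "lat_max": 32.77,     # degrees
    "lon_min": -117.20, "lon_max": -117.16, # degrees
    "alt_min": -30.0, "alt_max": 120.0,     # meters; a rough guess is fine with --use_srtm4
}

with open("latlonalt_bbx.json", "w") as fp:
    json.dump(bbx, fp, indent=2)
```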

Run Structure from Motion

python satellite_sfm.py --input_folder examples/inputs --output_folder examples/outputs --run_sfm [--use_srtm4] [--enable_debug]

The --enable_debug option outputs visualizations helpful for debugging the structure-from-motion quality.

Outputs

  • {output_folder}/images/ folder contains the png images
  • {output_folder}/cameras_adjusted/ folder contains the bundle-adjusted pinhole cameras; each camera is represented by a pair of 4x4 K, W2C matrices that are OpenCV-compatible.
  • {output_folder}/enu_bbx_adjusted.json contains the scene bounding box in the local ENU Euclidean coordinate system.
  • {output_folder}/enu_observer_latlonalt.json contains the observer coordinate for defining the local ENU coordinate; essentially, this observer coordinate is only necessary for coordinate conversion between local ENU and global latitude-longitude-altitude.

If you turn on the --enable_debug option, you might want to dig into the folder {output_folder}/debug_sfm for visuals, etc.
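As a sketch of how the output cameras can be consumed (with made-up intrinsics and pose, not values from an actual run), projecting an ENU-frame point through a 4x4 K / W2C pair follows the usual OpenCV pinhole convention:

```python
import numpy as np

# Made-up 4x4 intrinsics and world-to-camera matrices in the OpenCV convention
# used by the cameras_adjusted/ outputs: x_cam = W2C @ x_world, then
# (u*depth, v*depth, depth) = (K @ x_cam)[:3].
K = np.array([[2000.0,    0.0, 512.0, 0.0],
              [   0.0, 2000.0, 512.0, 0.0],
              [   0.0,    0.0,   1.0, 0.0],
              [   0.0,    0.0,   0.0, 1.0]])
W2C = np.eye(4)
W2C[2, 3] = 100.0  # camera 100 units away along the optical axis

def project(pt_world, K, W2C):
    """Project a 3D world point to a pixel; returns (pixel, depth)."""
    x_cam = W2C @ np.append(pt_world, 1.0)   # [4,]
    u, v, depth = (K @ x_cam)[:3]
    return np.array([u / depth, v / depth]), depth

pixel, depth = project(np.array([0.0, 0.0, 0.0]), K, W2C)
# the ENU origin projects to the principal point at depth 100
```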

Citations

@inproceedings{VisSat-2019,
  title={Leveraging Vision Reconstruction Pipelines for Satellite Imagery},
  author={Zhang, Kai and Sun, Jin and Snavely, Noah},
  booktitle={IEEE International Conference on Computer Vision Workshops},
  year={2019}
}

@inproceedings{schoenberger2016sfm,
  author={Sch\"{o}nberger, Johannes Lutz and Frahm, Jan-Michael},
  title={Structure-from-Motion Revisited},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016},
}

Example results

input images

Input images

sparse point cloud output by SfM

Sparse point cloud

Visualize cameras

python visualize_satellite_cameras.py

Red, Green, Blue axes denote east, north, up directions, respectively. For simplicity, each camera is represented by a line pointing from the origin to that camera center. Visualize cameras

homography-warp one view, then average with another over a plane sequence

Sweep plane high-res video
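The plane-sweep warp above relies on the plane-induced homography. Here is a minimal numpy sketch with made-up cameras, using the plane convention n^T X = d in the first camera's frame (so H = K2 (R + t n^T / d) K1^{-1} for relative pose X2 = R X1 + t):

```python
import numpy as np

# a plane n^T X = d in camera-1 coordinates, and a point on it
n = np.array([0.0, 0.0, 1.0]); d = 10.0
X = np.array([1.0, 2.0, 10.0])  # satisfies n @ X == d

# made-up shared intrinsics and a small relative rotation + translation
K1 = K2 = np.array([[1000., 0., 500.], [0., 1000., 500.], [0., 0., 1.]])
theta = 0.1
R = np.array([[np.cos(theta), 0., np.sin(theta)],
              [0., 1., 0.],
              [-np.sin(theta), 0., np.cos(theta)]])
t = np.array([1.0, 0.0, 0.0])

# plane-induced homography from image 1 to image 2
H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)

def proj(K, X):
    p = K @ X
    return p[:2] / p[2]

x1 = proj(K1, X)                 # point as seen by camera 1
x2 = proj(K2, R @ X + t)         # point as seen by camera 2
x1h = np.append(x1, 1.0)
x2_from_H = (H @ x1h)[:2] / (H @ x1h)[2]
# for points on the plane, warping by H reproduces the camera-2 observation
assert np.allclose(x2, x2_from_H)
```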

inspect epipolar geometry

python inspect_epipolar_geometry.py

inspect epipolar
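Under the hood, inspecting epipolar geometry boils down to the fundamental matrix built from the pinhole cameras. A self-contained numpy sketch (made-up intrinsics and relative pose, not the script's actual code) verifying the epipolar constraint x2^T F x1 = 0:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

# made-up shared intrinsics and relative pose X2 = R @ X1 + t
K1 = K2 = np.array([[1000., 0., 500.], [0., 1000., 500.], [0., 0., 1.]])
theta = 0.05
R = np.array([[np.cos(theta), 0., np.sin(theta)],
              [0., 1., 0.],
              [-np.sin(theta), 0., np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])

E = skew(t) @ R                                   # essential matrix
F = np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)   # fundamental matrix

# any 3D point's two projections must satisfy the epipolar constraint
X = np.array([0.3, -0.2, 12.0])
x1 = K1 @ X                  # homogeneous pixel in view 1
x2 = K2 @ (R @ X + t)        # homogeneous pixel in view 2
assert abs(x2 @ F @ x1) < 1e-6
```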

get zero-skew intrinsics matrix

python skew_correct.py --input_folder ./examples/outputs ./examples/outputs_zeroskew

skew correct
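One standard way to remove skew (not necessarily what skew_correct.py does internally) is to zero the skew entry of K and warp pixels by H = K_new @ inv(K); a pixel's normalized ray is unchanged, only its image location moves:

```python
import numpy as np

# an intrinsics matrix with nonzero skew s, as SfM on satellite images can produce
fx, fy, s, cx, cy = 1200.0, 1150.0, 3.5, 640.0, 480.0
K = np.array([[fx, s, cx],
              [0., fy, cy],
              [0., 0., 1.]])

# zero the skew entry; warp with H = K_new @ inv(K)
K_new = K.copy()
K_new[0, 1] = 0.0
H = K_new @ np.linalg.inv(K)

# map a pixel from the skewed image to its zero-skew location:
x = np.array([100.0, 200.0, 1.0])
x_new = H @ x
# only the horizontal coordinate changes; rows below the principal point shift left/right by s * y_normalized
```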

Downstream applications

One natural task following SatelliteSfM is to acquire a dense reconstruction via classical patch-based MVS, modern deep MVS, or even neural rendering such as NeRF. When working with these downstream algorithms, be careful of the float32 pitfall caused by the huge depth values that result from satellite cameras being very distant from the scene; this deserves particular attention given the prevalence of float32 GPU computing.

[Note: this SatelliteSfM library doesn't have this issue, as it uses float64.]
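The pitfall is easy to demonstrate with nothing but the standard library: at satellite-scale distances (millions of meters), the spacing between adjacent float32 values is already half a meter, so sub-meter geometry is lost.

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python float through IEEE-754 float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

# around 6.4e6 meters the float32 ulp is 0.5 m, so a 20 cm offset vanishes
depth = 6378000.0                              # roughly an Earth radius, in meters
assert to_f32(depth + 0.2) == to_f32(depth)    # sub-meter structure destroyed
assert to_f32(depth + 1.0) != to_f32(depth)    # meter-scale still survives
```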

pitfall of float32 arithmetic

numeric precision

overcome float32 pitfall for NeRF

Center and scale scene to be inside unit sphere by:

python normalize_sfm_reconstruction.py
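A minimal sketch of such a normalization, assuming a hypothetical ENU bounding box (this is the standard center-and-scale similarity, not necessarily what normalize_sfm_reconstruction.py does internally):

```python
import numpy as np

# hypothetical ENU scene bounding box, in meters
bbx_min = np.array([-500.0, -400.0, 0.0])
bbx_max = np.array([500.0, 400.0, 120.0])

center = 0.5 * (bbx_min + bbx_max)
radius = 0.5 * np.linalg.norm(bbx_max - bbx_min)  # half the bbox diagonal
scale = 1.0 / radius

def normalize(p):
    """Apply the same similarity to scene points and camera centers."""
    return (p - center) * scale

# every bbox corner now lies inside (or on) the unit sphere
corners = np.array([[x, y, z] for x in (bbx_min[0], bbx_max[0])
                              for y in (bbx_min[1], bbx_max[1])
                              for z in (bbx_min[2], bbx_max[2])])
assert np.all(np.linalg.norm(normalize(corners), axis=1) <= 1.0 + 1e-12)
```

Note that the translation component of each camera pose must be transformed with the same center and scale, or rays will no longer hit the scene.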

Modify how pixel2ray is computed for NeRF-based models, while keeping the other parts unchanged:

import torch

def pixel2ray(col: torch.Tensor, row: torch.Tensor, K: torch.DoubleTensor, W2C: torch.DoubleTensor):
    '''
    Assume the scene is centered and inside the unit sphere.

    col, row: both [N, ]; float32
    K, W2C: 4x4 OpenCV-compatible intrinsic and world-to-camera matrices; float64

    return:
        ray_o, ray_d: [N, 3]; float32
    '''
    C2W = torch.inverse(W2C)  # float64
    px = torch.stack((col, row, torch.ones_like(col)), dim=-1).unsqueeze(-1)  # [N, 3, 1]; float32
    K_inv = torch.inverse(K[:3, :3]).unsqueeze(0).expand(px.shape[0], -1, -1)  # [N, 3, 3]; float64
    c2w_rot = C2W[:3, :3].unsqueeze(0).expand(px.shape[0], -1, -1)  # [N, 3, 3]; float64
    ray_d = torch.matmul(c2w_rot, torch.matmul(K_inv, px.double()))  # [N, 3, 1]; float64
    ray_d = (ray_d / ray_d.norm(dim=1, keepdim=True)).squeeze(-1)  # [N, 3]; float64

    ray_o = C2W[:3, 3].unsqueeze(0).expand(px.shape[0], -1)  # [N, 3]; float64
    # shift ray_o along ray_d towards the scene in order to shrink the huge depth values,
    # so that the final float32 cast happens near the (normalized) scene
    shift = torch.norm(ray_o, dim=-1) - 5.  # [N, ]; float64; 5. is a small margin outside the unit sphere
    ray_o = ray_o + ray_d * shift.unsqueeze(-1)  # [N, 3]; float64
    return ray_o.float(), ray_d.float()
Example result videos: JAX_168_compressed.mp4, JAX_167_compressed.mp4, JAX_166_compressed.mp4, JAX_165_compressed.mp4, JAX_164_compressed.mp4, JAX_161_compressed.mp4, JAX_156_compressed.mp4, OMA_331_compressed.mp4, OMA_383_compressed.mp4, JAX_416_compressed.mp4

overcome float32 pitfall for neural point based graphics

to be filled...

overcome float32 pitfall for plane sweep stereo, or patch-based stereo, or deep stereo

to be filled...

preprocessed satellite multi-view stereo dataset with ground-truth

This dataset can be used for evaluating multi-view stereo, running neural rendering, etc. You can download it from Google Drive.

More handy scripts are coming

Stay tuned :-)

satellitesfm's People

Contributors

kai-46, pmoulon


satellitesfm's Issues

What downstream algorithm adopted to achieve the effect of JAX_166_compressed.mp4 ?

Hi Kai,
Thanks for providing this awesome repo !
Recently, we have wanted to use satellite images for large-scale city reconstruction from remote sensing data, and the result in JAX_166_compressed.mp4 closely matches what we need. However, we couldn't find any description of the downstream algorithm you adopted. Is it NeuS or MVS?
Could you give us some pointers? We would like to obtain similar results.

I have a problem using satellite_sfm.py

I have a problem using satellite_sfm.py
The error is reported as follows:
(SatelliteSfM) root@LAPTOP-MOJP7CII:/mnt/d/SatelliteSfM/SatelliteSfM# python satellite_sfm.py --input_folder examples/inputs --output_folder examples/outputs --run_sfm [--use_srtm4] [--enable_debug]
Traceback (most recent call last):
File "/root/anaconda3/envs/SatelliteSfM/lib/python3.8/site-packages/osgeo/init.py", line 30, in swig_import_helper return importlib.import_module(mname)
File "/root/anaconda3/envs/SatelliteSfM/lib/python3.8/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1014, in _gcd_import
File "", line 991, in _find_and_load
File "", line 975, in _find_and_load_unlocked
File "", line 657, in _load_unlocked
File "", line 556, in module_from_spec
File "", line 1166, in create_module
File "", line 219, in _call_with_frames_removed
ImportError: libpoppler.so.126: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "satellite_sfm.py", line 4, in
from preprocess.preprocess_image_set import preprocess_image_set
File "/mnt/d/SatelliteSfM/SatelliteSfM/preprocess/preprocess_image_set.py", line 12, in
from preprocess.parse_tif_image import parse_tif_image
File "/mnt/d/SatelliteSfM/SatelliteSfM/preprocess/parse_tif_image.py", line 3, in
from osgeo import gdal, gdalconst
File "/root/anaconda3/envs/SatelliteSfM/lib/python3.8/site-packages/osgeo/init.py", line 46, in
_gdal = swig_import_helper()
File "/root/anaconda3/envs/SatelliteSfM/lib/python3.8/site-packages/osgeo/init.py", line 43, in swig_import_helper return importlib.import_module('_gdal')
File "/root/anaconda3/envs/SatelliteSfM/lib/python3.8/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named '_gdal'
Last display:ModuleNotFoundError: No module named '_gdal'
But I have installed GDAL and still can't solve it. Do you have any suggestions?

Apply the code on the own PAN dataset

Hi Kai,

I noticed that the input dataset consists of .tif images encoding 3-channel uint8 RGB colors. However, my own PAN dataset has only one channel; the details are as follows:

  • PAN_SEN_PWOI_000004990_1_2_F_1_RPC.TXT
  • DIM_PAN_SEN_PWOI_000004990_1_2_F_1.XML
  • PAN_SEN_PWOI_000004990_1_2_F_1_P_R2C1.TFW
  • IMG_PAN_SEN_PWOI_000004990_1_2_F_1_P_R2C1.TIF
  • PREVIEW_PAN_SEN_PWOI_000004990_1_2_F_1.jpg
  • RPC_PAN_SEN_PWOI_000004990_1_2_F_1.XML

The .TIF files are single-channel PAN images. I tried revising the code with np.expand_dims to expand the channels from 1 to 3 and commented out date_time. When running python satellite_sfm.py --input_folder examples/my_own_data --output_folder examples/outputs_my_own_data --run_sfm --use_srtm4, I got the following bug:

bug.txt

So, how can I apply the code to my own PAN dataset, or at least get something to run? I don't know how to set latlonalt_bbx.json other than by using the --use_srtm4 flag.

Thank you so much

File name changes when using ColmapForVisSatPatched

Hey,

I used ColmapForVisSatPatched during installation, but encountered some errors that I believe are due to a newer Colmap version being used than in the original ColmapForVisSat.

In detail:

  1. The Python scripts are contained inside Colmap/scripts/python, but SatelliteSfM expects them to be inside preprocess_sfm/colmap/.
  2. The read_model.py script has been renamed to read_write_model.py (relevant commit). SatelliteSfM still expects this file to be named read_model.py, so I had to manually rename it.

After these changes everything seems to work as expected, the sample provided runs through without any issues.

Best regards,
Valentin

How to enable --use_srtm4 option in command?

@Kai-46 How to enable --use_srtm4 option in command?

On writing this command I am getting the error.

Command written:

python3 satellite_sfm.py --input_folder /Users/jaskiratsingh/IIIT-Hyderabad-Research/SatelliteSfM_Input_Images --output_folder /Users/jaskiratsingh/IIIT-Hyderabad-Research/SatelliteSfM_Output_Image --run_sfm [--use_srtm4] [--enable_debug]

Error:

zsh: no matches found: [--use_srtm4]

Can you help me know how can I resolve this?

Thanks!

Not able to install adpated Colmap

Hello,

I am running into some problems when calling install_colmapforvissat.sh

At first, everything seems fine and it starts building and downloading different components, like ceres and such. However, when it reaches the moment of building colmap_cuda, it runs into the following error:

[ 25%] Built target colmap_cuda
[ 26%] Building C object lib/VLFeat/CMakeFiles/vlfeat.dir/scalespace.c.o
In file included from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.h:21,
from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:363:
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c: In function ‘_vl_kmeans_quantize_f’:
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/mathop.h:92:37: error: ‘vl_infinity_d’ not specified in enclosing ‘parallel’
92 | #define VL_INFINITY_D (vl_infinity_d.value)
| ~~~~~~~~~~~~~~^~~~~~~
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:685:34: note: in expansion of macro ‘VL_INFINITY_D’
685 | TYPE bestDistance = (TYPE) VL_INFINITY_D ;
| ^~~~~~~~~~~~~
In file included from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:1782:
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:672:9: error: enclosing ‘parallel’
672 | #pragma omp parallel default(none)
| ^~~
In file included from /home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:1788:
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c: In function ‘_vl_kmeans_quantize_d’:
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:685:27: error: ‘vl_infinity_d’ not specified in enclosing ‘parallel’
685 | TYPE bestDistance = (TYPE) VL_INFINITY_D ;
/home/guri_ar/3drend/SatelliteSfM/preprocess_sfm/ColmapForVisSat/lib/VLFeat/kmeans.c:672:9: error: enclosing ‘parallel’
672 | #pragma omp parallel default(none)
| ^~~
make[2]: *** [lib/VLFeat/CMakeFiles/vlfeat.dir/build.make:258: lib/VLFeat/CMakeFiles/vlfeat.dir/kmeans.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:902: lib/VLFeat/CMakeFiles/vlfeat.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 56%] Built target colmap
make: *** [Makefile:141: all] Error 2
Command failed: cmake --build . --target install --config Release -- -j8

I have tried to build it myself from source by running cmake and then make, and it ran into the same problem. My cmake configuration is able to detect my CUDA setup and seems to work fine.

My specs are:

Ubuntu 20.04
gcc g++ version 9
nvcc version 11.3
cmake version 3.10
Libboost version 1.71.0

Many thanks in advance for your help and this great project !!

Hello, I'm running bash install_colmapforvissat.sh and I'm getting the following error:

Traceback (most recent call last):
File "ColmapForVisSat/scripts/python/build.py", line 545, in
main()
File "ColmapForVisSat/scripts/python/build.py", line 524, in main
build_glew(args)
File "ColmapForVisSat/scripts/python/build.py", line 314, in build_glew
download_zipfile(url, archive_path, args.build_path,
File "ColmapForVisSat/scripts/python/build.py", line 189, in download_zipfile
urllib.request.urlretrieve(url, archive_path)
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 247, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 1397, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/home/cver/anaconda3/envs/SatelliteSfM/lib/python3.8/urllib/request.py", line 1357, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>
It seems that the following link is broken.

Some issues during ColmapforVisSat installation

  1. In line 311 of ColmapForVisSat/scripts/python/build.py:
def build_glew(args):
    path = os.path.join(args.build_path, "glew")
    if os.path.exists(path):
        return

    url = "https://kent.dl.sourceforge.net/project/glew/" \
          "glew/2.1.0/glew-2.1.0.zip"

The URL is not available now, so I replaced it with the following one and it works for me:

url = "https://sourceforge.net/projects/glew/files/glew/2.1.0/glew-2.1.0.zip/download"

  2. Boost-related issues:
CMake Error at /home/jojo/anaconda3/envs/SatelliteSfM/lib/cmake/Boost-1.74.0/BoostConfig.cmake:141 (find_package):
  Found package configuration file:

    /home/jojo/anaconda3/envs/SatelliteSfM/lib/cmake/boost_program_options-1.74.0/boost_program_options-config.cmake

  but it set boost_program_options_FOUND to FALSE so package
  "boost_program_options" is considered to be NOT FOUND.  Reason given by
  package:

  No suitable build variant has been found.

  The following variants have been tried and rejected:

  * libboost_program_options.so.1.74.0 (shared, Boost_USE_STATIC_LIBS=ON)

Call Stack (most recent call first):
  /home/jojo/anaconda3/envs/SatelliteSfM/lib/cmake/Boost-1.74.0/BoostConfig.cmake:258 (boost_find_component)
  /usr/share/cmake-3.16/Modules/FindBoost.cmake:443 (find_package)
  CMakeLists.txt:94 (find_package)

According to

 Found package configuration file:

    /home/jojo/anaconda3/envs/SatelliteSfM/lib/cmake/boost_program_options-1.74.0/boost_program_options-config.cmake

  but it set boost_program_options_FOUND to FALSE so package
  "boost_program_options" is considered to be NOT FOUND. 

In /home/jojo/anaconda3/envs/SatelliteSfM/lib/cmake/boost_program_options-1.74.0/boost_program_options-config.cmake, at line 71,
I changed set(boost_program_options_FOUND 0) to set(boost_program_options_FOUND 1), and it works.

Similarly, I changed the other boost related files:

...cmake/boost_filesystem-1.74.0/boost_filesystem-config.cmake
...cmake/boost_graph-1.74.0/boost_graph-config.cmake
...cmake/boost_regex-1.74.0/boost_regex-config.cmake
...cmake/boost_system-1.74.0/boost_system-config.cmake
...cmake/boost_unit_test_framework-1.74.0/boost_unit_test_framework-config.cmake

They are all at line 71, and I changed set(boost_{packagename}_FOUND 0) to set(boost_{packagename}_FOUND 1).
Since I didn't want to change my environment settings, I just manually changed the files one by one.

If you have another way to do it easily, I would appreciate it if you share it with me.
