
Satlas Super Resolution

Satlas aims to provide open AI-generated geospatial data that is highly accurate, available globally, and updated on a frequent (monthly) basis. One of the data applications in Satlas is globally generated Super-Resolution imagery for 2023.


We describe the many findings that led to the global super-resolution outputs in the paper, Zooming Out on Zooming In: Advancing Super-Resolution for Remote Sensing. Supplementary material is available here.

This repository contains the training and inference code for the AI-generated Super-Resolution data found at https://satlas.allen.ai/, as well as code, data, and model weights corresponding to the paper.

Download

Data

There are two training sets:

  • The full set (train_full_set), consisting of ~44 million pairs from all locations where NAIP imagery was available between 2019 and 2020.
  • The urban set (train_urban_set), consisting of ~1.2 million pairs from locations within a 5km radius of US cities with a population >= 50k.

The urban set was used for all experiments in the paper, because we found the full set to be dominated by monotonous landscapes.

There are three val/test sets:

  • The validation set (val_set) consists of 8192 image pairs.
  • A small subset of the validation set (small_val_set), with 256 image pairs specifically from urban areas, which is useful for qualitative analysis.
  • A test set containing eight 16x16 grids of Sentinel-2 tiles from interesting locations including Dry Tortugas National Park, Bolivia, France, South Africa, and Japan.

All of the above data (except for the full training set due to size) can be downloaded at this link. The full training set is available for download at this link.

Model Weights

The weights for the ESRGAN models used to generate super-resolution outputs for Satlas, with varying numbers of Sentinel-2 images as input, are available for download at these links:

The weights for the L2-loss models trained on our S2-NAIP dataset:

We are working to upload all models and pretrained weights corresponding to the paper.

S2-NAIP Dataset Structure

The dataset consists of image pairs from the Sentinel-2 satellites and the aerial NAIP program, where a pair is a time series of Sentinel-2 images that overlap spatially and temporally (within three months) with a NAIP image. The imagery is from 2019-2020 and is limited to the USA.

The images adhere to the same Web-Mercator tile system as in SatlasPretrain.
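
The tile filenames used below (e.g. 1234_5678) appear to be Web-Mercator tile column and row indices. As a minimal sketch, the standard slippy-map computation is shown here; the zoom level is an illustrative assumption, not necessarily the one used by SatlasPretrain.

import math

def lonlat_to_tile(lon, lat, zoom):
    # Standard Web-Mercator (slippy-map) tile indices for a lon/lat pair.
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: the tile containing Seattle at an assumed zoom level.
print(lonlat_to_tile(-122.33, 47.61, zoom=17))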

NAIP

The NAIP images included in this dataset are downsampled to 25% of the original NAIP resolution. Each image is 128x128px.

In each set, there is a naip folder containing images in this format: naip/image_uuid/tci/1234_5678.png.

Sentinel-2

For each NAIP image, there is a time series of corresponding 32x32px Sentinel-2 images. Each time series is saved as a single PNG of shape [number_sentinel2_images * 32, 32, 3], i.e. the images are stacked vertically. Before running this data through the models, it is reshaped to [number_sentinel2_images, 32, 32, 3].

In each set, there is a sentinel2 folder containing these time series in the format: sentinel2/1234_5678/X_Y.png where X,Y is the column and row position of the NAIP image within the current Sentinel-2 image.
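
To make the layout concrete, here is a minimal loading sketch; the set name, image_uuid, and X_Y values are placeholders, and numpy/PIL are assumed to be available.

import numpy as np
from PIL import Image

# Placeholder paths following the layout described above.
naip_path = "train_urban_set/naip/<image_uuid>/tci/1234_5678.png"
s2_path = "train_urban_set/sentinel2/1234_5678/X_Y.png"

naip = np.array(Image.open(naip_path))    # (128, 128, 3) high-res target
s2_stack = np.array(Image.open(s2_path))  # (n_images * 32, 32, 3), stacked vertically

n_images = s2_stack.shape[0] // 32
s2_series = s2_stack.reshape(n_images, 32, 32, 3)  # time series of low-res inputs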

Model

In the paper, we experiment with SRCNN, HighResNet, SR3, and ESRGAN. For a good balance of output quality and inference speed, we use the ESRGAN model for generating global super-resolution outputs.

Our ESRGAN model is an adaptation of the original ESRGAN, with changes that allow the input to be a time series of Sentinel-2 images. All models are trained to upsample by a factor of 4.
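
One straightforward way to feed a time series to a single-image network is to concatenate the images along the channel dimension; the following is a sketch under that assumption, not necessarily the exact modification used in this repository.

import torch

def to_model_input(s2_series):
    # Collapse an (n_images, 32, 32, 3) time series into a (1, n_images*3, 32, 32)
    # tensor by concatenating the images along the channel dimension.
    n, h, w, c = s2_series.shape
    return s2_series.permute(0, 3, 1, 2).reshape(1, n * c, h, w)

series = torch.rand(8, 32, 32, 3)  # e.g. a time series of 8 Sentinel-2 images
x = to_model_input(series)         # (1, 24, 32, 32)
# A 4x model maps this to a (1, 3, 128, 128) output, matching the NAIP target size.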

Training

To train a model on this dataset, run the following command, with the desired configuration file:

python -m ssr.train -opt ssr/options/esrgan_s2naip_urban.yml

There are several sample configuration files in ssr/options/. Make sure the configuration file specifies correct paths to your downloaded data, the desired number of low-resolution input images, model parameters, and pretrained weights (if applicable).
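
Before launching training, it can help to sanity-check the options file. The snippet below is a hedged sketch: it assumes a BasicSR-style layout with dataset roots under datasets/train, which may not match the actual key names in ssr/options/.

import os
import yaml

with open("ssr/options/esrgan_s2naip_urban.yml") as f:
    opt = yaml.safe_load(f)

print("top-level keys:", sorted(opt))

# BasicSR-style configs usually keep dataset roots under datasets/train;
# the exact key names here are assumptions.
train_opt = opt.get("datasets", {}).get("train", {})
for key, value in train_opt.items():
    if isinstance(value, str) and os.sep in value:
        print(key, "->", value, "(exists)" if os.path.exists(value) else "(MISSING)")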

Add the --debug flag to the above command if wandb logging, model saving, and visualization creation are not wanted.

Inference

To run inference on the provided validation or test sets, run the following command (--data_dir should point to your downloaded low-resolution data):

python -m ssr.infer --data_dir satlas-super-resolution-data/{val,test}_set/sentinel2/ --weights_path PATH_TO_WEIGHTS --n_s2_images NUMBER_S2_IMAGES --save_path PATH_TO_SAVE_OUTPUTS

When running inference on an entire Sentinel-2 tile (consisting of a 16x16 grid of chunks), there is a --stitch flag that will stitch the individual chunks together into one large image.

Try this feature out on the test set:

python -m ssr.infer --data_dir satlas-super-resolution-data/test_set/sentinel2/ --stitch
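
For intuition, stitching amounts to pasting each chunk's output into a grid-sized canvas. This is a minimal sketch, assuming 128x128 outputs (32x32 inputs upsampled 4x), a 16x16 grid, and X_Y.png filenames as in the dataset layout above; the actual --stitch implementation may differ.

import glob, os
from PIL import Image

def stitch(chunk_dir, chunk_px=128, grid=16):
    # Paste a grid x grid set of super-resolved chunks named "X_Y.png" into one image.
    canvas = Image.new("RGB", (grid * chunk_px, grid * chunk_px))
    for path in glob.glob(os.path.join(chunk_dir, "*_*.png")):
        x, y = map(int, os.path.splitext(os.path.basename(path))[0].split("_"))
        canvas.paste(Image.open(path), (x * chunk_px, y * chunk_px))
    return canvas

stitch("outputs/tile_chunks").save("stitched.png")  # placeholder paths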

Accuracy

There are instances where the generated super-resolution outputs are incorrect.

Specifically:

  1. Sometimes the model generates vessels in the water or cars on a highway, but because the input is a time series of Sentinel-2 imagery (which can span a few months), it is unlikely that such transient objects persisted at one location.

  2. Sometimes the model generates natural objects like trees or bushes where there should be a building, or vice versa. This is more common in places that look vastly different from the USA, such as Kota, India.

Acknowledgements

Thanks to these codebases for foundational Super-Resolution code and inspiration:

  • BasicSR

  • Real-ESRGAN

Contact

If you have any questions, please email [email protected].
