
svalbardsurges's Introduction

SvalbardSurges

Installation

pyproj seems to sometimes not work properly from the conda-forge channel, and may need to be installed with pip instead.

conda remove -f pyproj
pip install pyproj

svalbardsurges's People

Contributors

eliskasieglova, erikmannerfelt

Forkers

erikmannerfelt

svalbardsurges's Issues

Simplify and speed up DEM sampling

This code that you wrote is theoretically perfect! It's more or less what would be done under the hood in an element-wise assignment anyway. The issue, however, is that Python loops are very slow in practice. It may work now, but with 8 million points it would take days.

SvalbardSurges/main.py

Lines 160 to 187 in 806d6e2

delta_h = []
# for each IS2 measurement point
for i in range(len(is2.easting)):
    # value of easting, northing and height from IS2 data
    e = is2.easting.values[i]
    n = is2.northing.values[i]
    elev_is2 = is2.h_te_best_fit.values[i]
    # value of elevation (DEM) for current pt in the
    elev_dem = dem.value_at_coords(e, n)
    # subtract the values
    elev_difference = elev_dem - elev_is2
    # if the elevation difference has a nodata value (outside of GAO boundaries), append nodata
    # else append elevation difference as float
    # maybe there's a smoother way of doing this - for example remove all the pts with nodata value from dataset (now there is gonna be a lot of nan values...)
    if elev_difference == dem.nodata:
        delta_h.append(np.nan)
    else:
        delta_h.append(float(elev_difference))

# create new variable and assign elevation differences to existing IS2 dataset
is2 = is2.assign(dh=delta_h)
# values are assigned as coordinates, fixes them to be assigned as values
is2_dh = is2.reset_index("dh").reset_coords("dh")

I expected this to be possible in many fewer lines, but I may be missing something. The DEM.value_at_coords() method can take arrays of coordinates, which we can use in our favour to speed things up:

is2["dem_elevation"] = "index", dem.value_at_coords(is2["easting"], is2["northing"])

Note that I haven't tried this code, so it might need some tweaking.
As far as I remember, the only xarray coordinate in the dataset is one called "index", right? There's a shorthand way of saying "assign these values with the following (already existing) coordinates", which is what I've done above. Basically, you assign a two-element tuple where the first is the name of the coordinate, and the second is the value array.

It might be that DEM.value_at_coords() needs loaded values. In that case, you can change the above to one of these:

dem.value_at_coords(is2["easting"].values, is2["northing"].values)
# OR
dem.value_at_coords(is2["easting"].load(), is2["northing"].load())

When you have DEM elevations as a variable in the xarray dataset, you can easily make dh by simply subtracting the two:

is2["dh"] = is2["dem_elevation"] - is2["h_te_best_fit"]

An advantage to retaining the DEM elevation as a variable is that it should optimally be used for the hypsometric binning (#1) instead of IS2 elevations.

Hypsometric binning fix

The hypsometric binning function is almost correct! xdem.volume.hypsometric_binning takes two arrays of equal shape and bins them in a specified way. This means that the .shape of the first argument must equal that of the second. Note that I mentioned in my email that this function won't work at all, but I take that back after looking at the code.

When you open the raster (L199), it will have a shape of, say, (3000, 2000), while your is2 data may have a shape of something like (10000, C) (where C is the number of columns). This will fail directly, because the hypsometric binning function has no code that transforms between different types of data (such as rasters to points).

SvalbardSurges/main.py

Lines 195 to 222 in 806d6e2

def hypsometric_binning(is2_path, ref_dem_path):
    # not working
    # open datasets with xarray
    is2 = xr.open_dataset(is2_path)  # ICESat-2 data
    ref_dem = xr.open_dataset(ref_dem_path)  # cropped DEM
    # write crs
    is2.rio.write_crs("epsg:32633", inplace=True)
    ref_dem.rio.write_crs("epsg:32633", inplace=True)
    # rename easting, northing to x, y (only necessary in IS2 data)
    is2 = is2.rename({'easting': 'x', 'northing': 'y'})
    # create a subset only containing variables x, y, dh
    #rast.reset_index("dh").reset_coords("dh")  # resets dh values from coords to variable
    is2 = is2[['x', 'y', 'dh']]
    # convert to ndarray
    is2 = is2.to_array()
    is2 = is2.to_numpy()
    ref_dem = ref_dem.to_array()
    ref_dem = ref_dem.to_numpy()
    # do hypsometric binning
    xdem.volume.hypsometric_binning(is2, ref_dem)
    return

What you need to do is to sample the DEM elevation and store it in the point data. That way, the shape will be equal. A quick (but perhaps not long-term great) way is to instead use the IS2 elevation:

def hypsometric_binning(is2_path):
  data = xr.open_dataset(is2_path)

  binned = xdem.volume.hypsometric_binning(ddem=data["dh"], ref_dem=data["h_te_best_fit"])

  return binned

In the future, or now directly, we may want to use quantile binning (just add kind="quantile" as an argument) as this will make comparisons between glaciers much easier. Also maybe ease down on the bin count (e.g. bins=10).
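
For reference, the quantile variant would just change that call, roughly like this (an untested sketch; `data` is assumed to be the opened IS2 dataset from the snippet above):

binned = xdem.volume.hypsometric_binning(
    ddem=data["dh"],                # elevation change per point
    ref_dem=data["h_te_best_fit"],  # reference elevation (the DEM elevation would be better, see above)
    kind="quantile",                # bin by elevation quantiles instead of fixed intervals
    bins=10,                        # fewer bins for easier comparison
)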

Add an `environment.yml` file

Right now, we have a pretty good view of the dependencies that are required for the code to run. To make it easy to install on new computers, it's nice to have an "environment file". This file describes how to recreate the environment using conda/mamba. It's relatively easy to set up:

name: SvalbardSurges
channels:
  - conda-forge
dependencies:
  - python >= 3.10
  - geopandas
  - xarray
  - netcdf4
  - matplotlib
  - rasterio
  - pip:
    - variete

Note that the pyproj hack that we made with pip is not present here. It could be added as a pip dependency, but the problem is (or at least should be) unique to your computer for some reason. Therefore it's best not to include the hack here, but perhaps to document it in the (not yet existing) README.md:

## Installation
pyproj seems to sometimes not work properly from the conda-forge channel, and may need to be installed with pip instead.

conda remove -f pyproj
pip install pyproj

Add dynamic glacier outline download functionality

Identical to #9 apart from the type of data. See #9 for the rationale.

The same (long term) issue is present here with data having to be downloaded manually, but could be solved by just automatically downloading from the https://npolar.no website instead.

SvalbardSurges/main.py

Lines 137 to 140 in 2fa3e53

# paths
file_name = Path("GAO_SfM_1936_1938_v3.shp")
dir_name = Path("C:/Users/eliss/SvalbardSurges/GAO")
file_path = dir_name/file_name

Here's the link to the dataset repo:
https://data.npolar.no/dataset/f6afca5c-6c95-4345-9e52-cfe2f24c7078

And here's the direct link to the glacier outline dataset:
https://api.npolar.no/dataset/f6afca5c-6c95-4345-9e52-cfe2f24c7078/_file/3df9512e5a73841b1a23c38cf4e815e3

geopandas can read shapefiles within zipfiles, so we don't have to worry about extracting them!
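
As a sketch of what the download could look like (untested; `download_large_file` refers to the helper proposed in the DEM download issue below, and the filename is an assumption):

import geopandas as gpd

outline_url = "https://api.npolar.no/dataset/f6afca5c-6c95-4345-9e52-cfe2f24c7078/_file/3df9512e5a73841b1a23c38cf4e815e3"

# Download the zipped outlines once (cached by the helper), then read the shapefile straight from the zip
zip_path = download_large_file(outline_url, filename="GAO_SfM_1936_1938_v3.zip")
gao = gpd.read_file(zip_path)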

Add per-bin standard deviation in the hypsometric binning

It's great that you've added the spread (std)! This will help us figure out a qualitative or quantitative signal to noise ratio on the plots.

hypso_bins = {}
stddev = {}
bins = np.nanpercentile(data["dem_elevation"], np.linspace(0, 100, 11))
for year, data_subset in data.groupby(data["date"].dt.year):
    # correct elevation and add it to dataset todo: better conversion
    data_subset['dh_corr'] = data_subset['dh'] + 31.55
    data[year] = data_subset['dh_corr']
    # create hypsometric bins
    hypso = xdem.volume.hypsometric_binning(ddem=data_subset["dh_corr"], ref_dem=data_subset["dem_elevation"], kind="custom", bins=bins)
    hypso_bins[year] = hypso
    # compute standard deviation
    stddev[year] = np.std(data_subset['dh'])

Right now you're calculating a global spread (over all bins). Come to think of it, this might actually be really useful, as a surging glacier should have a massive std while a non-surging one should have a small one. I think an even more useful tool, however, would be a per-bin std:

stds = xdem.volume.hypsometric_binning(
  ddem=data_subset["dh_corr"],
  ref_dem=data_subset["dem_elevation"],
  kind="custom",
  bins=bins,
  aggregation_function=np.nanstd
)

Then you'd have two hypsometric dataframes. We could experiment with different ways to combine them. I think it should be as simple as first renaming the median df to something more useful:

hypso.rename(columns={"value": "median"}, inplace=True)

and then adding the standard deviation as a column:

hypso["std"] = stds["value"]

`IS2_DEM_difference` caches the result but doesn't fetch it again

It's great that you're caching the result of the DEM differencing! There's however no cache retrieval logic so the process is rerun every time!

SvalbardSurges/main.py

Lines 190 to 191 in 448be05

cache_path = Path(f"cache/{label}-is2-dh.nc")
is2_dh.to_netcdf(cache_path)

This is an easy fix by simply adding something almost identical to what's in subset_is2:

SvalbardSurges/main.py

Lines 67 to 71 in 448be05

cache_path = Path(f"cache/{label}-is2.nc")
# if subset already exists open dataset
if cache_path.is_file():
return xr.open_dataset(cache_path)

Basically, make the cache_path at the start of the function, then add the cache check just after.
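
In other words, roughly this (a sketch; the existing function body is elided):

# At the top of the function: build the cache path and return early if a cached result exists
cache_path = Path(f"cache/{label}-is2-dh.nc")
if cache_path.is_file():
    return xr.open_dataset(cache_path)

# ... the existing dH computation ...

# At the end, keep the existing caching step
is2_dh.to_netcdf(cache_path)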

Move the nan filtering to the elevation change function

A good dH filtering step is now in the hypsometric interpolation function. This is the first time it's needed (right now at least), but I think this would fit better inside the elevation change function instead, as this filtering should always apply, not just when hypsometric binning is performed.

SvalbardSurges/main.py

Lines 285 to 286 in d543748

# replace no data values with nan
data = data.where(data.dem_elevation < 2000)

Speaking of filtering, the whole reason for that line was that nodata values are sampled. I wonder if this can be fixed by changing the sampling strategy a bit. The Raster.value_at_coords method has a masked keyword that should mask out any nodata.
https://geoutils.readthedocs.io/en/stable/gen_modules/geoutils.Raster.value_at_coords.html#geoutils.Raster.value_at_coords

A masked array cannot (to my knowledge) be set to an xarray dataset, so we need the .filled(np.nan) method to replace masked values with nans:

dem.value_at_coords(is2.easting, is2.northing, masked=True).filled(np.nan)

This might solve the nodata problem completely if we're lucky!

Make sure the `figures/` directory exists.

I checked the hypsometric plotting function and it saves to a figures/ directory which is great! This will however fail if the figures/ directory doesn't already exist.

plt.savefig(f'figures/{label}.png')
#plt.show()

There are two ways of fixing this:

Path("figures/").mkdir(exist_ok=True)

# or
if not os.path.isdir("figures/"):
    os.mkdir("figures/")

Bin the data hypsometrically for each year

The hypsometric binning seems to work for all data! Hooray!

The next step would be to bin it separately for each year (we can discuss other time periods like 6/3 months, but one year is commonplace).

SvalbardSurges/main.py

Lines 293 to 295 in d543748

# hypsometric binning (ddem = elevation change, ref_dem = elevation from DEM)
hypso = xdem.volume.hypsometric_binning(ddem=data["dh"], ref_dem=data["dem_elevation"], kind="quantile", bins=10)

This is quite easy using the xarray.DataArray.dt accessor that allows us to loop through every year easily:

for year, data_subset in data.groupby(data["date"].dt.year):
    # run the hypsometric binning on `data_subset` instead of `data`
    # Save the data by appending to a list or a dataframe

Then we have to merge the results somehow into one nice dataframe or dataset. Making an xarray dataset might be easiest, as it's so easy to work with multidimensional data. We could have a dataset with three dimensions: (glacier, time, elevation band). What do you think of that, @eliskasieglova?

Another consideration is that for nice comparison between years, we might have to pre-set the bins to use. Otherwise, the estimated bins may be slightly different depending on the year, which means we'll have a really hard time comparing different years. We could pre-define the bins to be, say, percentiles:

bins = np.nanpercentile(data["dem_elevation"], np.linspace(0, 100, 11))

This will generate bins for the 0-10th percentile, 10-20th, etc.

We can use these bins by simply supplying bins=bins to the hypsometry function.
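
Putting the pieces together, a per-year loop with pre-set bins could look roughly like this (an untested sketch; it assumes pandas is imported as pd and the variable names follow the snippets above):

# Pre-set percentile bins so every year is binned identically
bins = np.nanpercentile(data["dem_elevation"], np.linspace(0, 100, 11))

yearly_bins = {}
for year, data_subset in data.groupby(data["date"].dt.year):
    yearly_bins[year] = xdem.volume.hypsometric_binning(
        ddem=data_subset["dh"],
        ref_dem=data_subset["dem_elevation"],
        kind="custom",
        bins=bins,
    )

# Merge the per-year results into one dataframe indexed by (year, elevation bin)
hypso_by_year = pd.concat(yearly_bins, names=["year"])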

Improve outline selection

You bring up a good point with this comment:

# if there's more glaciers with the same name select the largest one (??)

There might be cases of duplicate names. This isn't great, and as you say, requires some kind of handling.

I see two solutions, where the choice between them almost needs a real-world example:

  1. Merge the two outlines (perhaps only if they touch?)
  2. Pick the largest outline.

If you want to filter by the largest, it's pretty fast in (geo-)pandas:

gao_glacier["area"] = gao_glacier.geometry.area

return gao_glacier.sort_values("area").iloc[-1]

This first creates an area column (maybe it already exists? But it's best to make a new one in case we change outlines). Then it sorts by area and extracts the last (largest) row. Note that this will fail with a very confusing error if the gao.query(...) call returned nothing. Therefore, after the query, I suggest adding a check that there is at least one outline in the filtered data:

assert gao_glacier.shape[0] > 0, f"Query NAME=='{glacier_name}' returned zero features."

If you want to merge all touching outlines instead, you can chain these two methods:

gao_glacier = gao_glacier.dissolve().explode()

This will first take the unary union of all outlines (dissolve()) and then separate all outlines that don't touch into separate rows (explode()). Note that most metadata are lost using this approach. But we don't need that right now, so I guess it's fine!

Change top-function comments to docstrings

It's great that all functions are documented! There's a slightly better way to do this than to just add #-comments after the definition like now:

SvalbardSurges/main.py

Lines 64 to 65 in 806d6e2

def subset_is2(data, bounds, label):
    # subset IS2 data by bounds, create label for them

Docstrings are special types of strings that are automatically parsed in the function creation when placed correctly. They also really help with larger code bases, as your editor will show them automatically when you start typing the function.

There are many different styling standards for docstrings. I personally prefer numpydoc as it is very easy to read and can also be parsed automatically by documentation tools. In the above case, the docstring could look something like:

def subset_is2(is2_data, bounds, label):
    """
    Subset the IceSat-2 data using the specified bounds.

    The easting and northing variables need to be loaded in memory, so this is a computationally expensive task.
    The function is cached using the label argument to speed up later calls.

    Parameters
    ----------
    - is2_data
        The IceSat-2 data to subset
    - bounds
        The bounding box to use (requires the keys "left", "right", "bottom", "top")
    - label
        A label to assign when caching the subset result

    Returns
    -------
    A subset IS2 dataset within the given bounds.
    """
    # The actual code goes here

This might seem awfully long for something that right now is obvious, but trust me, it's worth it. In a few months, you might have so many different functions that these docstrings will really help us to keep track of what's doing what!

Note the somewhat rigid structure of the docstring convention:

  1. A one-liner describing briefly what the function does, followed by a new line
  2. Optional additional information
  3. A "Parameters" title followed by 10 dashes (sometimes the parsers are picky about the amount of dashes and sometimes they're not..)
  4. A list of arguments as - arg, followed by a new line, a tab, and then a description of the argument
  5. A "Returns" title with 7 dashes (the amount of dashes should be equal to the amount of characters in the title name, same as Parameters which is 10 characters)
  6. A brief description of what is returned

There are also more optional titles like "Raises" (for potential errors), "Examples", "See also" etc. We don't need them right now, but they might come in handy later.

Maybe provide an xarray dataset for hypsometric binning instead of a path for consistency?

There's nothing wrong with the hypsometric binning function taking a path instead of a dataset object. However, it is inconsistent with how the other functions work, which might cause problems further down the line.

SvalbardSurges/main.py

Lines 264 to 283 in d543748

def hypsometric_binning(data_path):
    """
    Hypsometric binning of DEM.
    Apart from doing the hypsometric binning, this function also creates a scatter plot of
    elevation differences, and results of the hypsometric binning. Maybe I should think about
    moving this to a different function.
    Parameters
    ----------
    - data-path
        path to IS2 dataset containing dh and dem_elevation variables
    Results
    -------
    Pandas dataframe of elevation bins.
    """
    # open dataset with xarray
    data = xr.open_dataset(data_path)  # ICESat-2 data

I suggest simply changing the argument of the function to take a dataset instead of a path. That way, in the main function, you don't have to hard-code the cache path either and can just give it the is2_dh variable!
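
Roughly like this (a sketch of just the signature change; the binning call is the one from the per-year binning issue above):

def hypsometric_binning(data):
    """Hypsometric binning of the elevation changes (now takes the dataset directly)."""
    hypso = xdem.volume.hypsometric_binning(ddem=data["dh"], ref_dem=data["dem_elevation"], kind="quantile", bins=10)
    return hypso

# ... and in the main function, just pass the already-loaded dataset:
hypso = hypsometric_binning(is2_dh)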

Clip the IS2 data instead of the DEM

For each glacier, we only want the points that intersect with the particular glacier, of course. I recommended one approach which ended up being very computationally demanding. The current approach is:

  1. Subset the IS2 data to the bounds of the glacier
  2. Subset the DEM to the bounds of the glacier
  3. Clip the DEM to the outline so all other values are nan
  4. Sample the DEM, and drop the values that are nan (effectively only sampling on the glacier)

This technically works, but requires that a clipped DEM is saved (for caching) each time the analysis is run. If we do this on all Svalbard's glaciers, we'd end up with about 1500 DEMs! There are some other issues:

  • The caching is slow because a DEM and subset IS2 data have to be written for each step.
  • Sampling is technically slower because each point within the bounds (but not necessarily within the outline) is sampled
  • Each glacier requires (at least) four steps (the ones enumerated above) for the dH calculation.

A much simpler approach is to just clip the IS2 points:

  1. Clip the IS2 data to the glacier outline. Perhaps it will be faster if a subsetting is done first, but those two can be done in one function
  2. Sample the DEM and remove bad values. There should be no bad values now, but you never know

The caching could be one of two approaches:

  • Save the clipped IS2 data for each glacier. Probably faster, but makes for thousands of "nc" files
  • Save the IS2 point indices that overlap for each glacier.

The latter might be a better option because xarray indexing on coordinates works really fast. So for each glacier, it'd just be a table (or a simple text file) of indices that should be used. Thus, there would be no data duplication and the cache would be much smaller.
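
A sketch of how the point clipping and index caching could look (untested; the function and variable names here are assumptions, and `outline` is assumed to be a single shapely polygon):

import geopandas as gpd
import numpy as np
from pathlib import Path

def clip_is2_to_outline(is2, outline, label):
    # Cache only the indices of points inside the outline, not the data itself
    cache_path = Path(f"cache/{label}-indices.txt")

    if cache_path.is_file():
        indices = np.loadtxt(cache_path, dtype=int)
    else:
        # Build point geometries from the easting/northing variables
        points = gpd.GeoSeries(
            gpd.points_from_xy(is2["easting"].values, is2["northing"].values),
            crs="EPSG:32633",
        )
        # Boolean mask of points that fall inside the glacier outline
        inside = points.within(outline)
        indices = np.flatnonzero(inside.values)
        np.savetxt(cache_path, indices, fmt="%d")

    # Select only the matching points along the "index" dimension
    return is2.isel(index=indices)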

Start considering the best way of detecting a surge

We are soon (or already) at the point where we can start working on ways to properly detect glacier surges in your data! The easiest would just be to threshold the elevation change rate, say >2 m/a is a surge. This would work for all big surges, but perhaps not the small ones.
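
As a concrete starting point, that threshold could be as simple as this (a rough, untested sketch; `hypso_by_year` and the years are placeholders for the per-year binned results):

threshold = 2.0  # m/a

# Elevation change rate per bin between two (placeholder) years
dhdt = (hypso_by_year.loc[2021, "value"] - hypso_by_year.loc[2019, "value"]) / (2021 - 2019)

# Flag the glacier as (potentially) surging if any bin thickens faster than the threshold
is_surging = (dhdt > threshold).any()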

The best approach might be to go in a supervised fashion, i.e. by collecting a surge/no-surge inventory that we can use to refine the classification. Kääb et al. (2023) made an inventory of large surges from 2017-2022 that includes Svalbard, which overlaps nicely with your timeframe. Note that it only covers the large ones, and I unfortunately don't know of any other inventory (apart from a personal one which only covers Heer Land) with the smaller ones. But perhaps we can use Kääb et al. (2023) for now! The supplementary is just a table, but I have an equivalent shapefile which will be easier to work with.

Add a `.gitignore` to ignore the cache

The cache should preferably not be in the repo, as it's by definition temporary state that should be reproducible. Also, while there are currently only very small VRTs in there, it could blow up like crazy in the future, so we should be wary of committing the cache. It's very easy to configure git to ignore the directory completely.

It's usually made through a .gitignore file:

cache/

It's a bit more convoluted when the cache is already in the history. First, you have to remove it from the repository, which is easiest to do by moving it out of the way and committing the removal. In a CLI, this would be:

>>> mv cache cache.backup  # Move the cache somewhere else. It needs to look like it doesn't exist anymore.
>>> git add cache/  # Contradictory to what it looks like, this REMOVES the cache (basically it adds the change, which is a removal)
>>> git add .gitignore  # This needs to be created and filled in
>>> git commit -m "Removed cache directory from history and added to .gitignore"
>>> mv cache.backup cache  # Restore the cache

I think there are convoluted git commands to do all of this, but the above commands are the most straightforward.

Split up the code into multiple modules

For now, having just one main.py works fine. In the long-term, however, this will become quite cumbersome and hard to navigate. When the time comes, I recommend splitting the code up into smaller modules that will increase readability.

For example:

  • svalbardsurges/
    • analysis.py For hypsometric binning, change detection, etc.
    • plotting.py For making figures
    • utilities.py For utility functions like generic downloading, cache handling, etc.
    • inputs A submodule for all the different inputs
      • dems.py
      • is2.py
      • outlines.py

Then main.py simply imports the svalbardsurges package and calls the necessary functions from within.
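
For example, main.py could end up as short as this (all module and function names below are placeholders, not existing code):

from svalbardsurges import analysis, plotting
from svalbardsurges.inputs import dems, is2, outlines

def main():
    outline = outlines.load_outline("some_glacier_name")  # placeholder
    dem = dems.load_dem(outline)                           # placeholder
    data = is2.load_is2(outline)                           # placeholder
    binned = analysis.hypsometric_binning(data)
    plotting.plot_hypsometric_bins(binned)

if __name__ == "__main__":
    main()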

See some projects I've worked on for inspiration:

Consider visualizing the dH time series as a heatmap

We need good ways to visualize dH over time, and there are many different ways of doing that. Since your data are hypsometrically binned, a time series is technically two-dimensional (one is time, one is elevation). A few publications visualize this as a heatmap, i.e. an image where one axis is time and another is distance/elevation.

Here's a not-too-great example from an ongoing project of mine. I binned elevation change in percentile bins (0-10%, 10-20%, etc.) and plotted dH/dt and d²H/dt² (acceleration) against time:
[figure: heatmap of hypsometrically binned dH/dt and d²H/dt² plotted against time for a surge-type glacier]

The glacier surged in 2016 but the data are too coarse to even see it! What we do see, however, is that the lower part of the glacier gets more and more negative over time, leading up to the surge, and the upper part stops having net positive accumulation (it's not blue any more). The acceleration part is a bit interesting because it shows a relatively constant acceleration for that entire period, suggesting that the events leading up to the surge ultimately were quite predictable for a long time.

I may have some more exciting examples, but what do you think of this way of visualizing it?

Sevestre et al. (2018) used this kind of visualization for velocities, which quite nicely shows some intricacies of the surges they study.
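
A minimal sketch of such a heatmap with matplotlib (untested; `years`, `bin_midpoints` and `dhdt_matrix` are assumed inputs built from the per-year hypsometric bins):

import matplotlib.pyplot as plt

# dhdt_matrix: 2D array of dH/dt with shape (n_elevation_bins, n_years)
fig, ax = plt.subplots()
mesh = ax.pcolormesh(years, bin_midpoints, dhdt_matrix, cmap="RdBu", vmin=-3, vmax=3, shading="nearest")
ax.set_xlabel("Year")
ax.set_ylabel("Elevation bin midpoint (m a.s.l.)")
fig.colorbar(mesh, label="dH/dt (m/a)")
plt.savefig("figures/dhdt_heatmap.png")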

Add dynamic DEM download function

In the long term (a few weeks), we should strive for making the code as repeatable as possible. What I mean by that is:

  1. It should be possible to run on new machines (if we want to run it on a server)
  2. The functionality should not be specific to any computer (it should work the same on any machine)
  3. It should be easy to switch glacier if we want to observe a new one.

An example of non-repeatability is the hardcoding of paths to data that we've downloaded manually.
I want to underscore that we've of course done this for brevity and making sure we could get a fast initial result, so this is just a natural evolution from that!

SvalbardSurges/main.py

Lines 172 to 173 in 2fa3e53

file_name = Path("S0_DTM5_2011_25163_33.tif")
dir_name = Path("C:/Users/eliss/SvalbardSurges/DEMs")

What would instead be better to do is:

  1. Assign the download URLs in the script or in an auxiliary file
  2. Download the file to the cache when required
  3. Have all the paths relate to the cache path, meaning that if 1 and 2 work, the script would work on any new machine.

Downloading data with Python is a bit of a mess. To my knowledge, there's no one-liner for good downloading. In one of my projects, I wrote this script:
https://github.com/erikmannerfelt/ADSvalbard/blob/091b231c3eea25239ec40ad6c58155ff51826b19/adsvalbard/utilities.py#L105-L131

A modification of this may work directly in here:

# toplevel
import tempfile
import shutil
import os
from pathlib import Path
import requests  # Might need to be installed.
# / toplevel

def download_large_file(url: str, filename: str | None = None, directory: Path | str | None = None) -> Path:
    """
    Download a file from the requested URL.

    Parameters
    ----------
    - url:
        The URL to download the file from.
    - filename:
        The output filename of the file. Defaults to the basename of the URL.
    - directory:
        The directory to save the file in. Defaults to `cache/`
    
    Returns
    -------
    A path to the downloaded file.
    """

    # If `directory` is defined, make sure it's a path. If it's not defined, default to `cache/`
    if isinstance(directory, (str, Path)):
        out_dir = Path(directory)
    else:
        out_dir = Path("cache/")

    if filename is not None:
        out_path = out_dir.joinpath(filename)
    else:
        out_path = out_dir.joinpath(os.path.basename(url))

    # If the file already exists, skip downloading it.
    if not out_path.is_file():
        # Open a data stream from the URL. This means not everything has to be kept in memory.
        with requests.get(url, stream=True) as request:
            # Stop and raise an exception if there's a problem.
            request.raise_for_status()

            # Save the file to a temporary directory first. The file is constantly appended to due to the streaming
            # Therefore, if the stream is cancelled, the file is not complete and should be dropped.
            with tempfile.TemporaryDirectory() as temp_dir:

                temp_path = Path(temp_dir).joinpath("temp.file")
                # Write the data stream as it's received
                with open(temp_path, "wb") as outfile:
                    shutil.copyfileobj(request.raw, outfile)

                # When the download is done, move the file to its ultimate location.
                shutil.move(temp_path, out_path)

    return out_path

Note that I'm using type annotations for the arguments and return types. This is optional but really helps in documenting the function. For Python versions before 3.10 (I think), the first line in the script has to be this to support the type annotations:

from __future__ import annotations

But since we specified the environment.yml to have a minimum supported python version of 3.10 (like my foreshadowing? 🤓), this is not needed. I just wanted to add this in case we run into issues with older python versions!

Consider the advantages/disadvantages of the different IS2 processing levels

There are three relevant processing levels of IS2 as I understand it:

  • ATL03: Photon data. A huge dataset but with the least processing that may otherwise skew our results
  • ATL06: Gridded photon data. A much smaller dataset, but with some processing that we may not want
  • ATL08: Interpolated and heavily filtered data. The smallest dataset, with lots of processing that we may not want.

The advantage of ATL08 (the one we currently use) is the small size, and the fact that we don't need a statistically robust approach to filter it, since filtering has already been done. I've seen some strange effects with it, however, such as mountains appearing too soon in the data (it looks like they're much wider than they really are), so this might bite us in the long term.

ATL03 would not have these issues, but would require much, much more processing power for each glacier. If we can avoid this without issues, that'd be great. The problem right now, however, is that we don't know how different they are and which is more advantageous.

Maybe we can do a small case-study with all three on a few reference glaciers? For speed, we could ask Desiree or Marco to help us with the downloading and merging of different strips if that would take a long time.
