marbl-ecosys / feisty
Python implementation for the Fisheries Size and Functional Type model (FEISTY)
Home Page: https://marbl-ecosys.github.io/feisty
License: MIT License
In compute_encounter(), we don't want to rely strictly on link.is_demersal to determine when t_frac_prey_pred = 1.0 - t_frac_pelagic_pred, because Sd are larval and pelagic. One solution is to introduce link.is_small and then only modify t_frac_prey_pred when link.is_demersal and not link.is_small.
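A sketch of what that condition might look like (the attribute names follow the issue text; the helper itself is illustrative, not the actual compute_encounter() internals):
def prey_time_fraction(link, t_frac_pelagic_pred):
    """Illustrative helper: fraction of time a predator overlaps this prey."""
    if link.is_demersal and not link.is_small:
        # only "true" demersal prey (not the larval, pelagic Sd class) use
        # the complementary, non-pelagic time fraction
        return 1.0 - t_frac_pelagic_pred
    return t_frac_pelagic_pred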
Possible low hanging fruit: dask in driver
Instead of "small", "medium", and "large" (which gets confusing when talking about "Large Pelagic" vs "Foragers"), should we use "larval", "juvenile", and "adult"? Open to other suggestions as well, but it would be really nice to drop the "small Large Pelagic" and "large Large Pelagic" terminology.
The build-docs test failed on #25 despite passing earlier, with no changes that should have affected this particular test. The error message points to missing black in the environment:
Exception occurred:
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/pkg_resources/__init__.py", line 777, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'black' distribution was not found and is required by the application
The full traceback has been saved in /tmp/sphinx-err-5ghsjkqs.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
Looking at the conda environment being set up, I don't really understand why it was omitted... here are the differences from the conda output (< is output from the passing test, > is from the failing test linked previously):
< + appdirs 1.4.4 pyh9f0ad1d_0 conda-forge/noarch 13kB
< + black 21.5b2 pyhd8ed1ab_0 conda-forge/noarch 111kB
< + filelock 3.5.0 pyhd8ed1ab_0 conda-forge/noarch 12kB
---
> + filelock 3.5.1 pyhd8ed1ab_0 conda-forge/noarch 12kB
< + ipykernel 6.9.0 py38he5a9106_0 conda-forge/linux-64 186kB
< + ipython 8.0.1 py38h578d9bd_0 conda-forge/linux-64 1MB
---
> + ipykernel 6.9.1 py38he5a9106_0 conda-forge/linux-64 186kB
> + ipython 8.0.1 py38h578d9bd_1 conda-forge/linux-64 1MB
< + jupyter_core 4.9.1 py38h578d9bd_1 conda-forge/linux-64 82kB
---
> + jupyter_core 4.9.2 py38h578d9bd_0 conda-forge/linux-64 82kB
< + mypy_extensions 0.4.3 py38h578d9bd_4 conda-forge/linux-64 11kB
< + pathspec 0.9.0 pyhd8ed1ab_0 conda-forge/noarch 31kB
< + regex 2022.1.18 py38h497a2fe_0 conda-forge/linux-64 383kB
< + typed-ast 1.5.2 py38h497a2fe_0 conda-forge/linux-64 218kB
So filelock, ipykernel, and jupyter_core all incremented the patch version number and there's a change in the hash of ipython despite remaining at 8.0.1; but now appdirs, black, mypy_extensions, pathspec, regex, and typed-ast are all missing.
I'll rerun the test and see if the problem is reproducible. If it is, we may just need to add some (or all) of the above packages to ci/environment.yml.
Appendix A: Full traceback
Traceback (most recent call last):
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/jupyter_book/sphinx.py", line 167, in build_sphinx
app.build(force_all, filenames)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/application.py", line 352, in build
self.builder.build_update()
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 296, in build_update
self.build(to_build,
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 360, in build
self.write(docnames, list(updated_docnames), method)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 534, in write
self._write_serial(sorted(docnames))
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/builders/__init__.py", line 544, in _write_serial
self.write_doc(docname, doctree)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/builders/html/__init__.py", line 605, in write_doc
self.docwriter.write(doctree, destination)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/docutils/writers/__init__.py", line 78, in write
self.translate()
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/writers/html.py", line 71, in translate
self.document.walkabout(visitor)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/docutils/nodes.py", line 214, in walkabout
if child.walkabout(visitor):
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/docutils/nodes.py", line 214, in walkabout
if child.walkabout(visitor):
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/docutils/nodes.py", line 214, in walkabout
if child.walkabout(visitor):
[Previous line repeated 1 more time]
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/docutils/nodes.py", line 206, in walkabout
visitor.dispatch_visit(self)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/util/docutils.py", line 468, in dispatch_visit
method(node)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/writers/html5.py", line 405, in visit_literal_block
highlighted = self.highlighter.highlight_block(
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/highlighting.py", line 133, in highlight_block
lexer = self.get_lexer(source, lang, opts, force, location)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/sphinx/highlighting.py", line 117, in get_lexer
lexer = get_lexer_by_name(lang, **opts)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/pygments/lexers/__init__.py", line 115, in get_lexer_by_name
for cls in find_plugin_lexers():
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/pygments/plugin.py", line 54, in find_plugin_lexers
yield entrypoint.load()
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2464, in load
self.require(*args, **kwargs)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2487, in require
items = working_set.resolve(reqs, env, installer, extras=self.extras)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/pkg_resources/__init__.py", line 777, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'black' distribution was not found and is required by the application
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/share/miniconda/envs/dev-feisty/bin/jupyter-book", line 10, in <module>
sys.exit(main())
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/jupyter_book/cli/main.py", line 323, in build
builder_specific_actions(
File "/usr/share/miniconda/envs/dev-feisty/lib/python3.8/site-packages/jupyter_book/cli/main.py", line 535, in builder_specific_actions
raise RuntimeError(_message_box(msg, color="red", doprint=False)) from result
RuntimeError:
===============================================================================
There was an error in building your book. Look above for the cause.
===============================================================================
@rmshkv and @kristenkrumhardt reported issues with [Run Multiple Years (highres).ipynb](https://github.com/marbl-ecosys/feisty/blob/main/examples/Run%20Multiple%20Years%20(highres).ipynb) - running multiple years (1980 - 1985ish) did not show any interannual variability. Looking closer, it also appeared that the forcing provided was perpetually in northern hemisphere winter (arctic biomass plummeted while antarctic biomass flourished).
Digging deeper, I realized the problem was in
ds_out = xr.map_blocks(
    config_and_run_from_dataset,
    ds,
    args=(
        ds_ic,
        len(template['time']),
        start_date,
        ignore_year_in_forcing,
        settings_in,
        diagnostic_names,
        max_output_time_dim,
        method,
    ),
    template=template,
).persist()
wait(ds_out)
Specifically, ds was created via xr.open_mfdataset() and, when multiple time series files were read, ds was chunked in time -- xr.map_blocks then only passed a single time chunk (it was consistently passing the 2021 year). When the model year predates the forcing data, we just use the first time step of the forcing data set. This led directly to the behavior noted in the first paragraph; rather than forcing with the correct year, we were repeating Jan 1, 2021 forcing.
A fix is to replace the xr.open_mfdataset() with multiple xr.open_dataset() calls followed by xr.merge(); I could not figure out the correct argument to have open_mfdataset() combine time from multiple files into a single chunk, and rechunking after the open_mfdataset() is far more computationally expensive than manually merging.
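A minimal sketch of that workaround (file names are illustrative; the real driver builds the list from its configuration):
import xarray as xr

# hypothetical: open each forcing file individually (no dask chunking along
# time), then merge so `time` is presented as one contiguous dimension
forcing_files = ['forcing.1980.nc', 'forcing.1981.nc']  # illustrative names
ds = xr.merge([xr.open_dataset(path) for path in forcing_files])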
As coded, the "to" field is indexed in fish space, but the "from" field is only indexed in group space. We need the group index for the "from" field because we need biomass, but we also need the fish index for growth_rate and reproduction_rate.
We want lcarrying_capacity to be True when carrying_capacity is not 0.
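i.e., something like (a minimal sketch with illustrative values):
import numpy as np

carrying_capacity = np.array([0.0, 2.5, 0.0, 1.0])  # illustrative values
lcarrying_capacity = carrying_capacity != 0.0        # -> [False, True, False, True]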
The way reproduction_routing is defined, you can only loop through it in a for loop once; a second for loop would start at the end of the index space and therefore never enter the loop. The nuances are still a little unclear to me, but basically the loop in compute_recruitment would only be processed on the first time step. I found a similar problem on Stack Overflow, and the solution provided lets us call compute_recruitment() as many times as we'd like.
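The underlying issue is the classic generator-exhaustion pattern: an object whose iterator is created once can only be consumed once. A minimal sketch of the usual fix (class and attribute names are illustrative, not the actual reproduction_routing implementation):
class ReproductionRouting:
    """Illustrative container that can be iterated any number of times."""

    def __init__(self, links):
        self._links = list(links)  # materialize the links once

    def __iter__(self):
        # return a fresh iterator on every call, so each `for` loop in
        # compute_recruitment() starts from the beginning again
        return iter(self._links)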
E.g., currently the API for simulation.run requires the number of timesteps; we might want to make this more human-friendly.
The end goal of this update would be to provide forcing in a manner similar to stream files: for each stream, the user provides a list of files covering the appropriate time period, and a list of variables that appear in each file. E.g.
For each stream, FEISTY would loop through the list of files and open them sequentially, stopping when it has found the file(s) that contain dates bracketing the model start date. Once the model has advanced beyond the last date in the stream file, the next file would be opened to allow continued interpolation.
One possibility might be to require forcing_ds.isel(forcing_time=0).forcing_time >= model_time, and basically setting forcing_ds = forcing_ds.isel(forcing_time=slice(1, None)) when that condition is not met. Another requirement would be the concatenation of the next forcing file if len(forcing_ds.forcing_time) == 1, so that forcing_ds.isel(forcing_time=0).forcing_time <= model_time < forcing_ds.isel(forcing_time=1).forcing_time.
So initialization might be
import xarray as xr

def init_forcing(stream_file_list, model_start_date):
    forcing_dses = []
    for file in stream_file_list:
        forcing_dses.append(xr.open_dataset(file))
        if forcing_dses[-1].forcing_time[-1] < model_start_date:
            # this file ends before the run starts; keep only its last time
            # level so it can bracket model_start_date from below
            forcing_dses = [forcing_dses[-1].isel(forcing_time=[-1])]
        else:
            return xr.merge(forcing_dses)
If start_date is earlier than any forcing date, then this will return the contents of the first file and we'll need to make sure the forcing interpolation does not extrapolate from the first two time levels. If start_date is later than every forcing field, then this function doesn't return anything... that edge case should obviously be addressed :) There is an additional step needed to enforce forcing_ds.isel(forcing_time=0).forcing_time >= model_time, but I think that should go in its own function because we will need to call it throughout the run.
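A sketch of what that helper might look like, aiming for the bracketing condition described above, forcing_time[0] <= model_time < forcing_time[1] (the function name, the remaining_files handling, and the exact comparisons are all illustrative assumptions):
import xarray as xr

def advance_forcing(forcing_ds, model_time, remaining_files):
    """Illustrative: drop stale time levels and append the next stream file."""
    # drop leading time levels the model has already moved past
    while len(forcing_ds.forcing_time) > 1 and forcing_ds.forcing_time[1] <= model_time:
        forcing_ds = forcing_ds.isel(forcing_time=slice(1, None))
    # if only one time level remains, bring in the next file so interpolation
    # can continue past the end of the current stream file
    if len(forcing_ds.forcing_time) == 1 and remaining_files:
        forcing_ds = xr.merge([forcing_ds, xr.open_dataset(remaining_files.pop(0))])
    return forcing_ds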
I'm not entirely clear how this will play with the idea of cyclic forcing. The example I keep coming up with is repeating forcing from year 1960 of monthly data. This will require providing a forcing dataset with 14 time levels: Dec 1959, all 12 months of 1960, and January of 1961. (And the edge case of "what if these years are spread across two or three files?") I'll focus on the non-cyclic forcing first, and then come back to this function.
In the 1 degree companion run, there are some grid cells where biomass for some functional types drops uncontrollably:
In the 0.1 degree run, there is uncontrolled growth in some functional types while others go negative:
I think we need to introduce a floor for biomass, at which point we do not let it drop any further. It might be the case that preventing negative values will be sufficient to keep other groups from growing unbounded; otherwise, will we need a cap?
For completeness, I'll also mention that we should investigate the forcing datasets at these points; it might be the case that the growth / loss is a natural response to the forcing we are providing.
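A minimal sketch of the floor idea above (the function name and the floor value are illustrative, not proposed defaults):
import numpy as np

BIOMASS_FLOOR = 1.0e-6  # g m-2; illustrative value, not a proposed default

def apply_biomass_floor(biomass, floor=BIOMASS_FLOOR):
    """Clamp biomass so it never drops below a small positive floor."""
    return np.maximum(biomass, floor)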
Currently, ecosystem.py and domain.py use module variables and are therefore not threadsafe.
Starting point for identifying tests that require improvement: pytest -v -m weak, as I have labeled obviously deficient tests with a "weak" marker.
The forcing uses np.arange(0.0, nt + 1.0, 1.0) for time but the driver uses np.arange(1.0, nt + 1.0, 1.0). We actually want both to be np.arange(0.0, nt, 1.0) in order to compute the forcing correctly and to read the correct forcing at each time step.
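i.e., with an illustrative nt, both arrays would become:
import numpy as np

nt = 365  # illustrative number of time steps
forcing_time = np.arange(0.0, nt, 1.0)  # 0, 1, ..., nt-1
driver_time = np.arange(0.0, nt, 1.0)   # matches the forcing time axis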
In compute_consumption(), enc should be encounter_rate, not consumption_rate.
Rather than assuming the run (and forcing) starts from time=0 with units of days, we should use something like cftime.
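For example, a calendar-aware forcing time axis could be built with cftime (the units and calendar here are illustrative):
import numpy as np
import cftime

# illustrative: replace a bare "days since start" float axis with real dates
forcing_time = cftime.num2date(
    np.arange(0.0, 365.0),
    units='days since 1958-01-01',
    calendar='noleap',
)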
As a minimum, we should accept and return 2D arrays for forcing variables.
We could consider modifying MARBL output streams to write out the needed variables to run FEISTY offline.
Here's an example notebook showing post-processing of CESM1 output for FEISTY:
https://github.com/matt-long/fish-offline/blob/main/notebooks/proc-cesm-dple-fields.ipynb
It would be nice if one could tell how FEISTY was configured just from looking at the output zarr file.
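One possible approach is to embed the resolved settings in the store's metadata before writing; a sketch (assuming the settings are YAML-serializable; the attribute name is illustrative):
import xarray as xr
import yaml

def write_output_with_settings(ds_out: xr.Dataset, settings_dict: dict, store: str):
    """Illustrative: make the zarr store self-describing."""
    ds_out = ds_out.copy()
    ds_out.attrs['feisty_settings'] = yaml.safe_dump(settings_dict)
    ds_out.to_zarr(store, mode='w')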
ci/environment.yml is using
- git+https://github.com/dask/dask-mpi.git
- '--editable=..'
in the pip: block because there are some bugfixes that came after the 2022.4.0 release that we need. Once the next release comes out, we should just let conda install dask-mpi from conda-forge instead of having pip install it from GitHub.
@RemyDenechere ran into the following error when running Multiple Forcing Files.ipynb with a new install of dev-feisty:
ValueError: cannot re-index or align objects with conflicting indexes found for the following coordinates: 'X' (2 conflicting indexes)
Conflicting indexes may occur when
- they relate to different sets of coordinate and/or dimension names
- they don't have the same type
- they may be used to reindex data along common dimensions
(This was coming from the feisty.config_and_run_from_yaml(feisty_config) call in cell [3].) Rolling xarray back from 2022.11.0 to 2022.3.0 (and also using python=3.8 instead of 3.10, though I don't think that was a necessary change) allowed the cell to work.
I'd like to modify the config_and_run_from_yaml() routine to continue to work with the latest xarray, but I haven't dug into the error message much yet so I don't know what this will involve. For now, we're just using the older version of xarray.
The code implicitly assumes a timestep of 1 day. We should change the time units to seconds and make the timestep a settable parameter in the driver.
The default settings file (feisty/core/default_settings.yml) has blocks like
food_web:
  # [snip some intervening keys]
  # large demersal
  - predator: Ld
    prey: Mf
    preference: 0.375 #should be A*D = (school_avoid * generalist_reduct)
  - predator: Ld
    prey: Mp
    preference: 0.75 #should be D (generalist_reduct)
As the comments indicate, the 0.375 is really a generalist_reduct term times a school_avoid term, while the 0.75 is just the generalist_reduct term. We'd like to introduce a way to set those two parameters elsewhere in the file, and then have these food_web.preference terms change if the user changes the other parameters.
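A sketch of how the derived preferences might be resolved when the settings are loaded (school_avoid and generalist_reduct come from the comments above; everything else, including where the parameters live, is illustrative):
def resolve_preferences(settings):
    """Illustrative: compute selected food_web preferences from named parameters."""
    A = settings['school_avoid']       # e.g., 0.5
    D = settings['generalist_reduct']  # e.g., 0.75
    derived = {
        ('Ld', 'Mf'): A * D,  # 0.375 with the values above
        ('Ld', 'Mp'): D,      # 0.75
    }
    for link in settings['food_web']:
        key = (link['predator'], link['prey'])
        if key in derived:
            link['preference'] = derived[key]
    return settings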
Currently, benthic prey are updated (i.e., forward Euler integration) at the beginning of the feisty_instance.compute_tendency routine. We should timestep benthic_prey in concert with everything else.
When running FEISTY_driver, I get the following message in the log file:
Starting compute at 10:54:30
2022-09-25 10:54:30,721 - distributed.scheduler - INFO - Receive client connection: Client-b929bd22-3cf2-11ed-82f1-3cecef1a5636
2022-09-25 10:54:30,721 - distributed.core - INFO - Starting established connection
/glade/work/mclong/miniconda3/envs/dev-feisty/lib/python3.9/site-packages/distributed/worker.py:2936: UserWarning: Large object of size 2.29 MiB detected in task graph:
([('X',), <xarray.IndexVariable 'X' (X: 300129)>
a ... =object), {}],)
Consider scattering large objects ahead of time
with client.scatter to reduce scheduler burden and
keep data on workers
future = client.submit(func, big_data) # bad
big_future = client.scatter(big_data) # good
future = client.submit(func, big_future) # good
warnings.warn(
Starting run() at 10:54:42
Starting run() at 10:54:42
Starting run() at 10:54:42
Starting run() at 10:54:42
Starting run() at 10:54:42
Starting run() at 10:54:42
...
The relevant block of code is here:
print(f'Starting compute at {time.strftime("%H:%M:%S")}')
with Client() as c:
    # map_blocks lets us run in parallel over our dask cluster
    ds_out = xr.map_blocks(
        feisty.config_and_run_from_dataset,
        ds,
        args=(
            run_settings.nsteps,
            run_settings.start_date,
            run_settings.ignore_year_in_forcing,
            run_settings.settings_in,
            run_settings.diagnostic_names,
            run_settings.max_output_time_dim,
            run_settings.method,
        ),
        template=template,
    ).compute()
config_and_run_from_yaml() is hard-wired to assume that biomass is the only field that should be written to the output file. Changing
- diagnostic_names = []
+ diagnostic_names = input_dict.get('diagnostic_names', [])
in feisty/offline_driver.py (lines 672 to 678 at commit f5666fa) should do the trick.
CI has been failing since June 14th, and it's only the python 3.7 matrix. Sample error:
ImportError while importing test module '/home/runner/work/feisty/feisty/tests/test_core.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../micromamba-root/envs/dev-feisty/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_core.py:4: in <module>
import feisty
feisty/__init__.py:10: in <module>
from . import offline_driver as offline_driver_mod, testcase
feisty/offline_driver.py:7: in <module>
import xarray as xr
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/xarray/__init__.py:1: in <module>
from . import testing, tutorial, ufuncs
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/xarray/tutorial.py:13: in <module>
from .backends.api import open_dataset as _open_dataset
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/xarray/backends/__init__.py:17: in <module>
from .zarr import ZarrStore
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/xarray/backends/zarr.py:22: in <module>
import zarr
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/zarr/__init__.py:3: in <module>
from zarr.convenience import (consolidate_metadata, copy, copy_all, copy_store,
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/zarr/convenience.py:8: in <module>
from zarr._storage.store import data_root, meta_root, assert_zarr_v3_api_available
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/zarr/_storage/store.py:11: in <module>
from zarr.context import Context
../../../micromamba-root/envs/dev-feisty/lib/python3.7/site-packages/zarr/context.py:2: in <module>
from typing import TypedDict
E ImportError: cannot import name 'TypedDict' from 'typing' (/home/runner/micromamba-root/envs/dev-feisty/lib/python3.7/typing.py)
Easy "fix" would be to just drop support for 3.7 (typing.TypedDict was only added to the standard library in Python 3.8, so a newer zarr release that imports it will always fail on 3.7), though I'd like to understand what part of our environment got updated and broken in mid-June...
Ideally, there would be a flag to indicate whether the forcing is cyclic or not - one test @cpetrik has for the matlab code runs for 20 years but the forcing is just 1 year of data that repeats.
@cpetrik found a handful of parameters that didn't match the matlab code and has helpfully provided an updated yaml file for us.
We need to decide on a framework to test the Python implementation against the original Matlab. One option would be to run something from the testcase forcing.
cc @cpetrik
When we read POP output on the native POP grid, we use .stack() to perform a dimension reduction to convert (nlat, nlon) -> X. The resulting X coordinate is a MultiIndex object, which can not be written to netCDF. I wonder if we just need a way to track whether X is a stacked dimension, and, if so, unstack it after config_and_run_from_dataset() returns? I can't think of a situation where we would want to read in 2D inputs and then keep the output in 1D.
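A sketch of that pattern (the helper name and the MultiIndex check are illustrative):
import pandas as pd
import xarray as xr

def maybe_unstack(ds_out: xr.Dataset, dim: str = 'X') -> xr.Dataset:
    """Illustrative: restore (nlat, nlon) if `dim` came from .stack()."""
    if isinstance(ds_out.indexes.get(dim), pd.MultiIndex):
        ds_out = ds_out.unstack(dim)  # MultiIndex -> (nlat, nlon)
    return ds_out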
explore broader set of forcing scenarios, ICs, etc. to more fully exercise options and verify against Matlab solutions.
@rmshkv was trying to understand the units we use for mass, and digging through the code it's unclear. She was asking about fish yield units, and I tried (unsuccessfully) to trace it through the code. I did notice that, on the forcing side, the zooplankton biomass is called zooC but it's actually the zooplankton wet weight per area (in grams per square meter):
# Conversion from Colleen:
# 1e9 nmol in 1 mol C
# 1e4 cm2 in 1 m2
# 12.01 g C in 1 mol C
# 1 g dry W in 9 g wet W (Pauly & Christiansen)
nmol_cm2_TO_g_m2 = 1e-9 * 1e4 * 12.01 * 9.0
with
forcing_ds['zooC'].data = forcing_ds['zooC'].data * nmol_cm2_TO_g_m2
[Also, why are we using grams and meters rather than grams and centimeters (cgs) or kilograms and meters (mks)?]
The functions in process.py use Xarray, but we might want to refactor to remove this dependency for performance reasons.
At one point I thought we didn't want any classes in pelagic_demersal_coupling_apply_pref_type_keys in order to match the matlab configuration (that turned out to be incorrect), but the code didn't run properly if the yaml file contained an empty list.
We want to spin up the CESM-forced runs by repeating a single year of forcing with day_offset = -0.5 (because POP provides the date as the end of the averaging interval instead of the middle), but if I don't set day_offset = -1 then biomass is NaN globally / for every functional type after the first time step.
I think something weird is happening when we try to prepend the last day in the forcing dataset to the beginning (changing the date to Dec 31, 0000), but I haven't dug deep enough to know exactly what is going wrong.
The POP forcing data is all single precision, but we are casting biomass as a float64; a float32 would be a factor of 2 savings in memory, probably compute faster, and not really lose any precision in the final output.
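For example (the array shape is illustrative; the point is just the dtype of the allocation):
import numpy as np

n_groups, n_points = 9, 85813  # illustrative sizes
biomass = np.zeros((n_groups, n_points), dtype=np.float32)  # instead of float64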
Currently FEISTY is just dumping output to netcdf at the end of every time step; monthly averages should be sufficient and will reduce output size by quite a bit.
It may be the case that we have global POP output for forcing, but are only interested in FEISTY results from a specific region. An option to reduce the spatial grid before running could save a lot of time / computer resources.
We should remove
- predator: Lp
prey: Md
preference: 1.
from the food web section of the settings file. (@cpetrik does this sound right to you? Our matlab comparisons were successful, so I think our food web matches yours.)
There are some assert statements in other parts of the code that force some linking types to have unique indices; that should be matched in the reproduction_routing type.
After we accept #30, a good clean up step might be to use a single memory buffer for model output, write to disk after each time_levs_per_ds, and reset that buffer.
I managed to create a notebook that, during pre-commit, would fail the black-jupyter check, and then the modified version would fail prenotebook (resulting in a return to the original state). I think it was a line that was being broken up because it was 90 characters long, but then prenotebook thought it could all fit on a single line. I tried modifying the prenotebook settings to limit line length, but it must count characters differently than black-jupyter because I couldn't find a value for --line-width that made both packages happy.
I think we should remove prenotebook and only rely on black-jupyter.