
tineuvox's Introduction

TiNeuVox: Time-Aware Neural Voxels

ACM SIGGRAPH Asia 2022

Fast Dynamic Radiance Fields with Time-Aware Neural Voxels
Jiemin Fang1,2*, Taoran Yi2*, Xinggang Wang✉2, Lingxi Xie3,
Xiaopeng Zhang3, Wenyu Liu2, Matthias Nießner4, Qi Tian3
1Institute of AI, HUST   2School of EIC, HUST   3Huawei Cloud   4TUM


Our method converges very quickly. This is a comparison between D-NeRF (left) and our method (right).

We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox. A tiny coordinate deformation network is introduced to model coarse motion trajectories, and temporal information is further enhanced in the radiance network. A multi-distance interpolation method is proposed and applied to voxel features to model both small and large motions. Our framework significantly accelerates the optimization of dynamic radiance fields while maintaining high rendering quality. Empirical evaluation is performed on both synthetic and real scenes. Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
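As a rough illustration of the multi-distance interpolation idea, here is a minimal sketch that samples a dense voxel feature grid at several coarseness levels and concatenates the results. The pooling-based coarsening and all names are illustrative assumptions, not the repository's exact implementation.

    # Illustrative sketch only, assuming a dense (1, C, D, H, W) feature grid.
    import torch
    import torch.nn.functional as F

    def multi_distance_features(voxel_grid, pts, scales=(1, 2, 4)):
        # voxel_grid: (1, C, D, H, W) features; pts: (N, 3) normalized to [-1, 1].
        grid = pts.view(1, -1, 1, 1, 3)  # grid_sample expects (1, N, 1, 1, 3)
        feats = []
        for s in scales:
            g = voxel_grid if s == 1 else F.avg_pool3d(voxel_grid, s)  # coarser copy
            sampled = F.grid_sample(g, grid, align_corners=True)       # (1, C, N, 1, 1)
            feats.append(sampled.view(g.shape[1], -1).t())             # (N, C)
        return torch.cat(feats, dim=-1)  # (N, C * len(scales)): small- and large-motion cues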

Notes

  • May 31, 2022: The first, preliminary version is released. The code may not be thoroughly cleaned, so feel free to open an issue if you have any questions.

Requirements

  • lpips
  • mmcv
  • imageio
  • imageio-ffmpeg
  • opencv-python
  • pytorch_msssim
  • torch
  • torch_scatter

Data Preparation

For synthetic scenes:
The dataset provided in D-NeRF is used. You can download the dataset from Dropbox, then organize it as follows.

├── data_dnerf 
│   ├── mutant
│   ├── standup 
│   ├── ...

For real dynamic scenes:
The dataset provided in HyperNeRF is used. You can download scenes from the HyperNeRF dataset and organize them following the Nerfies format.

Training

For training synthetic scenes such as standup, run

python run.py --config configs/nerf-small/standup.py

Use the nerf-small configs for TiNeuVox-S and the nerf-base configs for TiNeuVox-B (passing a literal glob such as configs/nerf-*/standup.py may be expanded by the shell into multiple paths and fail). Add --render_video to render a video.

For training real scenes such as vrig_chicken, run

python run.py --config configs/vrig_dataset/chicken.py  

Evaluation

Run the following commands to evaluate the model.

For synthetic ones:

python run.py --config configs/nerf-small/standup.py --render_test --render_only --eval_psnr --eval_lpips_vgg --eval_ssim 

For real ones:

python run.py --config configs/vrig_dataset/chicken.py --render_test --render_only --eval_psnr

To compare fairly with the values reported in D-NeRF, metric.py is provided to evaluate the rendered images directly on uint8 values.
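For reference, a minimal sketch of this kind of uint8 evaluation (not necessarily identical to metric.py; the file paths are placeholders):

    # Sketch: PSNR on images quantized to uint8, mirroring the described protocol.
    import numpy as np
    import imageio.v2 as imageio

    def psnr_uint8(pred_path, gt_path):
        pred = imageio.imread(pred_path).astype(np.float64) / 255.0  # uint8 -> [0, 1]
        gt = imageio.imread(gt_path).astype(np.float64) / 255.0
        mse = np.mean((pred - gt) ** 2)
        return -10.0 * np.log10(mse)  # PSNR in dB (infinite if the images are identical)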

Main Results

Please see our video for more rendered results.

Synthetic Scenes

| Method | w/ Time Enc. | w/ Explicit Rep. | Time | Storage | PSNR | SSIM | LPIPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| NeRF | ✗ | ✗ | ∼ hours | 5 MB | 19.00 | 0.87 | 0.18 |
| DirectVoxGO | ✗ | ✓ | 5 mins | 205 MB | 18.61 | 0.85 | 0.17 |
| Plenoxels | ✗ | ✓ | 6 mins | 717 MB | 20.24 | 0.87 | 0.16 |
| T-NeRF | ✓ | ✗ | ∼ hours | – | 29.51 | 0.95 | 0.08 |
| D-NeRF | ✓ | ✗ | 20 hours | 4 MB | 30.50 | 0.95 | 0.07 |
| TiNeuVox-S (ours) | ✓ | ✓ | 8 mins | 8 MB | 30.75 | 0.96 | 0.07 |
| TiNeuVox-B (ours) | ✓ | ✓ | 28 mins | 48 MB | 32.67 | 0.97 | 0.04 |

Real Dynamic Scenes

| Method | Time | PSNR | MS-SSIM |
| --- | --- | --- | --- |
| NeRF | ∼ hours | 20.1 | 0.745 |
| NV | ∼ hours | 16.9 | 0.571 |
| NSFF | ∼ hours | 26.3 | 0.916 |
| Nerfies | ∼ hours | 22.2 | 0.803 |
| HyperNeRF | 32 hours | 22.4 | 0.814 |
| TiNeuVox-S (ours) | 10 mins | 23.4 | 0.813 |
| TiNeuVox-B (ours) | 30 mins | 24.3 | 0.837 |

Acknowledgements

This repository is partially based on DirectVoxGO and D-NeRF. Thanks for their awesome work.

Citation

If you find this repository/work helpful in your research, please consider citing the paper and giving it a ⭐.

@inproceedings{TiNeuVox,
  author = {Fang, Jiemin and Yi, Taoran and Wang, Xinggang and Xie, Lingxi and Zhang, Xiaopeng and Liu, Wenyu and Nie\ss{}ner, Matthias and Tian, Qi},
  title = {Fast Dynamic Radiance Fields with Time-Aware Neural Voxels},
  year = {2022},
  booktitle = {SIGGRAPH Asia 2022 Conference Papers}
}

tineuvox's People

Contributors

jaminfong, taoranyi

tineuvox's Issues

About creation of dynamic scene data

Hi,

Thanks for releasing the code.

I have two questions about creating data for dynamic scenes.

  1. Is the data supposed to be created using Nerfies?

  2. If we create the data with Nerfies, how do we then use it?

I would very much appreciate your help.

The latest version of mmcv has no 'Config'

Hello, I noticed that you use mmcv.Config in run.py. However, in the latest version of mmcv this attribute has been removed, which may cause issues; see open-mmlab/mmdeploy#1781 for details.

So for mmcv 2.0.0 and later, it may be necessary to use mmengine instead of mmcv:

# in run.py
...
# import mmcv
import mmengine
...
    # cfg = mmcv.Config.fromfile(args.config)
    cfg = mmengine.Config.fromfile(args.config)
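A backward-compatible variant (an untested sketch) that works with both old and new mmcv installations:

    # Prefer the old mmcv.Config when present, otherwise fall back to mmengine
    # (where Config moved for mmcv >= 2.0).
    try:
        from mmcv import Config          # mmcv < 2.0
    except ImportError:
        from mmengine import Config      # mmcv >= 2.0

    cfg = Config.fromfile(args.config)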

A question about alpha value

In tineuvox.py at line 324, the alpha is calculated as alpha = nn.Softplus()(density_result + self.act_shift).
But it seems that the alpha should be alpha = 1 - exp(-density_result * dist).
I am confused by this: is it a kind of approximation?
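For reference, one plausible reading (a sketch, not the repository's exact code): the Softplus line only produces the activated density sigma, and the sampling interval presumably enters in a later compositing step, so the standard alpha is still recovered:

    # alpha = 1 - exp(-softplus(raw + shift) * interval)
    #       = 1 - (1 + exp(raw + shift)) ** (-interval)   # DVGO-style closed form
    import torch
    import torch.nn.functional as F

    def alpha_from_raw(raw, act_shift, interval):
        sigma = F.softplus(raw + act_shift)        # non-negative density
        return 1.0 - torch.exp(-sigma * interval)  # standard volume-rendering alpha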

About the alpha computation in the code

alpha = nn.Softplus()(density_result + self.act_shift)

Someone asked about this before, but the spacing between sample points is not constant as that explanation claimed; it changes as the resolution grows, following stepdist = stepsize * self.voxel_size. stepsize, which is 0.5, is constant, but stepdist is not, and the difference between two points = viewdir * stepdist.
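Illustrative numbers only, following the names in this issue (not the repository's exact code):

    voxel_size = 0.02                  # shrinks as the grid resolution grows
    stepsize = 0.5                     # constant ratio
    stepdist = stepsize * voxel_size   # world-space spacing, NOT constant
    # adjacent sample points along a ray differ by viewdir * stepdist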

how to generate render_demo.gif

Hi, thanks for your nice work. Could you please tell me how to create render_demo.gif to display the changes made during training?

[question] deformation network

Thanks for open sourcing your great work!

Could you explain a bit why the deformation network does not require any regularization, e.g. a penalty encouraging small deltas?

The baseline numbers of D-NeRF and T-NeRF

Hi. Great work!

I am curious where the numbers for D-NeRF and T-NeRF reported in Table 1 come from. Are they from the officially pretrained models, or did you retrain and re-evaluate them? I am asking because these numbers seem much better than the ones reported in the paper. Thanks.

About the depth output

Hi,

Thanks for releasing the code.

I noticed that the depth produced by the model does not correspond to real-world depth, and it also increases when the grid expands.

I'd like to ask whether there is any way to obtain depth on a real-world scale.

Thanks in advance.

Could we retrieve the underlying 3D volume

A temporal NeRF can render pose- and time-dependent images: given a camera pose and a time, it outputs the corresponding image. But an interesting question is whether we can retrieve the underlying 3D volume at any given time. Note that the reconstructed volume would be different for each time. My reconstruction results are very poor.
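For what it's worth, a purely illustrative sketch of the idea in this question; query_density is a hypothetical handle to a trained model:

    import numpy as np

    def extract_volume(query_density, t, res=128, bound=1.0):
        # Sample the model's density field on a regular lattice at a fixed time t.
        xs = np.linspace(-bound, bound, res)
        pts = np.stack(np.meshgrid(xs, xs, xs, indexing='ij'), -1).reshape(-1, 3)
        sigma = query_density(pts, t).reshape(res, res, res)
        return sigma  # e.g. threshold or run marching cubes to get a surface at time t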

Training with multi-view synced data

Hi,

Thanks for making the code public. I want to know whether training on multi-view dynamic data would cause any obvious issues, and whether it would be as straightforward as using the same time encoding for the multi-view frames at each time instance.
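A tiny sketch of the suggestion in this question (hypothetical names): synced frames from different cameras share one normalized timestamp:

    num_cams, num_steps = 14, 30
    samples = [(cam, k / (num_steps - 1))   # the same t for all cameras at step k
               for k in range(num_steps)
               for cam in range(num_cams)]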

Question about the coordinates motion

I visualized the delta x/y/z of the first 15 test frames from the 3dprinter dataset:

dx = self.query_time(input_pts, ts, self._time, self._time_out)

with the deformation network, using:

dx_marched = segment_coo(
    src=(weights.unsqueeze(-1) * dx),  # weight each sample's delta by its ray weight
    index=ray_id,                      # which ray each sampled point belongs to
    out=torch.zeros([N, 3]),
    reduce='sum')                      # accumulate a per-ray delta

to get a 2D projection:

where the depth image and RGB image are both normal, but the delta looks really strange.

Does this indicate that the whole scene (including the background) is treated as dynamic?

novel view generation issue. overfitting?

Hi,

Thank you for releasing the code. It is quite helpful to me.

I used my own dataset, which contains 140 images with different timestamps, but the images come from only 14 training camera poses. I found that the network cannot generate good novel-view images. I used the same dataset with D-NeRF and it worked well. Are 14 training views too sparse for training the network? Have you seen the same problem?

Thanks

How to solve: OSError: [WinError 127] The specified procedure could not be found

Background:
While studying the tutorial "Demand forecasting with the Temporal Fusion Transformer" and trying to run "import lightning.pytorch" and "from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet", I encountered the error "OSError: [WinError 127] The specified procedure could not be found."
I have read several previous issues and guess the reason is that the versions of torch-sparse, torch-scatter, torch-cluster, and Python are incompatible with each other, but the previous solutions did not work for me.

So I want to ask how to adjust my versions to solve this problem.

Environment:
OS: Windows 11
Python version: 3.9.18

Versions: (screenshots of the installed package versions omitted)

Attachment: the full traceback (the repeated PyCharm import-hook frames are abbreviated):

Traceback (most recent call last):
  File "E:\anaconda3\envs\py39_env\lib\site-packages\IPython\core\interactiveshell.py", line 3550, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<input>", line 1, in <module>
    from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet
  ... (the import cascades through pytorch_forecasting, lightning, and torchmetrics, and finally reaches torchaudio) ...
  File "E:\anaconda3\envs\py39_env\lib\site-packages\torchmetrics\utilities\imports.py", line 51, in <module>
    _TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0")
  File "E:\anaconda3\envs\py39_env\lib\site-packages\lightning_utilities\core\imports.py", line 77, in compare_version
    pkg = importlib.import_module(package)
  File "E:\anaconda3\envs\py39_env\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "E:\anaconda3\envs\py39_env\lib\site-packages\torchaudio\__init__.py", line 1, in <module>
    from torchaudio import (  # noqa: F401
  File "E:\anaconda3\envs\py39_env\lib\site-packages\torchaudio\_extension\__init__.py", line 43, in <module>
    _load_lib("libtorchaudio")
  File "E:\anaconda3\envs\py39_env\lib\site-packages\torchaudio\_extension\utils.py", line 61, in _load_lib
    torch.ops.load_library(path)
  File "E:\anaconda3\envs\py39_env\lib\site-packages\torch\_ops.py", line 852, in load_library
    ctypes.CDLL(path)
  File "E:\anaconda3\envs\py39_env\lib\ctypes\__init__.py", line 374, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 127] The specified procedure could not be found.

Training Problem

Hi,

Thanks for releasing the code.

When I run python run.py --config configs/nerf-*/standup.py to train, it raises this error: run.py: error: unrecognized arguments: configs/nerf-small/standup.py

So I tried python run.py --config ./configs/nerf-*/standup.py but got the same error.

Have you encountered this problem, and how can it be solved?

I'm running in the following environment

Ubuntu 20.04.4 LTS
PyTorch version : 1.10.2
CUDA version : 10.1

The reproduction of NSFF in the hypernerf_vrig dataset

Hello author, thank you very much for this great work. I have a request: one of my papers is under major revision, and a reviewer asked me to add results on hypernerf_vrig, which is a bit difficult for me. So I was wondering if you could send me your reproduction code for NSFF. Thank you very much.

Depth Units

Thanks for the great project. Could you tell me what units the rendered depth is in, and how I could convert it to the same units used for the camera positions?

The version of pytorch

Hello. I'm currently trying out your method and getting a warning like:

"warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")"

and an error like:

"RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1."

I would be grateful if you could let me know which torch and CUDA versions you used, and also which GPU you used to run the code.

Some Running Bugs

I met some errors when I ran the following command: "python run.py --config=configs/nerf-small/standup.py --render_test --render_only --eval_psnr --eval_lpips_vgg --eval_ssim"

self._handle = _dlopen(self._name, mode)

OSError: [WinError 127] The specified procedure could not be found.

Could anyone give some help? Thanks in advance.

Question regarding canonical space visualised in paper

Hey! Thanks for your work.

I was wondering how you visualised the canonical space in Figure 9 of your paper. The radiance network expects the time embedding, so I am not sure how to decode the canonical space without any time input. I hope you can give me some hints.

Thanks!

FAILED: render_utils_kernel.cuda.o

D:/NERF/TiNeuVox-main/lib/cuda/render_utils_kernel.cu(368): error: calling a host function("pow<double, float, (int)0> ") from a global function("raw2alpha_cuda_kernel ") is not allowed

D:/NERF/TiNeuVox-main/lib/cuda/render_utils_kernel.cu(368): error: identifier "pow<double, float, (int)0> " is undefined in device code

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include\crt\math_functions.hpp(980): error: calling a host function("pow<double, float, (int)0> ") from a global function("raw2alpha_backward_cuda_kernel ") is not allowed

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\include\crt\math_functions.hpp(980): error: identifier "pow<double, float, (int)0> " is undefined in device code

4 errors detected in the compilation of "D:/NERF/TiNeuVox-main/lib/cuda/render_utils_kernel.cu".
render_utils_kernel.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "C:\Users\80920\.conda\envs\pymarl\lib\site-packages\torch\utils\cpp_extension.py", line 1673, in _run_ninja_build
    env=env)
  File "C:\Users\80920\.conda\envs\pymarl\lib\subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run.py", line 17, in <module>
    from lib import tineuvox, utils
  File "D:\NERF\TiNeuVox-main\lib\tineuvox.py", line 20, in <module>
    verbose=True)
  File "C:\Users\80920\.conda\envs\pymarl\lib\site-packages\torch\utils\cpp_extension.py", line 1091, in load
    keep_intermediates=keep_intermediates)
  File "C:\Users\80920\.conda\envs\pymarl\lib\site-packages\torch\utils\cpp_extension.py", line 1302, in _jit_compile
    is_standalone=is_standalone)
  File "C:\Users\80920\.conda\envs\pymarl\lib\site-packages\torch\utils\cpp_extension.py", line 1407, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "C:\Users\80920\.conda\envs\pymarl\lib\site-packages\torch\utils\cpp_extension.py", line 1683, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'render_utils_cuda'

Wrong resize with half-res

In lib/load_dnerf.py, H and W are swapped when converting to half resolution:

imgs_half_res[i] = cv2.resize(img, (H, W), interpolation=cv2.INTER_AREA)

It currently works because H and W are equal, but with images whose height and width differ it raises an exception, since cv2.resize expects dsize as (width, height).
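A one-line fix sketch, assuming H and W already hold the halved height and width at that point in lib/load_dnerf.py:

    # cv2.resize takes dsize as (width, height), so the arguments must be (W, H).
    imgs_half_res[i] = cv2.resize(img, (W, H), interpolation=cv2.INTER_AREA)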
