
hcn-pytorch's Introduction

A PyTorch Reproduction of HCN

Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation. Chao Li, Qiaoyong Zhong, Di Xie, Shiliang Pu, IJCAI 2018.

Arxiv Preprint

Features

1. Dataset

  • NTU RGB+D: Cross View (CV), Cross Subject (CS)
  • SBU Kinect Interaction
  • PKU-MMD

2. Tasks

  • Action recognition
  • Action detection

3. Visualization

  • Visdom supported.

Prerequisites

Our code is based on Python 3.5. The following dependencies are required to run the code:

  • Python >= 3.5
  • PyTorch == 0.4.0
  • torchnet
  • Visdom
  • Version information for other Python packages can be found in requirements.txt

Usage

Data preparation

NTU RGB+D

Transform the raw NTU RGB+D data into numpy arrays (memmap format) with this command:

python ./feeder/ntu_gendata.py --data_path <path for raw skeleton dataset> --out_folder <path for new dataset>
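After generation, a quick way to sanity-check the output is to load it back. This is a minimal sketch assuming the feeder follows the st-gcn naming convention it was derived from (a split subfolder such as xview containing train_data.npy and train_label.pkl); adjust the paths if your output layout differs.

import pickle
import numpy as np

data_dir = "<path for new dataset>/xview"   # assumed split subfolder name

# mmap_mode='r' reads the (N, C, T, V, M) array lazily instead of loading it all into RAM
data = np.load(data_dir + "/train_data.npy", mmap_mode="r")

with open(data_dir + "/train_label.pkl", "rb") as f:
    sample_names, labels = pickle.load(f)   # assumed (names, labels) tuple layout

print(data.shape)                           # e.g. (samples, 3, frames, 25 joints, 2 persons)
print(len(labels), labels[:5])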
Other Datasets

Not supported now.

Training

Before you start training, you have to launch the visdom server.

python -m visdom.server

To train the model, you should note that:

  • --dataset_dir is the parent path for all the datasets;
  • --num is the experiment trial number (type: list).
python main.py --dataset_dir <parents path for all the datasets> --mode train --model_name HCN --dataset_name NTU-RGB-D-CV --num 01

To run a new trial with different parameters, you need to:

  • First, run the above training command with a new trial number, e.g. --num 03; this will produce an error because the new trial's parameters file does not exist yet.
  • Second, copy a parameters file from ./HCN/experiments/NTU-RGB-D-CV/HCN01/params.json to the path of your new trial, ./HCN/experiments/NTU-RGB-D-CV/HCN03/params.json, and modify it as you want (see the sketch after this list).
  • Finally, run the above training command again; it will work.
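A minimal sketch of the same workflow done programmatically: it copies an existing trial's params.json into the new trial folder and tweaks a couple of fields. The key names lr and batch_size are taken from the parameters logged elsewhere on this page; check your own params.json for the authoritative set of keys.

import json
import shutil
from pathlib import Path

# copy the parameters of trial HCN01 into a new trial folder HCN03
src = Path("./HCN/experiments/NTU-RGB-D-CV/HCN01/params.json")
dst_dir = Path("./HCN/experiments/NTU-RGB-D-CV/HCN03")
dst_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(src, dst_dir / "params.json")

# tweak a couple of hyperparameters for the new trial (example values only)
params = json.loads((dst_dir / "params.json").read_text())
params["lr"] = 0.0005        # logged default is 0.001
params["batch_size"] = 32    # logged default is 64
(dst_dir / "params.json").write_text(json.dumps(params, indent=4))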

Testing

python main.py --dataset_dir <parents path for all the datasets> --mode test --load True --model_name HCN --dataset_name NTU-RGB-D-CV --num 01

Load and Training

You can also load a partially trained model and resume training from a specific checkpoint with the following command:

python main.py --dataset_dir <parents path for all the datasets> --mode load_train --load True --model_name HCN --dataset_name NTU-RGB-D-CV --num 01 --load_model <path for  trained model>
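Before resuming, it can help to inspect what the saved checkpoint actually contains, since the exact contents (model weights, optimizer state, epoch counter) depend on how main.py saves them. A minimal, hedged sketch:

import torch

checkpoint = torch.load("<path for trained model>", map_location="cpu")
print(type(checkpoint))
if isinstance(checkpoint, dict):
    # key names such as 'state_dict' or 'epoch' are assumptions; print them to confirm
    print(list(checkpoint.keys()))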

Results

Table

The expected Top-1 accuracy of the model on NTU RGB+D is shown below. (There is an accuracy gap: I am not an author of the original HCN paper; the repo was reproduced from the paper text and has not been tuned carefully.)

Model | Normalized Sequence Length | FC Neuron Numbers | NTU RGB+D Cross Subject (%) | NTU RGB+D Cross View (%)
--- | --- | --- | --- | ---
HCN[1] | 32 | 256 | 86.5 | 91.1
HCN | 32 | 256 | 84.2 | 89.2
HCN | 64 | 512 | 84.9* | 90.9*

[1] http://arxiv.org/pdf/1804.06055.pdf

Figures

  • Loss & accuracy [CV]
  • Confusion matrix
  • Loss & accuracy [CS]

Reference

[1] Chao Li, Qiaoyong Zhong, Di Xie, Shiliang Pu. Co-occurrence Feature Learning from Skeleton Data for Action Recognition and Detection with Hierarchical Aggregation. IJCAI 2018.

[2] yysijie/st-gcn: referred for some code of dataset processing.

hcn-pytorch's People

Contributors

anirudh257, dependabot[bot], huguyuehuhu


hcn-pytorch's Issues

metrics_mean = {metric: np.mean([x[metric] for x in summ]) for metric in summ[0]}

Hello,
I met a problem with this line. The error is:
File "/appl/soft/ai/miniconda3/envs/pytorch-1.3.0-1/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 3257, in mean
out=out, **kwargs)
File "/appl/soft/ai/miniconda3/envs/pytorch-1.3.0-1/lib/python3.7/site-packages/numpy/core/_methods.py", line 161, in _mean
ret = ret.dtype.type(ret / rcount)
AttributeError: 'torch.dtype' object has no attribute 'type'

I tried torch.mean(torch.stack(...)), but it still does not work. Do you have any suggestions?
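A hedged sketch of one possible workaround: convert every metric value to a plain Python float before averaging, so np.mean never sees torch tensors. It assumes summ is a list of dicts whose values are numbers or 0-dim tensors, as the failing line suggests.

import numpy as np
import torch

def to_float(x):
    # turn 0-dim tensors into Python floats; leave plain numbers untouched
    return x.item() if torch.is_tensor(x) else float(x)

# toy stand-in for the real per-batch summary list
summ = [{"loss": torch.tensor(0.7), "acc": 0.55},
        {"loss": torch.tensor(0.5), "acc": 0.60}]

metrics_mean = {metric: np.mean([to_float(x[metric]) for x in summ]) for metric in summ[0]}
print(metrics_mean)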

used 2D skeleton dataset??

Keeping everything else unchanged, I modified the code to use 2D skeleton data as input for training, with the best model from 3D skeleton training used as the pre-trained model.
However, the accuracy after training is very low, less than 2%.
Is there any way to improve the accuracy with 2D skeleton input?
Thank you very much.

Figures

Hi,
Thank you for sharing your work!
How did you generate the loss & accuracy and confusion matrix figures using visdom? Could you explain? Thanks!

Using OpenPose for getting skeleton data from ntu rgb videos

Initially , I edited your read_xyz function to get 2D coordinates instead of 3D coordinates from the raw skeleton data downloaded from ntu dataset website. I used two classes "A44" and "A26". I changed the num_class to 2 and input_channels to 2. I trained the model and the evaluation accuracy was 99.5%.

Now for the same two classes, I extracted 2D skeleton data from OpenPose and preprocessed the data. When I ran the code for training the model , I see that the training and evaluation accuracy are around 45 - 50 %.

Please let me know why this is happening?

demo?

I implemented a demo that does skeleton extraction followed by action recognition.
I modified the model to use a smaller number of keypoints (14) for action recognition and retrained it on 6 classes. The training accuracy was 0.90102261.
I also modified the network parameters: window size is 16 and person num is 1.
But the demo's action recognition is poor.
I accumulate 16 frames of input for the HCN, and each new window starts from the midpoint of the previous one (50% overlap).

Please give me some advice, thank you

download NTU RGB+D dataset??

Thanks for the work you've done!
Due to an error on the NTU dataset server, I have not been able to apply to download the dataset. Could you please send me the dataset?

thanks very much~

Current state working?

Hi,

I have trouble running the code of the current state in the repo.

  1. To preprocess the data, the imports in feeder have to be adapted.
  2. PyTorch should not be installed via pip. Pip forces a version that may not be compatible with the GPU and the system. The corresponding version should be installed manually to match the hardware. In my case the proposed pip version introduced a delay of 5 minutes after loading the dataset and did not work.
  3. Maybe add a note that a running visdom server is needed.
  4. Maybe there is an issue with certifi from requirements.txt, but I am not sure whether it is a problem on my side, so I excluded that from the pull request.

I made a pull request fixing 2 and 3. The imports are very confusing; maybe you could fix that.

Thanks & Best,
Max

Only using 2 people skeleton sequence?

Hi,

I want to ask: are you only using two-person skeleton sequences for training and testing?
If I want to use data that includes both single-person and multi-person sequences, how can I do that?

Thanks!

Trouble with running

After transforming the raw data and moving the json file, I tried to run the training program from the README, but it keeps reporting "[Errno 99] Cannot assign requested address".
At first I thought it was because I had not launched visdom, so I used "python -m visdom.server" to launch it; it printed "Application Started", and I cannot enter any further commands after that message.
I hope I can get some help with the "[Errno 99] Cannot assign requested address" error and with launching visdom.

Difference in reported accuracy

Hi,
I ran your experiment with the following setting:

HCN[1] | 32 | 256

However, my accuracy is only 83.8 for cross-subject evaluation.

Also, for

HCN[1] | 32 | 512

my accuracy is only 84.5 for cross-subject evaluation.

Official implementation of HCN

Hi All,

Thanks to @huguyuehuhu and the other contributors for the excellent implementation of HCN. Here is good news! We finally released the official source code of HCN at hikvision-research/skelact, along with some new works from our lab. The HCN model was originally implemented in TensorFlow and has been reimplemented in PyTorch. The performance of HCN matches the paper (with slightly different implementation details). Please check the SkelAct repo; we welcome contributions from the community.

RuntimeError: cuda runtime error (710) :

python main.py --dataset_dir --mode train --model_name HCN --dataset_name NTU-RGB-D-CV --num 01
I got RuntimeError: cuda runtime error (710) during training.
PyTorch version=1.4.0

2020-02-15 23:45:01,865:INFO: seed: 0
2020-02-15 23:45:01,865:INFO: batch_size: 64
2020-02-15 23:45:01,866:INFO: scheduler_gamma3: 0.5
2020-02-15 23:45:01,866:INFO: lr_step: [100, 160, 200]
2020-02-15 23:45:01,866:INFO: gpu_id: 0
2020-02-15 23:45:01,866:INFO: lr_decay_type: exp
2020-02-15 23:45:01,867:INFO: patience: 20
2020-02-15 23:45:01,867:INFO: test_feeder_args: {'window_size': 32, 'normalization': False, 'random_valid_choose': False, 'random_shift': False, 'p_interval': [0.95], 'origin_transfer': 0, 'debug': False, 'data_path': None, 'random_move': False, 'crop_resize': True, 'label_path': None}
2020-02-15 23:45:01,867:INFO: model_version: HCN
2020-02-15 23:45:01,867:INFO: dataset_name: NTU-RGB-D-CV
2020-02-15 23:45:01,868:INFO: data_parallel: False
2020-02-15 23:45:01,868:INFO: optimizer: Adam
2020-02-15 23:45:01,868:INFO: restore_file: None
2020-02-15 23:45:01,868:INFO: model_args: {'window_size': 32, 'num_class': 60, 'num_person': 2, 'num_joint': 25, 'in_channel': 3, 'out_channel': 64}
2020-02-15 23:45:01,869:INFO: loss_args: {'type': 'CE'}
2020-02-15 23:45:01,869:INFO: scheduler_gamma: 0.1
2020-02-15 23:45:01,869:INFO: dataset_dir: /home/ashish/Documents/BTP/HCN-pytorch-master/feeder/data/
2020-02-15 23:45:01,869:INFO: start_epoch: 0
2020-02-15 23:45:01,869:INFO: num_epochs: 400
2020-02-15 23:45:01,870:INFO: weight_decay: 0.0001
2020-02-15 23:45:01,870:INFO: lr: 0.001
2020-02-15 23:45:01,870:INFO: num_workers: 4
2020-02-15 23:45:01,870:INFO: clip: 0.5
weight initial finished!
2020-02-15 23:45:03,664:INFO: HCN(
(conv1): Sequential(
(0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1))
(1): ReLU()
)
(conv2): Conv2d(64, 32, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0))
(conv3): Sequential(
(0): Conv2d(25, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv4): Sequential(
(0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Dropout2d(p=0.5, inplace=False)
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv1m): Sequential(
(0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1))
(1): ReLU()
)
(conv2m): Conv2d(64, 32, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0))
(conv3m): Sequential(
(0): Conv2d(25, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv4m): Sequential(
(0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Dropout2d(p=0.5, inplace=False)
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv5): Sequential(
(0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Dropout2d(p=0.5, inplace=False)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(conv6): Sequential(
(0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU()
(2): Dropout2d(p=0.5, inplace=False)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(fc7): Sequential(
(0): Linear(in_features=1024, out_features=512, bias=True)
(1): ReLU()
(2): Dropout2d(p=0.5, inplace=False)
)
(fc8): Linear(in_features=512, out_features=60, bias=True)
)
2020-02-15 23:45:03,665:INFO: Loading the datasets...
2020-02-15 23:45:03,945:INFO: - done.
2020-02-15 23:45:03,945:INFO: Starting training for 400 epoch(s)
2020-02-15 23:45:03,945:INFO: lr decay:exp
/home/ashish/anaconda3/lib/python3.5/site-packages/torch/optim/lr_scheduler.py:122: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
2020-02-15 23:45:03,946:INFO: Epoch 1/400
0%| | 0/1 [00:00<?, ?it/s]/home/ashish/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py:2416: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/home/ashish/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py:2416: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [8,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [9,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [10,0,0] Assertion t >= 0 && t < n_classes failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [11,0,0] Assertion t >= 0 && t < n_classes failed.
THCudaCheck FAIL file=/pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu line=110 error=710 : device-side assert triggered
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "main.py", line 522, in
args.model_dir,logger, params.restore_file)
File "main.py", line 319, in train_and_evaluate
train_metrics,train_confusion_meter = train(model, optimizer, loss_fn, train_dataloader, metrics, params,logger)
File "main.py", line 87, in train
loss_bag = loss_fn(output_batch,labels_batch,current_epoch=params.current_epoch, params=params)
File "/home/ashish/Documents/BTP/HCN-pytorch-master/model/HCN.py", line 151, in loss_fn
CE = nn.CrossEntropyLoss()(outputs, labels)
File "/home/ashish/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 532, in call
result = self.forward(*input, **kwargs)
File "/home/ashish/anaconda3/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 916, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/ashish/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py", line 2021, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/ashish/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py", line 1838, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:110
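A minimal diagnostic sketch for this kind of failure: the device-side assert (t >= 0 && t < n_classes) usually means a label falls outside [0, num_class). Checking the label file on the CPU before training makes any bad values visible. The pickle layout (names, labels) and the xview subfolder name are assumptions based on the st-gcn style feeder this repo uses.

import pickle
import numpy as np

num_class = 60
with open("<path for new dataset>/xview/train_label.pkl", "rb") as f:
    sample_names, labels = pickle.load(f)

labels = np.asarray(labels)
print("label range:", labels.min(), "to", labels.max())   # should be 0 to 59 for NTU RGB+D
bad = np.where((labels < 0) | (labels >= num_class))[0]
print(len(bad), "samples with out-of-range labels:", bad[:10])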

CS CV reversed

The 'Cross view' and 'Cross subject' in the result table are reversed

about accuracy

Hello, I used your code and got results with around 70 and 80 accuracy. The HCN01 configuration was used. What configuration do you use to get the best result?

How to deal with absent data

Hello, I have a question:

If some of my data are missing (for example, the skeletons of some frames are missing, or some joints are missing in a frame), how should I deal with these cases?

Thanks!

fps?

Thanks for your great work!
Could you tell me the running time (fps) of this algorithm? Thanks.

Higher test set accuracy?

Thank you for this nice repo, and congrats on your nice results. I have some questions about your repo.

As stated in #2, the test set accuracy is significantly higher than the training accuracy during roughly the first 100 epochs. Is this because of the dropouts or some other reason?

Also, my second question is: is there any difference between the eval accuracy during training mode and the accuracy stated in --mode test? They are the same, right?

My other question is: what is the difference between HCN[1] and HCN? Aren't they the same network with the same configuration?

HCN[1] | 32 | 256 | 86.5 | 91.1
HCN | 32 | 256 | 84.2 | 89.2

Edit: I understand the first two questions, so I disabled them.

2D-skeleton data format

Hi, can I use my own 2D skeleton dataset with this code? Should I convert my dataset to .npy and .pkl files? Thank you very much.

Support of SBU Interaction Dataset

Hello there,

We're trying to reproduce the performance you reported on the SBU Interaction dataset. Is there a way to make the current version support the SBU dataset?

the download of dataset

I have a simple question about downloading the NTU dataset.
The total dataset is 1.3 TB. Does that mean we need to download all of it, or only part of it, such as the 3D skeletons (5.8 GB)?
Thanks!

OSError: [Errno 12] Cannot allocate memory

Hello,
I am getting a "Cannot allocate memory" error, and I don't understand why it occurs. Is this related to my server's memory configuration?
Traceback (most recent call last):
File "main.py", line 524, in
loss_fn, metrics, params, args.model_dir,logger, params.restore_file)
File "main.py", line 402, in test_only
train_metrics, train_confusion_meter = evaluate(model, loss_fn, train_dataloader, metrics, params, logger)
File "main.py", line 161, in evaluate
for data_batch, labels_batch in dataloader:
File "/home/messor/.pyenv/versions/shinerio-python3.6.4-env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 501, in iter
return _DataLoaderIter(self)
File "/home/messor/.pyenv/versions/shinerio-python3.6.4-env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 289, in init
w.start()
File "/home/messor/.pyenv/versions/3.6.4/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/messor/.pyenv/versions/3.6.4/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/messor/.pyenv/versions/3.6.4/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/home/messor/.pyenv/versions/3.6.4/lib/python3.6/multiprocessing/popen_fork.py", line 26, in init
self._launch(process_obj)
File "/home/messor/.pyenv/versions/3.6.4/lib/python3.6/multiprocessing/popen_fork.py", line 73, in _launch
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory

Please suggest what I could do to avoid this issue.
Thank You!
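A hedged workaround sketch: [Errno 12] raised from os.fork() inside the DataLoader usually means the worker processes cannot be forked for lack of memory. Reducing num_workers (0 keeps data loading in the main process) is a common fix; the snippet assumes num_workers is one of the keys stored in the trial's params.json, as the parameters logged elsewhere on this page suggest.

import json
from pathlib import Path

params_path = Path("./HCN/experiments/NTU-RGB-D-CV/HCN01/params.json")
params = json.loads(params_path.read_text())
params["num_workers"] = 0     # 0 avoids forking worker processes entirely
params_path.write_text(json.dumps(params, indent=4))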
