
tfusion's Introduction

TFusion

CVPR2018: Unsupervised Cross-dataset Person Re-identification by Transfer Learning of Spatio-temporal Patterns

TFusion architecture

  • We present a novel method to learn pedestrians' spatio-temporal patterns in unlabeled target datasets by transferring the visual classifier from the source dataset. The algorithm requires no prior knowledge about the spatial distribution of cameras and no assumption about how people move in the target environment.

  • We propose a Bayesian fusion model, which combines the learned spatio-temporal patterns with visual features to achieve high-performance person Re-ID on unlabeled target datasets.

  • We propose a learning-to-rank based mutual promotion procedure, in which the fusion classifier teaches the weaker visual classifier via ranking results on the unlabeled dataset. This mutual learning mechanism can be applied to many domain adaptation problems. (A sketch of the fusion scoring idea follows this list.)
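As a rough illustration of the fusion idea (a minimal sketch with our own naming, not the exact formula from the paper), the visual similarity of a candidate pair can be rescaled by how much more likely its time delta is under the learned spatio-temporal model than under a random-classifier reference model:

def fusion_score(visual_score, st_score, rand_score, eps=1e-6):
    # Hypothetical Bayesian-style fusion: boost pairs whose time delta is
    # more likely under the learned spatio-temporal model (st_score) than
    # under the random-classifier reference model (rand_score).
    return visual_score * (st_score + eps) / (rand_score + eps)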

This code is ONLY released for academic use.

How to use

We split TFusion into two components:

  • rank-reid
    • Framework: Keras and TensorFlow
    • Trains a ResNet-based Siamese network on the source dataset
    • Performs learning to rank on the target dataset
  • TrackViz
    • Dependencies: common libraries including numpy, pickle, matplotlib, seaborn
    • Builds the spatio-temporal model from visual classification results
    • Bayesian fusion

Components communicate via ranking results (the *.log files). We use these results for visualization and analysis in our experiments, so they are saved on the file system under TrackViz/data.

Written and tested with Python 2, Keras 2.1.5, and TensorFlow 1.4.

Attention: make sure you are using the repository versions specified in TFusion, namely TrackViz@5a5c8a0 and rank-reid@b228897. You may run into errors with other versions.
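If you cloned the latest versions of the two components, you can switch to the pinned commits with standard git commands:

cd TrackViz && git checkout 5a5c8a0
cd ../rank-reid && git checkout b228897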

Dataset

Download

Pre-process

  • CUHK01

We only use CUHK01 as a source dataset, so all of its images are used for pretraining; place all images in a single directory.

  • VIPeR

The same as CUHK01.

  • GRID as Source Dataset

When GRID serves as the source dataset, we use all of its labeled images for pretraining, so place all labeled images in a directory, for example "grid_label".

  • Market-1501

    • download
    • rename the training directory to 'train', the probe directory to 'probe', and the gallery directory to 'test' (a renaming sketch follows this list)
  • GRID as Target Dataset

    • following the dataset instructions, split the dataset into ten cross-validation sets
    • in each cross-validation set, rename the training directory to 'train', the probe directory to 'probe', and the gallery directory to 'test'
    • you can also refer to 'TrackViz/data/grid' for more details about GRID cross-validation.
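For Market-1501, the renaming step can be scripted; a minimal sketch, assuming the standard release directory names bounding_box_train / query / bounding_box_test and a hypothetical dataset root:

import os

root = '/path/to/dataset/Market-1501'  # hypothetical path, adjust to your setup
renames = {'bounding_box_train': 'train',  # training directory
           'query': 'probe',               # probe directory
           'bounding_box_test': 'test'}    # gallery directory
for src, dst in renames.items():
    src_path, dst_path = os.path.join(root, src), os.path.join(root, dst)
    if os.path.isdir(src_path):
        os.rename(src_path, dst_path)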

Finally, your data will look like this:

Market-1501
├── probe
│   ├── 0003_c1s6_015971_00.jpg
│   ├── 0003_c3s3_064744_00.jpg
│   ├── 0003_c4s6_015641_00.jpg
│   ├── 0003_c5s3_065187_00.jpg
│   └── 0003_c6s3_088392_00.jpg
├── test
│   ├── 0003_c1s6_015971_02.jpg
│   ├── 0003_c1s6_015996_02.jpg
│   ├── 0003_c4s6_015716_03.jpg
│   ├── 0003_c5s3_065187_01.jpg
│   ├── 0003_c6s3_088392_04.jpg
│   └── 0003_c6s3_088442_04.jpg
└── train
    ├── 0002_c1s1_000451_03.jpg
    ├── 0002_c1s1_000551_01.jpg
    ├── 0002_c1s1_000776_01.jpg
    ├── 0002_c1s1_000801_01.jpg
    ├── 0002_c1s1_069056_02.jpg
    └── 0002_c6s1_073451_02.jpg
grid_train_probe_gallery
├── cross0
│   ├── probe
│   │   ├── 0002_1_25008_169_19_94_224.jpeg
│   │   ├── 0003_1_25008_57_44_97_265.jpeg
│   │   ├── 0004_1_25072_204_72_106_277.jpeg
│   │   └── 0005_1_25120_210_22_84_215.jpeg
│   ├── test
│   │   ├── 0000_1_25698_101_16_87_246.jpeg
│   │   ├── 0000_1_26113_116_13_72_212.jpeg
│   │   ├── 0000_1_26207_113_25_69_172.jpeg
│   │   └── gallery.txt
│   └── train
│       ├── 0001_1_25004_107_32_106_221.jpeg
│       ├── 0001_2_25023_116_134_128_330.jpeg
│       ├── 0009_1_25208_126_19_71_215.jpeg
│       ├── 0009_2_25226_176_72_87_246.jpeg
│       └── 0248_5_33193_101_100_90_308.jpeg
├── cross1
├── cross2
├── cross3
├── cross4
├── cross5
├── cross6
├── cross7
├── cross8
└── cross9

Place all datasets in the same directory, like this:

dataset
├── cuhk01
├── grid_train_probe_gallery
├── Market-1501
└── source

Configuration

  • Pretrain Config: Modify all paths containing '/home/cwh' in rank-reid/pretrain/pair_train.py to your corresponding paths.
  • Fusion Config
    • Modify all paths containing '/home/cwh' in TrackViz/ctrl/transfer.py to your corresponding paths.
    • Modify all paths containing '/home/cwh' in rank-reid/rank-reid.py to your corresponding paths.

Pretrain

Pretrain ResNet50 and the Siamese network using the source datasets.

cd rank-reid/pretrain && python pair_train.py

This code will save the pretrained models in the pretrain directory:

pretrain
├── cuhk_pair_pretrain.h5
├── cuhk_softmax_pretrain.h5
├── eval.py
├── grid-cv-0_pair_pretrain.h5
├── grid-cv-0_softmax_pretrain.h5
├── grid-cv-1_pair_pretrain.h5
├── grid-cv-1_softmax_pretrain.h5
├── grid-cv-2_pair_pretrain.h5
├── grid-cv-2_softmax_pretrain.h5
├── grid-cv-3_pair_pretrain.h5
├── grid-cv-3_softmax_pretrain.h5
├── grid-cv-4_pair_pretrain.h5
├── grid-cv-4_softmax_pretrain.h5
├── grid-cv-5_pair_pretrain.h5
├── grid-cv-5_softmax_pretrain.h5
├── grid-cv-6_pair_pretrain.h5
├── grid-cv-6_softmax_pretrain.h5
├── grid-cv-7_pair_pretrain.h5
├── grid-cv-7_softmax_pretrain.h5
├── grid-cv-8_pair_pretrain.h5
├── grid-cv-8_softmax_pretrain.h5
├── grid-cv-9_pair_pretrain.h5
├── grid-cv-9_softmax_pretrain.h5
├── grid_pair_pretrain.h5
├── grid_softmax_pretrain.h5
├── __init__.py
├── market_pair_pretrain.h5
├── market_softmax_pretrain.h5
├── pair_train.py
├── pair_transfer.py
├── source_pair_pretrain.h5
└── source_softmax_pretrain.h5
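To sanity-check one of the saved models, you can load it with Keras (a minimal sketch; it assumes the Keras version noted above, and the pair models may additionally need custom_objects if custom losses were used):

from keras.models import load_model

# Load a pretrained model saved by pair_train.py and print its architecture.
model = load_model('pretrain/market_softmax_pretrain.h5')
model.summary()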

TFusion

This step includes the direct visual transfer, fusion, and learning to rank:

cd TrackViz && python ctrl/transfer.py

Results will be saved in TrackViz/data

TrackViz/data
├── source_target-r-test # transfer after learning to rank on test set
│   ├── cross_filter_pid.log
│   ├── cross_filter_score.log
│   ├── renew_ac.log
│   ├── renew_pid.log
│   └── sorted_deltas.pickle
├── source_target-r-train # transfer after learning to rank on training set
│   ├── cross_filter_pid.log
│   ├── cross_filter_score.log
│   ├── cross_mid_score.log
│   ├── renew_ac.log
│   ├── renew_pid.log
│   └── sorted_deltas.pickle
├── source_target-r-train_diff # ST model built by random classifier minus visual classifier after learning to rank
│   ├── renew_pid.log
│   └── sorted_deltas.pickle
├── source_target-r-train_rand  # ST model built by random classifier after learning to rank
│   ├── renew_pid.log
│   └── sorted_deltas.pickle
├── source_target-test # direct transfer from source to target test set
│   ├── cross_filter_pid_32.log
│   ├── cross_filter_pid.log
│   ├── cross_filter_score.log
│   ├── renew_ac.log
│   ├── renew_pid.log
│   └── sorted_deltas.pickle
├── source_target-train # direct transfer from source to target training set
│   ├── cross_filter_pid.log # pids sorted by fusion scores
│   ├── cross_filter_score.log # sorted fusion scores corresponding to the pids
│   ├── cross_mid_score.log # can be used to generate pseudo labels; ignore it
│   ├── renew_ac.log # sorted vision scores corresponding to the pids
│   ├── renew_pid.log # pids sorted by vision scores
│   └── sorted_deltas.pickle # stores time deltas, i.e. the ST model built by the visual classifier
├── source_target-train_diff # stores time deltas; ST model built by the random classifier minus the visual classifier
│   ├── renew_pid.log
│   └── sorted_deltas.pickle
└── source_target-train_rand # stores time deltas; ST model built by a random classifier
    ├── renew_pid.log
    └── sorted_deltas.pickle
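The .log files are plain text. As the FAQ below explains, renew_pid.log holds one row of similarity-sorted person ids per query image; here is a minimal reading sketch (the space-separated row format is an assumption inferred from that FAQ):

def read_rank_log(path):
    # Each line: person ids sorted by descending similarity to one query image.
    with open(path) as f:
        return [[int(x) for x in line.split()] for line in f if line.strip()]

ranks = read_rank_log('TrackViz/data/source_target-train/renew_pid.log')
print('top-1 pid for the first query: %d' % ranks[0][0])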

Evaluation

Evaluation results are automatically saved to the log_path specified in predict_eval() in rank-reid/rank-reid.py; the default locations are TrackViz/market_result_eval.log and TrackViz/grid_eval.log.

  • GRID evaluation includes rank-1, rank-5, and rank-10 accuracy
  • Market-1501 evaluation includes rank-1 accuracy and mAP. Rank-5 and rank-10 should be computed with the MATLAB code provided by Liang Zheng. (A toy rank-k illustration follows.)
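For reference, rank-k accuracy counts a query as a hit if its true pid appears among the top k ranked gallery pids. A toy illustration (not the repo's evaluation code, which lives in predict_eval()):

def rank_k_accuracy(ranks, query_pids, k):
    # ranks[i]: gallery pids sorted by similarity for query i
    # query_pids[i]: ground-truth pid of query i
    hits = sum(1 for row, pid in zip(ranks, query_pids) if pid in row[:k])
    return float(hits) / len(query_pids)

# e.g. rank-1/5/10: [rank_k_accuracy(ranks, query_pids, k) for k in (1, 5, 10)]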

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{DBLP:conf/cvpr/LvCLY18,
  author    = {Jianming Lv and
               Weihang Chen and
               Qing Li and
               Can Yang},
  title     = {Unsupervised Cross-Dataset Person Re-Identification by Transfer Learning
               of Spatial-Temporal Patterns},
  booktitle = {2018 {IEEE} Conference on Computer Vision and Pattern Recognition,
               {CVPR} 2018, Salt Lake City, UT, USA, June 18-22, 2018},
  pages     = {7948--7956},
  year      = {2018},
  crossref  = {DBLP:conf/cvpr/2018},
  url       = {http://openaccess.thecvf.com/content\_cvpr\_2018/html/Lv\_Unsupervised\_Cross-Dataset\_Person\_CVPR\_2018\_paper.html},
  doi       = {10.1109/CVPR.2018.00829},
  timestamp = {Mon, 07 Jan 2019 17:17:41 +0100},
  biburl    = {https://dblp.org/rec/bib/conf/cvpr/LvCLY18},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
@proceedings{DBLP:conf/cvpr/2018,
  title     = {2018 {IEEE} Conference on Computer Vision and Pattern Recognition,
               {CVPR} 2018, Salt Lake City, UT, USA, June 18-22, 2018},
  publisher = {{IEEE} Computer Society},
  year      = {2018},
  url       = {http://openaccess.thecvf.com/CVPR2018.py},
  timestamp = {Mon, 07 Jan 2019 12:43:48 +0100},
  biburl    = {https://dblp.org/rec/bib/conf/cvpr/2018},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

tfusion's People

Contributors

ahangchen


tfusion's Issues

IndexError when running TrackViz transfer.py

Traceback (most recent call last):
  File "ctrl/transfer.py", line 133, in <module>
    dataset_fusion_transfer()
  File "ctrl/transfer.py", line 126, in dataset_fusion_transfer
    fusion_transfer(source, 'grid-cv%d' % i)
  File "ctrl/transfer.py", line 99, in fusion_transfer
    fusion_test_rank_pids_path, fusion_test_rank_scores_path = st_fusion(source, target)
  File "ctrl/transfer.py", line 48, in st_fusion
    init_strict_img_st_fusion()
  File "/home/msc/Github/person_reid/TFusion/TrackViz/ctrl/img_st_fusion.py", line 50, in init_strict_img_st_fusion
    get_predict_delta_tracks(fusion_param)
  File "./train/st_estim.py", line 91, in get_predict_delta_tracks
    if real_tracks[i][3] == real_tracks[predict_pid][3] and real_tracks[i][1] != real_tracks[predict_pid][1]:
IndexError: list index out of range

Beginner here, please advise

While trying to reproduce your experiment from the downloaded source code, I could not find the file '_softmax_pretrain.h5'. If anyone knows where it comes from, please let me know. Many thanks!

FAQ: about the spatial-temporal model

Q: Because the function calls in the rank-reid part of the TFusion code are rather complex, I don't quite understand how P(ci, cj, Δij | Si = Sj) in the paper is computed.
Specifically, I never found the concrete value of fusion_param['renew_pid_path'] mentioned at line 61 of st_estim.py, nor what it means, which blocks my understanding of the three probability computations in the fusion model. Would you be willing to walk through how the two probabilities P(ci, cj, Δij | Si = Sj) and P(ci, cj, Δij) are computed (ideally with an example)? Taking the Market dataset, computing P(ci, cj, Δij | Si = Sj) requires the visual classifier to judge whether two images contain the same person; doesn't that computation face high complexity?

A:

  • renew_pid_path stores the person ids sorted by image similarity; for the Market1501 training set, this file is a 12936x12936 matrix, stored at the path indicated by renew_pid_path.
  • The condition Si = Sj is controlled by this line of code: only the spatio-temporal distribution of the top-10 most similar samples is counted.
  • st_estim.py only computes the deltas and stores them; the actual probability computation is track_score in track_prob.py, a maximum likelihood estimate (see the sketch after this list), invoked from the *_track_score family of functions in st_filter.py.
  • Regarding the complexity of deciding whether the visual classifier judges two images to contain the same person: since we have to evaluate the accuracy of the image model itself anyway, we already need to compute the image similarity between all pairs of images; the fusion merely reuses these similarities and adds no extra computation.
    Besides, the image similarity computation can be GPU-accelerated and is fast.
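A minimal sketch of the maximum-likelihood scoring idea described above (our own simplification, not the exact track_score code): collect the time deltas observed for one camera pair among visually top-ranked matches, then score a new delta by the fraction of observed deltas falling in a window around it:

def track_score(observed_deltas, query_delta, window=100):
    # observed_deltas: time deltas collected for one (camera_i, camera_j)
    # pair from the top-10 visually similar sample pairs.
    # window: hypothetical smoothing parameter (in frames or seconds).
    if not observed_deltas:
        return 0.0
    near = [d for d in observed_deltas if abs(d - query_delta) <= window]
    return float(len(near)) / len(observed_deltas)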

About the spatial-temporal patterns

Hi, @ahangchen, I read your paper carefully; it seems that the spatio-temporal features capture cross-dataset characteristics. However, the distribution of the time interval is specific to the camera views.
How does the method solve the cross-camera and cross-dataset problems?

Thank you

For discussion in Chinese, please post here

Hello, for viper-market I got visual classifier rank1: 0.256 and fusion rank1: 0.565, but after one iteration I only got classifier rank1: 0.011 and fusion rank1: 0.112. Where could this have gone wrong to produce such a result?

Error when running transfer.py, not sure why

Hello, when running python transfer.py with GRID as source and Market as target, only the fusion on train succeeded; the fusion on test failed.
(image)
What is the reason? Also, is this picture generated by a single run of transfer?
(image)
I only got the fusion for train; the fusions for the other parts were not generated.

M1,M2,M3

@ahangchen, hello! I'd like to ask how M1/M2/M3 in the paper are computed and where in the code they are implemented. I'd also like to ask how the curves "Spatio-temporal pattern in the 'GRID' dataset" and "Spatio-temporal pattern in the 'Market1501' dataset" were plotted, and where the plotting code is. Sorry for the intrusion, and thank you for your answer!

FAQ: about baseline

Q: (1) In Table 4 of your paper, the last row (TFusion-sup) shows a rank-1 accuracy of 73.13%. My question is:

Your paper adopts DLCE as the supervised learning algorithm, and DLCE achieves a rank-1 accuracy of 79.51%. Can I say that your method degrades the performance of the supervised learning method, or that your method is more suitable for the cross-dataset scenario? It would be great if you could give more details about this.

(Z. Zheng, L. Zheng, and Y. Yang. A discriminatively learned cnn embedding for person re-identification. TOMM, 2017)

A: We implemented DLCE in Keras and could not reach the 79.51% they reported, only 75%. Even using their MATLAB source code, we could only reach 77% rank-1 accuracy.
In Table 4, the TFusion-sup rank-1 accuracy is 73.13% because when the vision classifier is very strong, much more powerful than the spatio-temporal model, the fusion model is slightly weaker than the vision classifier alone.
Therefore, our method is more suitable when the visual classifier is weak, as in the cross-dataset scenario and visually hard scenarios like GRID.

GRID as source, Market as target

fusion_param: 'data_folder_path': 'grid_market-train'
When running python transfer, both the direct-transfer log and the fused log for grid_market_train are generated, but for grid_market_test only the direct-transfer log is produced, without the fused log. Why? Isn't the fusion supposed to run on both the training set and the test set?
(image)

Number of people

Hi, assuming the top-1 accuracy of the fusion model is 100%, how do I know how many people are in the unlabeled target test set?

question

(image)
I don't know how to get this directory. Could you help me?

Visual transfer with Market as source and GRID as target

Hello, while reproducing the visual transfer with market1501 as the source dataset and grid as the target dataset, I found a bug: shouldn't line 29 of rank-reid/rank-reid.py read target_gallery_path = target_dataset_path + '/gallery' for the result to be correct? I hope you can verify this. Thanks.

Questions about the .log files

Hello, when running transfer.py I got the following error:
Traceback (most recent call last):
  File "transfer.py", line 131, in <module>
    dataset_fusion_transfer()
  File "transfer.py", line 124, in dataset_fusion_transfer
    fusion_transfer(source, 'grid-cv%d' % i)
  File "transfer.py", line 96, in fusion_transfer
    fusion_test_rank_pids_path, fusion_test_rank_scores_path = st_fusion(source, target)
  File "transfer.py", line 45, in st_fusion
    init_strict_img_st_fusion()
  File "/home/cv2018/Desktop/TFusion-master/TrackViz/ctrl/img_st_fusion.py", line 48, in init_strict_img_st_fusion
    get_predict_delta_tracks(fusion_param)
  File "/home/cv2018/Desktop/TFusion-master/TrackViz/train/st_estim.py", line 57, in get_predict_delta_tracks
    predict_lines = read_lines(renew_pid_path)
  File "/home/cv2018/Desktop/TFusion-master/TrackViz/util/file_helper.py", line 15, in read_lines
    with open(path) as f:
IOError: [Errno 2] No such file or directory: '/home/cv2018/Desktop/TFusion-master/TrackViz/data/market_market-test/renew_pid.log'
Is renew_pid.log something I have to provide manually? Please advise!

About the proof in the paper's appendix

After reading your paper, a few points are unclear to me:
1. How is Equation 26 derived, and why are Equations 23 and 24 set equal? One reflects the probability under the image model and the other under the fusion model; the two models mutually promote each other, so why can they be equated?
2. What is the reason for the subsequent gradient derivation?

About 'No such layer: avg_pool'

Hello, when I run pair_train.py it reports 'No such layer: avg_pool'. How can I solve this?

root@9dde0b706344:/home/hyl/TFusion/rank-reid/pretrain# python pair_train.py
/usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Traceback (most recent call last):
  File "pair_train.py", line 228, in <module>
    pair_pretrain_on_dataset(source)
  File "pair_train.py", line 219, in pair_pretrain_on_dataset
    batch_size=batch_size, num_classes=class_count
  File "pair_train.py", line 155, in pair_tune
    model = pair_model(source_model_path, num_classes)
  File "pair_train.py", line 123, in pair_model
    base_model = Model(inputs=softmax_model.input, outputs=[softmax_model.get_layer('avg_pool').output], name='resnet50')
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/network.py", line 365, in get_layer
    raise ValueError('No such layer: ' + name)
ValueError: No such layer: avg_pool

Incomplete files generated on test

Hello,
(image)
(image)
Why are the fused results not generated for these two?
Also,
(image)
how should this be modified to evaluate the fused results?

About generating the .log files

Hello, which programs generate these .log files?
(image)
Also, there are quite a few files to run; could you please indicate the order in which to run them, following the numbering in the picture below?
(image)
Many thanks!

Running pair_train.py fails

After running train.py and obtaining the .h5 file, running pair_train.py produces the error below. I have tried many approaches without success; please help!

Traceback (most recent call last):
  File "/home/chh/Code/TFusion-master/rank-reid/pretrain/pair_train.py", line 236, in <module>
    pair_pretrain_on_dataset(source)
  File "/home/chh/Code/TFusion-master/rank-reid/pretrain/pair_train.py", line 224, in pair_pretrain_on_dataset
    batch_size=batch_size, num_classes=class_count
  File "/home/chh/Code/TFusion-master/rank-reid/pretrain/pair_train.py", line 153, in pair_tune
    model = pair_model(source_model_path, num_classes)
  File "/home/chh/Code/TFusion-master/rank-reid/pretrain/pair_train.py", line 124, in pair_model
    feature1 = Flatten()(base_model(img1))
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 619, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 2085, in call
    output_tensors, _, _ = self.run_internal_graph(inputs, masks)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 2236, in run_internal_graph
    output_tensors = _to_list(layer.call(computed_tensor, **kwargs))
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/layers/normalization.py", line 193, in call
    self.momentum),
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1005, in moving_average_update
    x, value, momentum, zero_debias=True)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/moving_averages.py", line 71, in assign_moving_average
    update_delta = _zero_debias(variable, value, decay)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/moving_averages.py", line 181, in _zero_debias
    biased_var = variable_scope.get_variable("biased", initializer=biased_initializer, trainable=False)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
    custom_getter=custom_getter)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
    custom_getter=custom_getter)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 348, in get_variable
    validate_shape=validate_shape)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
    caching_device=caching_device, validate_shape=validate_shape)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 639, in _get_single_variable
    name, "".join(traceback.format_list(tb))))
ValueError: Variable bn_conv1/moving_mean/biased already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1005, in moving_average_update
    x, value, momentum, zero_debias=True)
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/layers/normalization.py", line 193, in call
    self.momentum),
  File "/home/chh/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 619, in __call__
    output = self.call(inputs, **kwargs)

FAQ: about the dataset

Q: How exactly are the GRID cross-validation groups split? Do I need to split them manually, or is some other operation required?

A: Split them following the instructions on the GRID official site; my pre-split result is in this directory.

Q: Have you run experiments on the DukeMTMC-ReID dataset since the paper was published?
A: Yes, we are working on it; the improvement is about the same as on Market1501.
