
mtcnn's Introduction

Introduction

This repository is an implementation of MTCNN in MXNet.

  • core: core routines for MTCNN training and testing.
  • tools: utilities for training and testing.
  • data: see Data Folder Structure below; a dataset usually contains images and imglists.
  • model: folder for saving the training symbol and model.
  • prepare_data: scripts for generating training data for pnet, rnet and onet.

Useful information

You're required to modify mxnet/src/regression_output-inl.h according to mxnet_diff.patch before using the code for training.

  • Dataset format: the images used for training are stored in ./data/dataset_name/images/, and the annotation file is placed in ./data/dataset_name/imglists/.

    • For training: each line of the annotation file describes one training sample.
      The format is: [path to image] [cls_label] [bbox_label]
      cls_label: 1 for positive, 0 for negative, -1 for part face.
      bbox_label holds the offsets of x1, y1, x2, y2, calculated as (xgt - x) / width for the x-coordinates and (ygt - y) / height for the y-coordinates, where xgt and ygt are the ground-truth coordinates (a sketch follows this list).
      An example would be 12/positive/28 1 -0.05 0.11 -0.05 -0.11.
      Note that all fields are separated by spaces.

    • For testing: similar to training, but only the path to the image is needed.

  • Data Folder Structure (assuming the root is data)

cache (created by imdb)
-- name + image set + gt_roidb
-- results (created by detection and evaluation)
mtcnn # contains images and anno for training mtcnn
-- images
---- 12 (images of size 12 x 12, used by pnet)
---- 24 (images of size 24 x 24, used by rnet)
---- 48 (images of size 48 x 48, used by onet)
-- imglists 
---- train_12.txt
---- train_24.txt
---- train_48.txt
custom (datasets for testing) 
-- images
-- imglists
---- image_set.txt
  • Scripts to generate training data (from the WIDER FACE dataset)
    • run wider_annotations/transform.m (or transform.py) to get the annotation file in the format we need.
    • gen_pnet_data.py: obtains training samples for pnet.
    • gen_hard_example.py: prepares hard examples. You can set test_mode to "pnet" to get training data for rnet, or to "rnet" to get training data for onet.
    • gen_imglist.py: randomly samples images generated by gen_pnet_data.py or gen_hard_example.py to form the training set.
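
As referenced in the Dataset format bullet above, here is a minimal sketch of the offset computation (the function name and box tuples are illustrative, not taken from this repo's code):

    # Hypothetical sketch of the bbox_label formula above; names are illustrative.
    def bbox_offsets(crop_box, gt_box):
        """Offsets of (x1, y1, x2, y2): (ground truth - crop) / crop size."""
        x1, y1, x2, y2 = crop_box
        gx1, gy1, gx2, gy2 = gt_box
        width, height = x2 - x1, y2 - y1
        return ((gx1 - x1) / float(width), (gy1 - y1) / float(height),
                (gx2 - x2) / float(width), (gy2 - y2) / float(height))

    # e.g. a 20x20 crop (10, 10, 30, 30) against ground truth (9, 12.2, 29, 27.8)
    # yields (-0.05, 0.11, -0.05, -0.11), matching the example line above.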

Results

(figure: detection results)

License

MIT LICENSE

Reference

Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, Yu Qiao, "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks," IEEE Signal Processing Letters.

mtcnn's People

Contributors

kuaikuaikim, seanlinx


mtcnn's Issues

Model parameters

Could you please say what training parameters you used? I need them because I need to train all 3 networks from scratch.

Where is input size of Pnet specified?

Hi Seanlinx,

In the testing phase, I noticed that the PNet input size is not specified; only the network model weights are passed in args:

PNet = FcnDetector(P_Net("test"), ctx, args, auxs)

However, given a testing image, the code computes the exact classification score map (cls_map) size without knowing that the PNet input size is 12x12x3. Could you please point out where this input size is indicated? Thanks.
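
A hedged guess at the answer, based on the standard MTCNN PNet layout (3x3 conv, 2x2/2 max pool, two more 3x3 convs, 1x1 heads; not verified against this repo's symbol.py): PNet is fully convolutional, so no input size has to be declared; 12x12 is just the receptive field that produces a single output score, and the cls_map size follows from the layer arithmetic:

    # Sketch: output map size of a fully convolutional PNet-like stack.
    def pnet_map_size(h, w):
        def one(n):
            n -= 2            # 3x3 conv, no padding
            n = (n + 1) // 2  # 2x2 max pool, stride 2, ceil mode
            n -= 2            # 3x3 conv
            n -= 2            # 3x3 conv
            return n          # 1x1 heads keep the size
        return one(h), one(w)

    print(pnet_map_size(12, 12))  # (1, 1): a 12x12 input yields one score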

Ratio of sample

The pos:part:neg ratio you used is about 1:1:2. Why? The paper says the ratio is 1:1:3.

image annotations flip?

@Seanlinx
Line 152 in imdb.py reads m_bbox[0], m_bbox[2] = -m_bbox[2], -m_bbox[0]. It seems the box is flipped. Is the flip necessary when training the model?
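
A hedged reading of that line (not confirmed by the author): when an image is mirrored horizontally, the x-offsets of the left and right box edges swap roles and change sign, while the y-offsets are untouched. A toy check:

    # dx1 = (gx1 - x1)/w and dx2 = (gx2 - x2)/w for a crop of width w inside
    # an image of width W; mirroring maps x -> W - 1 - x for both boxes.
    W, w = 100, 20.0
    x1, x2 = 10, 29                            # crop edges
    gx1, gx2 = 8, 31                           # ground-truth edges
    dx1, dx2 = (gx1 - x1) / w, (gx2 - x2) / w
    fx1, fx2 = W - 1 - x2, W - 1 - x1          # mirrored crop
    fgx1, fgx2 = W - 1 - gx2, W - 1 - gx1      # mirrored ground truth
    print((fgx1 - fx1) / w, (fgx2 - fx2) / w)  # equals (-dx2, -dx1)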

I get an error: "convolution4_weights is not in the arguments" when I load the newly trained models

I finished the training and testing process and got correct results before changing the network structure, but I get the error ValueError: Find name "convolution4_weights" that is not in the arguments when I load the new models trained with the changed network structure.
I changed Pnet, Rnet and Onet in the same way, and located the error at RNet = Detector(R_Net("test"), 24, batch_size[1], ctx, args, auxs) in demo.py, which leads to executor.copy_params_from(self.arg_params, self.aux_params) in detector.py.
I tried replacing that call with executor.copy_params_from(self.arg_params, self.aux_params, allow_extra_params=True), and the new models then load with no errors, but boxes_c turns out to be None and copy() cannot be called on it at mtcnn_detector.detect_rnet(img, boxes_c) in demo.py.
Has anybody encountered this error? I also want to know whether detector.py has to be changed to match the changed network structure in symbol.py.

No such file or directory:'../data/cache/mtcnn_pnet/train_12_gt_roidb.pkl'

Hello, I met a problem when running "python train_P_net.py" on Windows. It prints the following error:

C:\Users\user\Desktop\V2_MXNet\example>python train_P_net.py
Called with argument:
Namespace(begin_epoch=0, dataset_path='../data/mtcnn', end_epoch=16, epoch=0, frequent=200, gpu_ids='0', image_set='pnet/train_12', lr=0.01, prefix='model/pnet', pretrained='model/pnet', resume=False, root_path='../data')
Traceback (most recent call last):
File "train_P_net.py", line 56, in
args.pretrained, args.epoch,args.begin_epoch, args.end_epoch, args.frequent, args.lr, args.resume)
File "train_P_net.py", line 14, in train_P_net
gt_imdb = imdb.gt_imdb()
File "..\core\imdb.py", line 72, in gt_imdb
with open(cache_file, 'wb') as f:
IOError: [Errno 2] No such file or directory: '../data\cache\mtcnn_pnet/train_12_gt_roidb.pkl'

It seems train_12_gt_roidb.pkl does not exist. Could you tell me how to generate the .pkl file? Thanks.
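
A hedged guess at a workaround: according to the Data Folder Structure section above, the .pkl is a cache that imdb.py writes on its first run, so the IOError points at a missing cache directory rather than a missing input file. Creating the directory before the open(cache_file, 'wb') call should help:

    import os

    # Ensure the cache directory exists before imdb.py pickles the gt roidb
    # into it (path follows the traceback above).
    cache_dir = os.path.join('..', 'data', 'cache', 'mtcnn_pnet')
    if not os.path.exists(cache_dir):
        os.makedirs(cache_dir)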

Debug info NaN at Train-LogLoss and Train_BBOX_MSE

Hi Seanlinx,

I followed your instructions to generate training data for the 3 CNN networks (P, R, O_Net) from the WIDER FACE training dataset. Everything seemed to work well, since no error appeared.

However, when I use demo.py (with my own trained model) to run test on some photos, error appears:

Called with argument:
Namespace(batch_size=[2048, 256, 16], epoch=[16, 16, 16], gpu_id=-1, min_face=40, prefix=['model/pnet', 'model/rnet', 'model/onet'], slide_window=False, stride=2, thresh=[0.5, 0.5, 0.7])
/Users/dhuynh/Documents/TestCode/mtcnn/core/MtcnnDetector.py:357: RuntimeWarning: invalid value encountered in greater
  keep_inds = np.where(cls_scores > self.thresh[1])[0]
Traceback (most recent call last):
  File "demo.py", line 94, in <module>
    args.stride, args.slide_window)
  File "demo.py", line 48, in test_net
    boxes, boxes_c = mtcnn_detector.detect_onet(img, boxes_c)
  File "/Users/dhuynh/Documents/TestCode/mtcnn/core/MtcnnDetector.py", line 391, in detect_onet
    dets = self.convert_to_square(dets)
  File "/Users/dhuynh/Documents/TestCode/mtcnn/core/MtcnnDetector.py", line 47, in convert_to_square
    square_bbox = bbox.copy()
AttributeError: 'NoneType' object has no attribute 'copy'

I checked the debug info of the training phase and found that Train-LogLoss=nan and Train-BBOX_MSE=nan the whole time.

INFO:root:Epoch[2] Batch [1600] Speed: 4544.95 samples/sec      Train-Accuracy=0.813565
INFO:root:Epoch[2] Batch [1600] Speed: 4544.95 samples/sec      Train-LogLoss=nan
INFO:root:Epoch[2] Batch [1600] Speed: 4544.95 samples/sec      Train-BBOX_MSE=nan

Your trained model still works perfectly, so it seems the error stems from my training phase, but I cannot figure out what I did wrong. Could you please help me out? Thanks.

detect speed

I found the detection speed is about 1.0 s/image (0.3 s/image for some images). Is that expected?

run demo failed

I tried to run 'python demo.py', and the following error occurred. It happens at the line 'v.as_in_context(ctx)' in the function convert_context(params, ctx):

error info:
/deep/work/incubator-mxnet/dmlc-core/include/dmlc/./logging.h:308: 14:40:34] /deep/work/incubator-mxnet/dmlc-core/include/dmlc/./logging.h:308: [14:40:34] /deep/work/incubator-mxnet/mshadow/mshadow/./tensor_gpu-inl.h:35: Check failed: e == cudaSuccess CUDA: unknown error

I am using MXNet version 0.11.1. What is the problem with convert_context()?

training problem

I ran your training code on Ubuntu with the CPU, but I encountered a problem: the process's CPU usage is 0. How can I solve this?

wider face annotations format (x1,y1, width, height)?

Hi @Seanlinx, I'm new to your mtcnn-master. In your gen_pnet_data.py the format of the WIDER FACE annotations is (x1, y1, x2, y2), but one of the contributors of the WIDER FACE dataset, yangshuo, told me the format of the WIDER FACE annotations is (x1, y1, width, height).
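
If the raw annotations really are (x1, y1, width, height), a small conversion (hedged sketch; variable names are illustrative) yields the corner form the scripts expect:

    # Convert a WIDER-style (x1, y1, w, h) box to corner form (x1, y1, x2, y2).
    # Whether to subtract 1 depends on the pixel-coordinate convention.
    def to_corners(x1, y1, w, h):
        return x1, y1, x1 + w - 1, y1 + h - 1

    print(to_corners(10, 20, 30, 40))  # (10, 20, 39, 59)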

Derivation of bbox_label

Hey @Seanlinx.
Just want to say that I found a link to this repo on this page.

I'm interested in training a model for a custom dataset (not the faces), but I'm not sure what you meant by:
bbox_label are the offset of x1, y1, x2, y2, calculated by (xgt(ygt) - x(y)) / width(height)

I'm wondering what xgt and ygt represent, and which width and height are meant in this case (I'm not familiar with the WIDER FACE dataset).

What I have are the coordinates of the upper-left corner and the lower-right corner of the bbox rectangles.

Please help me make sense of xgt and ygt.

Also: I started off by installing mxnet 0.9.5 using pip... yet somewhere you ask us to "modify mxnet/src/regression_output-inl.h according to mxnet_diff.patch before using the code for training."

Does this training require me to use your cloned version of the mxnet repo?

P_net training does not use the negative dataset

When I read your code train_P_net.py, I found that you use the function gt_imdb() in train_P_net. It only returns positive + part faces, without the negative dataset. However, PNet has to distinguish face from non-face, so both neg and pos samples should be in the training data...

I am a bit confused and hope for your early reply.

Training Issue

Hello Lin

I have a couple of questions about training the network with data generated by gen_pnet_data.py

I noticed that data/mtcnn/imglists/train_12.txt mixes both positive and negative images and their ground truth. I am wondering how you deal with the negative bounding box, which is 0?

For example, if the regression result is [0.1, 0.2, 0.3, 0.4] and the negative bbox ground truth is [0], should I make it [0, 0, 0, 0]? Or should I train bbox only with positive and part data? However, since we are training the classification and bbox at the same time, I am guessing we should train on all the data at the same time, right?

Best
HZ
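
One common way to handle this (a hedged sketch of the general technique, not necessarily what this repo does) is to keep a dummy [0, 0, 0, 0] regression target for negatives and weight the bbox loss with a per-sample mask derived from cls_label, so negatives never contribute to the regression gradient while all samples are still trained jointly:

    import numpy as np

    # cls_label follows the README convention: 1 positive, 0 negative, -1 part.
    cls_label = np.array([1, 0, -1, 1])
    bbox_pred = np.random.rand(4, 4)
    bbox_target = np.zeros((4, 4))               # dummy zeros for negatives

    valid = (cls_label != 0).astype(np.float64)  # only pos/part regress
    per_sample = np.sum((bbox_pred - bbox_target) ** 2, axis=1)
    bbox_loss = np.sum(per_sample * valid) / max(valid.sum(), 1.0)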

"Cannot find argument 'out_grad'" when using train_P_net.py

Hi Seanlinx,

I run into this problem when trying to use train_P_net.py:

Called with argument:
Namespace(begin_epoch=0, dataset_path='data/mtcnn', end_epoch=16, epoch=0, frequent=200, gpu_ids='0', image_set='train_12', lr=0.01, prefix='model/pnet', pretrained='model/pnet', resume=False, root_path='data')
mtcnn_train_12 gt imdb loaded from data/cache/mtcnn_train_12_gt_roidb.pkl
append flipped images to imdb 1545850
Traceback (most recent call last):
  File "train_P_net.py", line 54, in <module>
    args.begin_epoch, args.end_epoch, args.frequent, args.lr, args.resume)
  File "train_P_net.py", line 13, in train_P_net
    sym = P_Net()
  File "/home/dang/test/mtcnn/core/symbol.py", line 38, in P_Net
    grad_scale=1, out_grad=True, name="bbox_pred")
  File "mxnet/cython/symbol.pyx", line 151, in symbol._make_atomic_symbol_function.creator (mxnet/cython/symbol.cpp:3591)
  File "mxnet/cython/base.pyi", line 36, in symbol.CALL (mxnet/cython/symbol.cpp:1624)
mxnet.base.MXNetError: Cannot find argument 'out_grad', Possible Arguments:
----------------
grad_scale : float, optional, default=1
    Scale the gradient by a float factor
, in operator LinearRegressionOutput(name="", out_grad="True", grad_scale="1")

I modified regression_output-inl.h according to mxnet_diff.patch (git apply mxnet_diff.patch), but the issue still occurs. Could you please help me out? Thanks.

Quick question regarding threshold and running on batch of images

  1. What are the recommended thresholds for nets in your framework?
    parser.add_argument('--thresh', dest='thresh', help='list of thresh for pnet, rnet, onet', nargs="+",default=[0.6, 0.7, 0.7], type=float)

  2. Does the argument below set the net to run on a batch of images?

parser.add_argument('--batch_size', dest='batch_size', help='list of batch size used in prediction', nargs="+", default=[2048, 256, 16], type=int)

Where is back-propagation blocked?

For both positive and negative images we calculate the cls_prob and bbox_pred losses; however, the bbox_pred weights should not be updated for negative images, which requires blocking back-propagation.

I have read the source code, but I could not find where back-propagation is blocked.

Where can I find it?

Thanks~
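
For reference, this kind of blocking is usually done in a custom operator's backward pass by zeroing the bbox gradient rows of negative samples before they are written to in_grad; a hedged NumPy-level sketch of the technique (not this repo's actual op):

    import numpy as np

    def block_bbox_grad(bbox_grad, cls_label):
        """Zero the regression gradient wherever cls_label == 0 (negatives)."""
        keep = (cls_label != 0).astype(bbox_grad.dtype)
        return bbox_grad * keep[:, None]

    grads = np.ones((3, 4))
    labels = np.array([1, 0, -1])
    print(block_bbox_grad(grads, labels))  # the label-0 row is all zeros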

Issues in making data for R_Net

Dear Lin,
Thank you for your great work. It is very helpful.

I have trouble preparing training data for R-Net training, i.e. the usage of gen_hard_example.py.

The code in line 153 is imdb = IMDB("wider", image_set, root_path, dataset_path, 'test'),
but there is no ground-truth info for the WIDER test dataset in the file anno.txt.
So I changed to the 'train' dataset. This time, a CUDA out-of-memory issue occurred after processing some images.

...
2200 images done
[11:29:17] /data/code/mxnet/dmlc-core/include/dmlc/./logging.h:235: [11:29:17] src/storage/./pooled_storage_manager.h:79: cudaMalloc failed: out of memory
Traceback (most recent call last):
File "/data/code/mtcnn/prepare_data/gen_hard_example.py", line 228, in
args.slide_window, args.shuffle, args.vis)
File "/data/code/mtcnn/prepare_data/gen_hard_example.py", line 165, in test_net
detections = mtcnn_detector.detect_face(imdb, test_data, vis=vis)
File "/data/code/mtcnn/core/MtcnnDetector.py", line 457, in detect_face
boxes, boxes_c = self.detect_pnet(im)
File "/data/code/mtcnn/core/MtcnnDetector.py", line 279, in detect_pnet
cls_map, reg = self.pnet_detector.predict(im_resized)
File "/data/code/mtcnn/core/fcn_detector.py", line 25, in predict
grad_req='null', aux_states=self.aux_params)
File "/opt/anaconda/lib/python2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/symbol.py", line 852, in bind
ctypes.byref(handle)))
File "/opt/anaconda/lib/python2.7/site-packages/mxnet-0.7.0-py2.7.egg/mxnet/base.py", line 77, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [11:29:17] src/storage/./pooled_storage_manager.h:79: cudaMalloc failed: out of memory

Am I missing something? Thank you.

load_annotations getting slower

The load_annotations method loads data from the annotation file, and I find that after appending around 380,000+ entries into imdb it gets really slow. Have you ever met the same phenomenon before?

dataset

Thanks for sharing your code.
I have a few questions about training the model.
I tried to train P/R/O net with a dataset of 100,000 samples including pos, part and neg (2:3:6). However, the results seem bad. I only changed the learning rate to 0.00001; the other params were kept. Is my dataset too small for this problem? If possible, can you share your dataset?

Multi-GPU training

Hi Seanlinx,

Thank you for your wonderful work.
I got an error while trying to enable GPU training:

//=====
Traceback (most recent call last):
File "gen_hard_example.py", line 229, in
args.slide_window, args.shuffle, args.vis)
File "gen_hard_example.py", line 167, in test_net
detections = mtcnn_detector.detect_face(imdb, test_data, vis=vis)
File "/home/dang/test/mtcnn-train/core/MtcnnDetector.py", line 456, in detect_face
boxes, boxes_c = self.detect_pnet(im)
File "/home/dang/test/mtcnn-train/core/MtcnnDetector.py", line 278, in detect_pnet
cls_map, reg = self.pnet_detector.predict(im_resized)
File "/home/dang/test/mtcnn-train/core/fcn_detector.py", line 28, in predict
grad_req='null', aux_states=self.aux_params)
File "/usr/local/lib/python2.7/dist-packages/mxnet-0.9.2-py2.7-linux-x86_64.egg/mxnet/symbol.py", line 926, in bind
ctypes.byref(handle)))
File "/usr/local/lib/python2.7/dist-packages/mxnet-0.9.2-py2.7-linux-x86_64.egg/mxnet/base.py", line 75, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [10:57:05] src/executor/graph_executor.cc:240: Check failed: x.ctx() == default_ctx Input array is in cpu(0) while binding with ctx=gpu(0). All arguments must be in global context (gpu(0)) unless group2ctx is specified for cross-device graph.
//====

I guess the issue comes from fcn_detector.py
self.executor = self.symbol.bind(self.ctx, self.arg_params, args_grad=None,
grad_req='null', aux_states=self.aux_params)

and arg_params are all cpu(0):
//====
{'conv4_1_bias': <NDArray 2 @cpu(0)>, 'conv4_2_bias': <NDArray 4 @cpu(0)>, 'prelu1_gamma': <NDArray 10 @cpu(0)>, 'conv1_bias': <NDArray 10 @cpu(0)>, 'conv3_weight': <NDArray 32x16x3x3 @cpu(0)>, 'conv2_bias': <NDArray 16 @cpu(0)>, 'conv2_weight': <NDArray 16x10x3x3 @cpu(0)>, 'conv1_weight': <NDArray 10x3x3x3 @cpu(0)>, 'conv4_2_weight': <NDArray 4x32x1x1 @cpu(0)>, 'conv4_1_weight': <NDArray 2x32x1x1 @cpu(0)>, 'data': <NDArray 1x3x692x512 @gpu(0)>, 'conv3_bias': <NDArray 32 @cpu(0)>, 'prelu2_gamma': <NDArray 16 @cpu(0)>, 'prelu3_gamma': <NDArray 32 @cpu(0)>}
//====

However, I checked that self.ctx is always gpu(0) throughout the code. Do you have any idea how to convert the data to the GPU context instead of the CPU? Thanks.
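
A hedged sketch of one way to do the conversion, mirroring the convert_context(params, ctx) helper mentioned in an issue above (as_in_context is a standard NDArray method):

    import mxnet as mx

    def convert_context(params, ctx):
        """Copy every NDArray in params to ctx before calling bind()."""
        return {k: v.as_in_context(ctx) for k, v in params.items()}

    # e.g. arg_params = convert_context(arg_params, mx.gpu(0))
    #      aux_params = convert_context(aux_params, mx.gpu(0))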

why convert_to_square(dets)?

In gen_hard_example, convert_to_square is used. Why is converting to a square needed, given that a bounding box is a rectangle and PNet does not use this function?
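
For context, a hedged sketch of what convert_to_square typically does in MTCNN implementations (RNet and ONet take fixed square inputs of 24x24 and 48x48, so rectangular proposals are grown to squares around their centers before cropping and resizing, avoiding aspect-ratio distortion):

    import numpy as np

    def convert_to_square(bbox):
        """Grow each (x1, y1, x2, y2) row to a square of side max(w, h)."""
        square = bbox.copy()
        w = bbox[:, 2] - bbox[:, 0] + 1
        h = bbox[:, 3] - bbox[:, 1] + 1
        side = np.maximum(w, h)
        square[:, 0] = bbox[:, 0] + w * 0.5 - side * 0.5
        square[:, 1] = bbox[:, 1] + h * 0.5 - side * 0.5
        square[:, 2] = square[:, 0] + side - 1
        square[:, 3] = square[:, 1] + side - 1
        return square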

How to compute the loss diff in negativemining op

@Seanlinx Hi Seanlinx, I have some questions about your negativemining op. Theoretically, the CLS loss can be written as 1(x) * log(x) * (-1/ohem_keep), where x represents the tuple of cls_label and the softmax op's output (x = (label, prob)) and 1(x) is the indicator function, so the bottom diff should be 1(x) * (1/x) * (-1/ohem_keep), but you only compute 1(x) * (-1/ohem_keep). Meanwhile, the BBOX loss can be written as x^2 / valid_num, so the diff should be x * 2 / valid_num, but you only compute 1 / valid_num. Can you give me your advice?
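
For what it's worth, one common resolution of this kind of apparent discrepancy (an assumption about the op, not verified against this repo) is that the backward is taken with respect to the softmax input z rather than the output p, where the 1/p factor cancels against the softmax Jacobian:

    L_{\mathrm{cls}} = -\frac{1}{N}\sum_{i=1}^{N}\log p_{i,y_i}, \qquad
    \frac{\partial L_{\mathrm{cls}}}{\partial p_{i,y_i}} = -\frac{1}{N\,p_{i,y_i}}, \qquad
    \frac{\partial L_{\mathrm{cls}}}{\partial z_{i,c}} = \frac{1}{N}\left(p_{i,c} - \mathbf{1}\{c = y_i\}\right)

with N = ohem_keep; a gradient without the explicit 1/p term is therefore consistent with a fused softmax + cross-entropy backward.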

gen_hard_example test_mode Pnet problem

@Seanlinx I just want to use gen_hard_example.py --test_mode pnet to get training data for the next net, RNet. But the detections are empty, because the input size for PNet is 12x12 while min_face_size defaults to 16. Should I set min_face_size to 12?
current_scale = float(net_size) / self.min_face_size
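
For context, a hedged sketch of the standard MTCNN image pyramid (the 0.709 factor is the commonly used value, not necessarily this repo's): the first scale is net_size / min_face_size, so with the default min_face_size = 16 a 16-pixel face fills the 12x12 receptive field, and lowering min_face_size to 12 only adds finer scales for smaller faces:

    # Standard MTCNN scale pyramid (sketch; parameter values are assumptions).
    def pyramid_scales(h, w, net_size=12, min_face_size=16, factor=0.709):
        scale = float(net_size) / min_face_size
        min_side = min(h, w) * scale
        scales = []
        while min_side >= net_size:
            scales.append(scale)
            scale *= factor
            min_side *= factor
        return scales

    print(pyramid_scales(480, 640))  # coarse-to-fine scales for a 480x640 image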

anno.txt

Hi, dear author:
Thank you for sharing the code. Recently I have been studying face detection. Could you share /wider_annotations/anno.txt with me?
Thank you very much!

pred_delta = 10^25

Hi , Seanlinx:
When I train the P net, I find that from the second DataBatch onwards the conv4_2 op's output (pred_delta) is almost 10^25, so the bbox MSE overflows the float32 type. However, I cannot step into any op to check the input and output data, not even the negativemining op you designed yourself. I set a breakpoint in the forward function definition, but when I debug the train_P_net script with PyCharm it does not stop at the breakpoint.
So why does the pred_delta matrix become so big, and why can't I step into the op's forward or backward? These questions have confused me for 2 weeks; I would appreciate some help.

Operator _zeros cannot be run

I've encountered this problem with the latest version of MXNet:

mxnet.base.MXNetError: [21:26:49] src/c_api/c_api_ndarray.cc:274:
Operator _zeros cannot be run; requires at least one of FCompute<xpu>, NDArrayFunction, FCreateOperator be registered

Do you have any idea how to solve it?
Thank you~

How can I train landmarks?

@Seanlinx Sorry to trouble you. Do you have any suggestions for training the facial landmarks? The WIDER dataset does not contain landmark annotations; do you know of a dataset with landmarks? Can I implement landmark training with the same approach as the bounding-box regression?

Training mtcnn on KITTI for vehicle detection

Hi @Seanlinx, I have trained mtcnn on KITTI for vehicle detection, using only the samples that contain cars (car detection only). I found that the detection result is quite bad. When I use the trained P_net to produce R_net's training samples, pos:part:neg = 0.8:10:20, so the positive samples' percentage is unbalanced. What is your advice for changing it?

gen_pnet_data.py run issues

gen_pnet_data.py:
cropped_im = img[ny1 : ny1 + size, nx1 : nx1 + size, :]
TypeError: slice indices must be integers or None or have an __index__ method.
ny1 is a float.
@Seanlinx
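
A hedged fix: newer NumPy versions raise exactly this TypeError for float slice indices, so casting the crop coordinates to int before slicing should resolve it:

    # Slice bounds must be integers; cast the float crop coordinates first.
    def crop(img, nx1, ny1, size):
        nx1, ny1, size = int(nx1), int(ny1), int(size)
        return img[ny1 : ny1 + size, nx1 : nx1 + size, :]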

facial landmark regression

Hello @Seanlinx. According to the issue, I finished facial landmarks based on your Onet, but the location results are not very robust. They look very good on some pictures, but sometimes the deviation is big. How should I adjust the hyperparameters to optimize the result?
PS: I have already tried methods such as: learning rate (adjusted to 0.001), dropout (0.25), larger weight decay...

About plotting the PR curve

Hi! I'd like to ask how the PR curve provided by the author was produced. I took the bbox and score output by O-Net and ran them through the WIDER FACE eval-tools, and the result was very poor... So how should the bbox and score be chosen?

ROC question

Hi:
I notice that you trained mtcnn and released the model. Can you tell me your ROC?
Compared with the original model, how does your model perform?

Thanks

context check error

When running demo.py, I got the following error. All the contexts are on the GPU. How can I resolve the issue? Thanks.

Called with argument:
Namespace(batch_size=[2048, 256, 16], epoch=[16, 16, 16], gpu_id=0, min_face=40, prefix=['model/pnet', 'model/rnet', 'model/onet'], slide_window=False, stride=2, thresh=[0.5, 0.5, 0.7])

[14:18:51] /local/mnt/workspace/szhuo/mxnet/dmlc-core/include/dmlc/./logging.h:300: [14:18:51] src/executor/graph_executor.cc:240: Check failed: x.ctx() == default_ctx Input array is in cpu(0) while binding with ctx=gpu(0). All arguments must be in global context (gpu(0)) unless group2ctx is specified for cross-device graph.

keypoint regression

Hi:
Your model only detects the face position; there is no facial landmark position. Can you tell me the accuracy of the facial landmarks?
Thanks

Landmark detection training

I was looking at the code and couldn't find any mention of the CelebA dataset used by the paper for landmark detection training. Is this being trained?

Where is detections.pkl?

In gen_hard_example.py there is boxes = cPickle.load(open(os.path.join(save_path, 'detections.pkl'), 'r')). Where is detections.pkl? Looking forward to your reply, thanks!

License

Hello @Seanlinx , good work with the implementation!
Could you please add information about the license? Is it the same as original MTCNN (MIT License)?

Augmentation of data

I noted that you spent little effort on data augmentation, and this is the end of the pnet training:

INFO:root:Epoch[15] Train-Accuracy=0.931125
INFO:root:Epoch[15] Train-LogLoss=0.206840
INFO:root:Epoch[15] Train-BBOX_MSE=0.015009
INFO:root:Epoch[15] Time cost=118.500
INFO:root:Saved checkpoint to "../model/pnet-0016.params"

Is this a reproduction of your final experiment? And will this route achieve the final result shown in the graph?
