temporal-segment-networks's Introduction

Temporal Segment Networks (TSN)

We have released MMAction, a full-fledged action understanding toolbox based on PyTorch. It includes implementations of TSN as well as other state-of-the-art frameworks for various tasks. We highly recommend you switch to it. This repo will continue to be supported for Caffe users.

This repository holds the codes and models for the papers

Temporal Segment Networks for Action Recognition in Videos, Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool, TPAMI, 2018.

[Arxiv Preprint]

Temporal Segment Networks: Towards Good Practices for Deep Action Recognition, Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool, ECCV 2016, Amsterdam, Netherlands.

[Arxiv Preprint]

News & Updates

Jul. 20, 2018 - For those having trouble building the TSN toolkit, we have provided a pre-built Docker image you can use. Download it from DockerHub. It contains OpenCV, Caffe, DenseFlow, and this codebase, all built and ready to use with NVIDIA-Docker.

Sep. 8, 2017 - We released TSN models trained on the Kinetics dataset with 76.6% single model top-1 accuracy. Find the model weights and transfer learning experiment results on the website.

Aug. 10, 2017 - An experimental PyTorch implementation of TSN is released on GitHub.

Nov. 5, 2016 - The project page for TSN is online; see the website.

Sep. 14, 2016 - We fixed a legacy bug in Caffe. Some parameters in TSN training are affected. You are advised to update to the latest version.

Below is guidance for reproducing the reported results and exploring further.

Usage Guide

Prerequisites


There are a few dependencies to run the code. The major libraries we use are OpenCV (with video IO support), our modified Caffe, and the dense_flow toolkit.

The codebase is written in Python. We recommend the Anaconda Python distribution. Matlab scripts are provided for some critical steps like video-level testing.

The most straightforward way to install these libraries is to run the build_all.sh script.

Besides software, GPU(s) are required for optical flow extraction and model training. Our Caffe modification supports highly efficient parallel training. Just throw in as many GPUs as you like and enjoy.

Code & Data Preparation

Get the code


Use git to clone this repository and its submodules

git clone --recursive https://github.com/yjxiong/temporal-segment-networks

Then run the building scripts to build the libraries.

bash build_all.sh

It will build Caffe and dense_flow. Since we need OpenCV to have Video IO, which is absent in most default installations, it will also download and build a local installation of OpenCV and use its Python interfaces.
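
If you want to verify that the locally built OpenCV really has video IO before running the full pipeline, a quick check along the following lines can help. This is a minimal sketch; the video path is a placeholder for any clip you have downloaded.

import cv2

# Placeholder path -- point it at any downloaded clip.
cap = cv2.VideoCapture("UCF-101/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01.avi")
if not cap.isOpened():
    raise RuntimeError("OpenCV cannot open the video; video IO support is likely missing")
ret, frame = cap.read()
print("decoded first frame: {} {}".format(ret, None if frame is None else frame.shape))
cap.release()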

Note that to run training with multiple GPUs, one needs to enable MPI support of Caffe. To do this, run

MPI_PREFIX=<root path to openmpi installation> bash build_all.sh MPI_ON

Get the videos


We experimented on two mainstream action recognition datasets: UCF-101 and HMDB51. Videos can be downloaded directly from their websites. After downloading, please extract the videos from the rar archives.

  • UCF101: the UCF101 videos are archived in a single downloaded file. Please use unrar x UCF101.rar to extract the videos.
  • HMDB51: the HMDB51 video archive has two levels of packaging. The following commands illustrate how to extract the videos.
mkdir rars && mkdir videos
unrar x hmdb51-org.rar rars/
for a in $(ls rars); do unrar x "rars/${a}" videos/; done;
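
Before moving on, it can be worth sanity-checking the extraction. The expected clip counts below come from the datasets' official descriptions (13,320 for UCF101, 6,766 for HMDB51); the paths assume the layout produced by the commands above.

import glob

# Paths assume UCF101 was unrar'ed to UCF-101/ and HMDB51 to videos/.
ucf101 = glob.glob("UCF-101/*/*.avi")
hmdb51 = glob.glob("videos/*/*.avi")
print("UCF101: {} of 13320 clips".format(len(ucf101)))
print("HMDB51: {} of 6766 clips".format(len(hmdb51)))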

Get trained models


We provide the trained models in Caffe style, consisting of network specifications in Protobuf messages and the learned weights. In the codebase we provide the model specs for UCF101 and HMDB51. The model weights can be downloaded by running the script

bash scripts/get_reference_models.sh

Extract Frames and Optical Flow Images


To run the training and testing, we need to decompose the videos into frames. Also, the temporal stream networks need optical flow or warped optical flow images as input.

These can be achieved with the script scripts/extract_optical_flow.sh. The script has three arguments:

  • SRC_FOLDER points to the folder where you put the video dataset
  • OUT_FOLDER points to the root folder where the extracted frames and optical images will be put in
  • NUM_WORKER specifies the number of GPUs to use in parallel for flow extraction; it must be larger than 1

The command for running optical flow extraction is as follows

bash scripts/extract_optical_flow.sh SRC_FOLDER OUT_FOLDER NUM_WORKER

It will take from several hours to several days to extract optical flow for the whole datasets, depending on the number of GPUs.
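
Under the hood, the extraction dispatches one video per worker process and derives each worker's GPU id from its position in the pool. Below is a rough sketch of that pattern, assuming the build/extract_gpu binary built earlier; the command-line flags shown are illustrative, not the binary's exact interface.

import glob
import subprocess
from multiprocessing import Pool, current_process

NUM_WORKER = 4  # number of GPUs to use in parallel

def extract_one(video_path):
    # Pool workers are numbered from 1; map them onto GPU ids 0..NUM_WORKER-1.
    gpu_id = (current_process()._identity[0] - 1) % NUM_WORKER
    subprocess.check_call([
        "build/extract_gpu",
        "-f", video_path,   # input video (illustrative flag name)
        "-d", str(gpu_id),  # GPU device id (illustrative flag name)
    ])

if __name__ == "__main__":
    videos = glob.glob("SRC_FOLDER/*/*.avi")
    Pool(NUM_WORKER).map(extract_one, videos)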

Testing Provided Models

Get reference models


To help reproduce the results reported in the paper, we provide reference models trained by us for instant testing. Please use the following command to get the reference models.

bash scripts/get_reference_models.sh

Video-level testing


We provide a Python framework to run the testing. For the benchmark datasets, we will measure average accuracy on the testing splits. We also provide the facility to analyze a single video.

Generally, to test on the benchmark dataset, we can use the scripts eval_net.py and eval_scores.py.

For example, to test the reference RGB stream model on split 1 of UCF101 with 4 GPUs, run

python tools/eval_net.py ucf101 1 rgb FRAME_PATH \
 models/ucf101/tsn_bn_inception_rgb_deploy.prototxt models/ucf101_split_1_tsn_rgb_reference_bn_inception.caffemodel \
 --num_worker 4 --save_scores SCORE_FILE

where FRAME_PATH is the path you extracted the UCF-101 frames to and SCORE_FILE is the filename to store the extracted scores.

One can also use cached score files to evaluate the performance. To do this, issue the following command

python tools/eval_scores.py SCORE_FILE

The more important function of eval_scores.py is modality fusion. For example, once we have the scores of the RGB stream in RGB_SCORE_FILE and the flow stream in FLOW_SCORE_FILE, the fused result with weights of 1:1.5 can be obtained with

python tools/eval_scores.py RGB_SCORE_FILE FLOW_SCORE_FILE --score_weights 1 1.5

To view the full help message of these scripts, run python tools/eval_net.py -h or python tools/eval_scores.py -h.
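
Conceptually, the fusion performed by eval_scores.py is a weighted sum of each modality's per-class score vectors, followed by an argmax. The numpy sketch below illustrates the idea; it assumes scores arranged as one (num_videos, num_classes) array per modality, which is an assumption for illustration, not the actual score file format.

import numpy as np

def fuse(rgb_scores, flow_scores, weights=(1.0, 1.5)):
    # Late fusion: weighted sum of class scores, then pick the top class per video.
    fused = weights[0] * rgb_scores + weights[1] * flow_scores
    return fused.argmax(axis=1)

rgb = np.random.rand(10, 101)   # placeholder RGB-stream scores for 10 videos, 101 classes
flow = np.random.rand(10, 101)  # placeholder flow-stream scores
print(fuse(rgb, flow))          # predicted class index for each video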

Training Temporal Segment Networks


Training TSN is straightforward. We have provided the necessary model specs, solver configs, and initialization models. To achieve optimal training speed, we strongly advise you to turn on the parallel training support in the Caffe toolbox using the following build command

MPI_PREFIX=<root path to openmpi installation> bash build_all.sh MPI_ON

where <root path to openmpi installation> points to the root of the OpenMPI installation, for example /usr/local/openmpi/.

Construct file lists for training and validation


The data feeding in training relies on VideoDataLayer in Caffe. This layer uses a list file to specify its data sources. Each line of the list file contains a tuple of an extracted video frame path, the video frame count, and the video's groundtruth class. A list file looks like

video_frame_path 100 10
video_2_frame_path 150 31
...
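
For a custom dataset, a compatible list file can be generated by scanning the extracted frame folders. Below is a minimal sketch, assuming one folder of img_XXXXX.jpg frames per video (the name pattern used by the extraction step) and a user-supplied label lookup; the label function is hypothetical.

import glob
import os

def build_list(frame_root, label_of, out_path):
    # One line per video: "frame_folder num_frames groundtruth_class".
    with open(out_path, "w") as f:
        for folder in sorted(glob.glob(os.path.join(frame_root, "*"))):
            num_frames = len(glob.glob(os.path.join(folder, "img_*.jpg")))
            if num_frames > 0:
                f.write("{} {} {}\n".format(folder, num_frames, label_of(folder)))

# Hypothetical usage: look the class up from the folder name.
# build_list("FRAME_PATH", lambda p: class_index[os.path.basename(p).split("_")[1]], "data/custom_train.txt")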

To build the file lists for all 3 splits of the two benchmark datasets, we have provided a script. Just use the following commands

bash scripts/build_file_list.sh ucf101 FRAME_PATH

and

bash scripts/build_file_list.sh hmdb51 FRAME_PATH

The generated list files will be put in data/ with names like ucf101_flow_val_split_2.txt.

Get initialization models


We have built the initialization model weights for both RGB and flow input. The flow initialization model implements the cross-modality training technique in the paper. To download the model weights, run

bash scripts/get_init_models.sh

Start training


Once everything is ready, we can start training TSN. For this, use the script scripts/train_tsn.sh. For example, the following command runs training on UCF101 with RGB input

bash scripts/train_tsn.sh ucf101 rgb

The training will run with default settings on 4 GPUs. Usually, it takes around 1 hour to train the RGB model and 4 hours for the flow models on 4 GTX Titan X GPUs.

The learned model weights will be saved in models/. The aforementioned testing process can be used to evaluate them.

Config the training process


Here we provide some information on customizing the training process:

  • Change split: By default, the training is conducted on split 1 of the datasets. To change the split, one can modify corresponding model specs and solver files. For example, to train on split 2 of UCF101 with rgb input, one can modify the file models/ucf101/tsn_bn_inception_rgb_train_val.prototxt. On line 8, change
source: "data/ucf101_rgb_train_split_1.txt"`

to

source: "data/ucf101_rgb_train_split_2.txt"

On line 34, change

source: "data/ucf101_rgb_val_split_1.txt"

to

source: "data/ucf101_rgb_val_split_2.txt"

Also, in the solver file models/ucf101/tsn_bn_inception_rgb_solver.prototxt, on line 12 change

snapshot_prefix: "models/ucf101_split1_tsn_rgb_bn_inception"

to

snapshot_prefix: "models/ucf101_split2_tsn_rgb_bn_inception"

in order to distinguish the learned weights.

  • Change GPU number: in general, one can use any number of GPUs for training. To use more or fewer GPUs, change N_GPU in scripts/train_tsn.sh. Important notice: when the GPU number is changed, the effective batch size also changes. It is best to keep the effective batch size, which equals batch_size * iter_size * n_gpu, at 128. Here, batch_size is the number in the model's prototxt, for example line 9 in models/ucf101/tsn_bn_inception_rgb_train_val.prototxt. The sketch below illustrates the arithmetic.
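
The arithmetic can be checked with a toy helper (not part of the codebase):

def iter_size_for(batch_size, n_gpu, target=128):
    # Solve batch_size * iter_size * n_gpu == target for iter_size.
    assert target % (batch_size * n_gpu) == 0, "batch_size * n_gpu must divide the target"
    return target // (batch_size * n_gpu)

print(iter_size_for(32, 4))  # 1: the default setting, 32 * 1 * 4 = 128
print(iter_size_for(16, 2))  # 4: with batch_size 16 on 2 GPUs, set iter_size to 4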

Other Info


Citation

Please cite the following paper if you find this repository useful.

@inproceedings{TSN2016ECCV,
  author    = {Limin Wang and
               Yuanjun Xiong and
               Zhe Wang and
               Yu Qiao and
               Dahua Lin and
               Xiaoou Tang and
               Luc {Van Gool}},
  title     = {Temporal Segment Networks: Towards Good Practices for Deep Action Recognition},
  booktitle   = {ECCV},
  year      = {2016},
}

Related Projects

Contact

For any question, please contact

Yuanjun Xiong: [email protected]
Limin Wang: [email protected]

temporal-segment-networks's People

Contributors

fetorres, mshreve, rowhanm, surajkothawade, vra, yjxiong


temporal-segment-networks's Issues

question about feature dimension from 'eval_net.py' script

I have seen you said you used 25 frames per video to calculate the test accuracy. However, after I made a small change to the eval_net.py script and extracted global_pool features as you suggested, it generates 10*1024-dim features for each video, which I thought should be 25*1024.

RGB difference

Hello, I read your paper. How is the stacked RGB difference input obtained? Could you explain it in detail? Many thanks!

train a TSN model with the ResNet50

Thanks for sharing your work!
I want to train a TSN model with ResNet50, but the log shows:

I0922 21:43:26.134378 35831 solver.cpp:75] Creating training net from net file: home/lk/ResNet/resnet_50/Train_val_resnet_50_ucf101.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 68:26: Message type "caffe.LayerParameter" has no field named "batch_norm_param".

Then I changed "batch_norm_param" into "BN", and the log shows:

I0922 22:27:14.272898 11935 caffe.cpp:86] Finetuning from /home/lk/ResNet/resnet_50/ResNet-50-model.caffemodel
F0922 22:27:16.068253 11935 net.cpp:834] Check failed: target_blobs.size() == source_layer.blobs_size() (4 vs. 3) Incompatible number of blobs for layer bn_conv1

It seems the official Caffe parameter "batch_norm_param" is different from your own parameter "BN". Would you like to tell me how to fix it?

Visualizing a .caffemodel

Hello, how can I visualize an initial .caffemodel, for example bn_inception_flow_init.caffemodel?

Help with Optical Flow

Hi,

I have looked at the optical flow extraction in the dense_flow code and I see that the TVL1 algorithm is used.
But Farneback performs much better than TVL1. Did you happen to run any tests comparing the accuracy in the two cases?

Thanks

batch_size

You say that the effective batch size equals batch_size*iter_size*n_gpu and should be 128. Your parameters are 32 (batch_size) * 1 (iter_size) * 4 (GPU); I have to set it to 16 (batch_size) * 4 (iter_size) * 2 (GPU) to make the batch size 128 due to my poor server. What about the other parameters, such as stepvalue and max_iter: keep them as they are, or multiply them by 4 because iter_size becomes 4 rather than 1? Do I need to change any other parameter? I am working on UCF101 split 1.

I also have another question: you set new_length to 5 in tsn_bn_inception_flow_train_val.prototxt,
but the better value should be 10 according to the two-stream paper (NIPS 2014). Is it a bug, or is 5 better for TSN?
Thanks!

Is it possible to run multiple eval_net.py process with multiple workers?

Hi guys.

When running eval_net.py on UCF-101 split_1 with 4 workers, I noticed that all my GPUs are running, but the GPU usage doesn't reach the maximum. So I tried to run another eval_net.py process to evaluate UCF-101 split_2, but I encountered lots of error messages like the following:

Setting device 1517
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0908 02:12:50.717952  3673 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0)  invalid device ordinal
*** Check failure stack trace: ***
Setting device 1518
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0908 02:12:51.023628  3675 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0)  invalid device ordinal
*** Check failure stack trace: ***

I am not sure whether these errors are caused by my machine (maybe my command was wrong),
or whether the design of TSN doesn't allow doing this?

Testing Single Video

Hi,

In the testing description for Temporal Segment Networks it is mentioned that there is a facility to analyze single video.

"For the benchmark datasets, we will measure average accuracy on the testing splits. We also provide the facility to analyze a single video."

Could you please tell me how I should go about analyzing a single video? Can I use the same eval_net.py script for this purpose?

Thanks!

Release Plan

To improve the ease of use for the community, let's plan how we are going to organize the released codebase.

  • Preliminary Tools
    • Frame/optical flow extraction: already released in our dense_flow. Need to add instructions
    • Caffe: released with full usage guides
    • Scripts for building initialization models for the temporal stream
  • Training TSN
    • TSN training model spec in Caffe style based on BN-Inception
      • segment number: 3
      • aggregation func: avg
      • RGB
      • Flow
      • Warped flow
    • Initialization model weights based on BN-Inception
      • RGB
      • Flow
      • Warped flow
  • Testing TSN
    • standard deploy model spec in Caffe style based on BN-Inception
    • Trained model weights to reproduce reported accuracies
    • Evaluation framework:
      • Python: adapted from the anet repo.
      • Matlab: partially completed, will add whole dataset evaluation soon.
  • Guidance
    • Data preparation
    • Run evaluation with trained models
    • Train models

No space left on device while compiling OpenCV using the build script

[ 52%] Building CXX object apps/haartraining/CMakeFiles/opencv_haartraining_engine.dir/cvcommon.cpp.o
/usr/local/cuda/include/math_functions.h(303) (col. 90): catastrophic error: error while writing generated C file: No space left on device

I verified that I have enough space for my /tmp folder (several gigabytes). Any thoughts?

what is the best usage strategy with openmpi

Sorry to say that I have almost zero knowledge about OpenMPI.
I just know it is used for parallelization and acceleration, and that --num_worker is some kind of parameter for OpenMPI.

For example, I have 4 GPUs with 12G each.
What is the best usage strategy for this? Or are there any tricks for finding the best usage strategy?
The bigger, the better?

:-) I just don't want to waste any machine time, or my own.

ImportError: No module named _caffe

Hello,

I am trying to run the eval_net.py example in the README description, but I get
"ImportError: No module named _caffe". I am running in an Anaconda environment. Everything seemed to install OK with no errors.

I run the command
python tools/eval_net.py ucf101 1 rgb UCF-101_frames/ models/ucf101/tsn_bn_inception_rgb_deploy.prototxt models/ucf101_split_1_tsn_rgb_reference_bn_inception.caffemodel --num_worker 4 --save_scores scoresOutput.txt

I get:
Traceback (most recent call last):
File "tools/eval_net.py", line 38, in
from pyActionRecog.action_caffe import CaffeNet
File "/data/torres/temporal-segment-networks/pyActionRecog/action_caffe.py", line 4, in
import caffe
File "./lib/caffe-action/python/caffe/init.py", line 1, in
from .pycaffe import Net, SGDSolver
File "./lib/caffe-action/python/caffe/pycaffe.py", line 13, in
from ._caffe import Net, SGDSolver
ImportError: No module named _caffe

I have tried to read through Google posts about this error, but they all refer to the standard Caffe installation, including the "make pycaffe" part of it. Your installation is customized and therefore different :)

Do you have any suggestions for me?

Thank you,
Frank Torres
Palo Alto Research Center

mpirun training can't return

Hi, @yjxiong

When I use mpirun -np 2 to run the training with 2 GPUs, the program can't return. It stops at Optimization Done like this (I set the number of iterations to 20 for convenience):

I1221 22:17:14.922746 24063 solver.cpp:631] Iteration 0, lr = 0.001
I1221 22:17:56.933382 24063 solver.cpp:502] Snapshotting to models/hmdb51_split_1_tsn_rgb_bn_inception_iter_20.caffemodel
I1221 22:17:57.243152 24063 solver.cpp:510] Snapshotting solver state to models/hmdb51_split_1_tsn_rgb_bn_inception_iter_20.solverstate
I1221 22:17:57.486786 24063 solver.cpp:406] Iteration 20, loss = 3.53084
I1221 22:17:57.486837 24063 solver.cpp:411] Optimization Done.
I1221 22:17:57.486843 24063 caffe.cpp:187] Optimization Done.

and never returns to the system. The training had finished, so I suspect it is mpirun that couldn't return. Then I ran with --mca mpi_common_cuda_verbose 100 for additional information, and I get this at the end:

I1221 22:47:31.502188 27430 solver.cpp:406] Iteration 20, loss = 3.50854
I1221 22:47:31.502244 27430 solver.cpp:411] Optimization Done.
I1221 22:47:31.502249 27430 caffe.cpp:187] Optimization Done.
[UBUNTUB:27430] CUDA: mca_common_cuda_fini, never completed initialization so skipping fini, ref_count is now 2
[UBUNTUB:27431] CUDA: mca_common_cuda_fini, never completed initialization so skipping fini, ref_count is now 2
[UBUNTUB:27430] CUDA: mca_common_cuda_fini, never completed initialization so skipping fini, ref_count is now 1
[UBUNTUB:27431] CUDA: mca_common_cuda_fini, never completed initialization so skipping fini, ref_count is now 1
[UBUNTUB:27430] CUDA: mca_common_cuda_fini, never completed initialization so skipping fini, ref_count is now 0
[UBUNTUB:27431] CUDA: mca_common_cuda_fini, never completed initialization so skipping fini, ref_count is now 0

which I don't understand. I have also tried launching the training without mpirun (i.e., running caffe train --solver=... directly with one GPU), and the program returns normally:

I1221 22:55:10.266736 28827 solver.cpp:510] Snapshotting solver state to models/hmdb51_split_1_tsn_rgb_bn_inception_iter_20.solverstate
I1221 22:55:10.510454 28827 solver.cpp:406] Iteration 20, loss = 3.61723
I1221 22:55:10.510510 28827 solver.cpp:411] Optimization Done.
I1221 22:55:10.510516 28827 caffe.cpp:187] Optimization Done.
Setting device 0
cenjiepeng@UBUNTUB:~/temporal-segment-networks$

I run it on Ubuntu 16.04 + CUDA 8.0 + OpenMPI 1.8.8, and I have also tried OpenMPI 1.8.7 and 1.10.2, but it didn't work. Have you ever encountered this problem?

train rgb modality fine, but flow modality not work

Hello, thanks for your work! I followed the guide in the README and trained the RGB modality in 2 hours, obtaining 83%; it works. But when I train the flow modality, the loss looks like this:

I0901 00:04:04.431964 13999 solver.cpp:481] Test net output #0: accuracy = 0.00736842
I0901 00:04:04.432224 13999 solver.cpp:481] Test net output #1: loss = 86.6926 (* 1 = 86.6926 loss)
I0901 00:04:05.819684 13999 solver.cpp:240] Iteration 0, loss = 4.61512
I0901 00:04:05.819737 13999 solver.cpp:255] Train net output #0: loss = 4.61512 (* 1 = 4.61512 loss)
I0901 00:04:05.819764 13999 solver.cpp:631] Iteration 0, lr = 0.005
I0901 00:04:45.236163 13999 solver.cpp:240] Iteration 20, loss = 4.615
I0901 00:04:45.236323 13999 solver.cpp:255] Train net output #0: loss = 4.61502 (* 1 = 4.61502 loss)
I0901 00:04:45.236335 13999 solver.cpp:631] Iteration 20, lr = 0.005

I0901 00:05:17.419486 13999 solver.cpp:240] Iteration 40, loss = 4.61489
I0901 00:05:17.419651 13999 solver.cpp:255] Train net output #0: loss = 4.61508 (* 1 = 4.61508 loss)
I0901 00:05:17.419662 13999 solver.cpp:631] Iteration 40, lr = 0.005
I0901 00:05:49.311221 13999 solver.cpp:240] Iteration 60, loss = 4.61442
I0901 00:05:49.311381 13999 solver.cpp:255] Train net output #0: loss = 4.61358 (* 1 = 4.61358 loss)
I0901 00:05:49.311401 13999 solver.cpp:631] Iteration 60, lr = 0.005
I0901 00:06:21.456552 13999 solver.cpp:240] Iteration 80, loss = 4.61413
I0901 00:06:21.456756 13999 solver.cpp:255] Train net output #0: loss = 4.61389 (* 1 = 4.61389 loss)
I0901 00:06:21.456779 13999 solver.cpp:631] Iteration 80, lr = 0.005
I0901 00:06:53.565157 13999 solver.cpp:240] Iteration 100, loss = 4.61352
I0901 00:06:53.565326 13999 solver.cpp:255] Train net output #0: loss = 4.61341 (* 1 = 4.61341 loss)
I0901 00:06:53.565347 13999 solver.cpp:631] Iteration 100, lr = 0.005
I0901 00:07:25.516310 13999 solver.cpp:240] Iteration 120, loss = 4.61317
I0901 00:07:25.516471 13999 solver.cpp:255] Train net output #0: loss = 4.6121 (* 1 = 4.6121 loss)
I0901 00:07:25.516494 13999 solver.cpp:631] Iteration 120, lr = 0.005
I0901 00:07:57.866698 13999 solver.cpp:240] Iteration 140, loss = 4.61288
I0901 00:07:57.866843 13999 solver.cpp:255] Train net output #0: loss = 4.61269 (* 1 = 4.61269 loss)
I0901 00:07:57.866863 13999 solver.cpp:631] Iteration 140, lr = 0.005
I0901 00:08:31.625901 13999 solver.cpp:240] Iteration 160, loss = 4.61296
I0901 00:08:31.626073 13999 solver.cpp:255] Train net output #0: loss = 4.6131 (* 1 = 4.6131 loss)
I0901 00:08:31.626091 13999 solver.cpp:631] Iteration 160, lr = 0.005
I0901 00:09:06.457424 13999 solver.cpp:240] Iteration 180, loss = 4.61233
I0901 00:09:06.457566 13999 solver.cpp:255] Train net output #0: loss = 4.61277 (* 1 = 4.61277 loss)
I0901 00:09:06.457582 13999 solver.cpp:631] Iteration 180, lr = 0.005
I0901 00:09:38.840293 13999 solver.cpp:240] Iteration 200, loss = 4.61258
I0901 00:09:38.840457 13999 solver.cpp:255] Train net output #0: loss = 4.6116 (* 1 = 4.6116 loss)
I0901 00:09:38.840473 13999 solver.cpp:631] Iteration 200, lr = 0.005
I0901 00:10:13.054281 13999 solver.cpp:240] Iteration 220, loss = 4.61247
I0901 00:10:13.054402 13999 solver.cpp:255] Train net output #0: loss = 4.61382 (* 1 = 4.61382 loss)
I0901 00:10:13.054419 13999 solver.cpp:631] Iteration 220, lr = 0.005

I don't know the reason. By the way, can you share your NVIDIA driver version (mine is 352.39)? I have met this "loss does not decrease" problem many times. Can you share your basic CUDA and NVIDIA configuration, or any other pointers?

Question about the accuracy reported

Hi,

In the paper I do not see results with varying number of segments. Could you please share the results with varying segment numbers if you have any?

Thanks,
Praneetha

DETAILS about using scripts/extract_optical_flow.sh

Thanks for your excellent work and code!
As above, when I use dense_flow to extract images and optical flow, this FATAL error occurs:

2016-08-28 21:16:28,702 FATAL [default] Check failed: [video_stream.isOpened()] Cannot open video stream "true" for optical flow extraction.
2016-08-28 21:16:28,702 WARN [default] Aborting application. Reason: Fatal log at [/home/wanghao/my_project/temporal-segment-networks/lib/dense_flow/src/dense_flow_gpu.cpp:17]

My Python version is 2.7.6. I think the reason may be the pool.map function; can you share some thoughts about it?

Thanks a lot!

RGB difference

Hello, @yjxiong @wanglimin

I noticed that you have experimented with adding stacked RGB difference as an input modality.

Can you tell me how to stack the RGB differences?
For new_length, modality, and scale_ratios, what should I set?

Best regards

FATAL occured while opening videos in scripts/extract_optical_flow.sh

Hi guys :)

I encountered a similar problem to #6; the following is the error message.

2016-09-07 02:44:23,873 FATAL [default] Check failed: [video_stream.isOpened()] Cannot open video stream "/root/data/UCF-101/SalsaSpin/v_SalsaSpin_g22_c04.avi" for optical flow extraction.
2016-09-07 02:44:23,873 WARN [default] Aborting application. Reason: Fatal log at [/root/temporal-segment-networks/lib/dense_flow/src/dense_flow_gpu.cpp:15]

All the videos prompted the same error message.

The video path is correct, but it just can't open the video.
The OpenCV came from the TSN installation package, so it should be okay.
Any suggestion to solve this problem?

Why does my downloaded reference flow model not produce the right result?

Hi:
When I test the reference model downloaded from the website, the flow model produces very low accuracy.
Here is my config:

args.dataset='ucf101'
args.split=1
args.modality='flow'
args.frame_path='XXXXXXX/datas/UCF101/img_output'
args.net_proto='models/ucf101/tsn_bn_inception_flow_deploy.prototxt'
args.net_weights='models/ucf101_split_1_tsn_flow_reference_bn_inception.caffemodel'
args.save_scores='flow_score2'

It produced a final accuracy of 13.278916% on UCF101 split 1,
while the RGB-only model produced a final accuracy of 86.037706%.
Is there any mistake in my config? Or has anybody else gotten the same accuracy as mine?
Seeking help~~

Did you evaluate the performance of ResNet?

I notice that you compare the performance of different CNN architectures, among which BN-Inception achieves the best accuracy. Did you try residual networks? Can they bring better performance with the other settings kept consistent?
Thank you in advance.

Is --num_worker the number of GPUs?

python tools/eval_net.py ucf101 1 rgb /home/xs/deep-learning/dataset/UCF101_FRAMES/ models/ucf101/tsn_bn_inception_rgb_deploy.prototxt models/ucf101_split_1_tsn_rgb_reference_bn_inception.caffemodel --num_worker 2 --save_scores ucf101score

I have two K80 GPUs. When I run eval_net.py, it gives warnings like this:

I1221 23:06:41.762329 4542 net.cpp:294] Network initialization done.
I1221 23:06:41.762334 4542 net.cpp:295] Memory required for data: 74005652
Setting device 2
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:45.627977 4567 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 3
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:45.829156 4572 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 4
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:46.244467 4577 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 5
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:46.447710 4580 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 6
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:46.553652 4583 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 7
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:46.754379 4588 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 8
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:46.964293 4593 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 9
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:47.169553 4596 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 10
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:47.274942 4599 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 11
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:47.480448 4604 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 12
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:47.685133 4609 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 13
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:47.993522 4612 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 14
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:48.096205 4615 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 15
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1221 23:06:48.302572 4620 common.cpp:187] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
Setting device 16
Setting device 17

Can you help me? Thank you very much.

about multiple scale input

From your TSN paper, you said that you used scale jittering for the inputs.
However, from your code in ./pyActionRecog/action_caffe.py, I think you only apply multi-scale inputs to RGB images, in the function predict_single_frame; for the optical flow input, you did not add multi-scale inputs in the function predict_single_flow_stack.

Here comes another question. For the multi-scale crops, you just use {256, 224, 192, 168} as you said in the paper, right? What is the performance improvement from the scale jittering?

And besides, I have tried Limin's TDD multi-scale input setting: while extracting CNN features for each video, I use multi-scale inputs and correspondingly multi-scale input blob sizes. I copied Limin's setting, namely (480,640), (340,454), (240,320), (170,227), (120,160); however, this setting runs into problems in the inception layers, because the input blob sizes for the inception modules are mismatched. Do you have any suggestions for this kind of multi-scale input size setting?

the image extraction part does not get the images

Problem: the image extraction step does not produce the images.

The dense_flow library is built correctly.
After running bash scripts/extract_optical_flow.sh SRC_FOLDER OUT_FOLDER NUM_WORKER, I got the correct image paths but did not get the correct RGB and optical flow images.

I checked the code, and I guess it is tools/build_of.py that is in charge of the image extraction, and build/extract_gpu that must do the extraction work. The running shell outputs the content of print '{} {} done'.format(vid_id, vid_name).

Because build/extract_gpu is a binary, I do not know how to check the details.

Low training accuracy and Errors when test videos

Hi, Mr. Xiong!

  1. Now I can train TSN on the dataset hmdb51_split_1_rgb with the default parameters using bash scripts/train_tsn.sh ucf101 rgb. After 2500 iterations, the train net loss = 0.17426, the test accuracy = 0.530679, and the test loss = 2.30292.

Which parameters may affect this test accuracy, i.e., how could I get a similar accuracy to yours?

  2. I met some problems when testing the TSN network. Following the README, I ran
    python tools/eval_net.py hmdb51 1 rgb /raid/Public/zhoup_DS/HMDB51/TSN_hmdb51_frames/ models/hmdb51/tsn_bn_inception_rgb_deploy.prototxt models/hmdb51_split_1_tsn_rgb_reference_bn_inception.caffemodel --num_worker 3 --save_scores ZhouExample/hmdb51_split1_test.txt

However, it goes wrong with the following output:

....
6400 videos parsed
6600 videos parsed
frame folder analysis done
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 19:12: Message type "caffe.LayerParameter" has no field named "bn_param".
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1216 09:33:18.682155 27370 upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: models/hmdb51/tsn_bn_inception_rgb_deploy.prototxt
*** Check failure stack trace: ***
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 19:12: Message type "caffe.LayerParameter" has no field named "bn_param".
......
*** Check failure stack trace: ***
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1216 09:35:24.345011 27464 common.cpp:169] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
....

When I check the CUDA linkage using TSN_master/lib/caffe-action/build/install/bin$ ldd caffe | grep cuda, it shows:

libcudart.so.7.5 => /usr/local/cuda/lib64/libcudart.so.7.5 (0x00007f387598e000)
libcurand.so.7.5 => /usr/local/cuda/lib64/libcurand.so.7.5 (0x00007f3872125000)
libcublas.so.7.5 => /usr/local/cuda/lib64/libcublas.so.7.5 (0x00007f3870846000)
libcudnn.so.5 => /usr/local/cuda/lib64/libcudnn.so.5 (0x00007f386ccfb000)

Have you met these errors? Could you give me some advice about them?

Thank you, and looking forward to your reply. Best wishes!

Different number of segments

Hi,

I'm trying to change the number of segments, but changing the numbers in the data layer and the loss layer does not help. Could you please provide information regarding this?

Thanks,

Issue with HMDB dataset training

Hi

I was trying to train the temporal segment networks with the HMDB dataset and I see the following error. I used the extraction script mentioned in the repo to get the HMDB frames. Could you please take a look at this error?

I0914 00:56:45.781002 23644 net.cpp:732] [Update] Layer fc-action, param 1 data: 0; diff: 0.000178426
I0914 00:56:45.985848 23644 solver.cpp:616] Gradient clipping: scaling down gradients (L2 norm 43.1335 > 20) by scale factor 0.463677
F0914 00:56:46.026162 25274 data_transformer.cpp:314] Check failed: width <= datum_width (224 vs. 176)
*** Check failure stack trace: ***

Thanks!

extract_warp_gpu

I can't find extract_warp_gpu in the dense_flow that you provided.

question_of_TSN_Accuracy_On_UCF101_split1

Hi, thanks for your sharing!
I get an accuracy of 87.36% with the TSN architecture, your model, your deploy file, and your Caffe code,
but your paper says the accuracy is 87.9%. I set num_sample to 25 and did not use the multi-scale option in the predict_single_flow_stack function in eval_net.py (https://github.com/yjxiong/temporal-segment-networks/blob/master/tools/eval_net.py).
BTW, the server is equipped with a K80, CUDA 7.5, and cuDNN 4.0, and I downsample the videos by taking 1 frame per 15 frames. In eval_net.py line 121, I use sliding_window_aggregation_func rather than default_aggregation_func; I think the function of sliding_window_aggregation_func is to make use of TSN, right?
Could you please tell me the reason? Thank you very much!

I am strongly interested in TSN!

I am a newcomer to deep action recognition and have no experience in the field. I want to know: does installing Caffe require OpenCV or not, and which version?

The frozen parameter in BN layer

In [https://github.com/yjxiong/temporal-segment-networks/blob/master/models/ucf101/tsn_bn_inception_rgb_train_val.prototxt]

Why is the first frozen false [https://github.com/yjxiong/temporal-segment-networks/blob/master/models/ucf101/tsn_bn_inception_rgb_train_val.prototxt#L59],

while the following frozen parameters are all true?

What's the purpose of this parameter?

Thanks.

Does the multi-GPU Caffe require all GPUs to be of the same type?

Thank you for your excellent work, but I have a question about your multi-GPU Caffe:
I have two GPUs, a Quadro K4200 and a GTX 1080. Does your multi-GPU Caffe compile and work fine on one computer with different types of GPU?
Thank you in advance!

test script error

I just followed the build instructions as stated in the README. However, I encountered the errors below. It is weird, because I have checked the caffe.proto file in the lib folder, and it has the field named "bn_param" for LayerParameter.

By the way, the only difference between your instructions and mine is that I did not use Anaconda for Python.

[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 19:12: Message type "caffe.LayerParameter" has no field named "bn_param".
WARNING: Logging before InitGoogleLogging() is written to STDERR
F1101 09:15:29.878906 18368 upgrade_proto.cpp:928] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: models/ucf101/tsn_bn_inception_rgb_deploy.prototxt
*** Check failure stack trace: ***

Epoch size

Hi,

I'm fine-tuning TSN for action recognition on another dataset (only RGB data). I'd like to know how to estimate the number of iterations in an epoch.

By reading the paper and checking the train_val file, I'm guessing that at least N*batchsize images are fed into the network every iteration (where N is num_segment). If I am right, an epoch would be V/batchsize iterations, where V is the total number of videos.

However, the param scale_ratios and the amount of memory the GPU uses make me guess that more than N*batchsize images are fed into the network every iteration. Can you help me with this one?

Thanks
Fuanka

cudnn_version_disable_cudnn_effect

Hi, I have two versions of CUDA on my server: CUDA 7.0 and CUDA 7.5.
When configured with CUDA 7.5, cmake shows that the OpenCV static library was compiled with CUDA 7.0 support and asks me to use the same version or rebuild OpenCV with CUDA 7.5.
I do not have the rights to rebuild OpenCV, so I have to use CUDA 7.0. With it, cmake is OK,
but make produces errors: lib/caffe-action/include/caffe/util/cudnn.hpp:94:3: error: too many arguments to function ‘cudnnStatus_t cudnnSetFilter4dDescriptor(cudnnFilterDescriptor_t, cudnnDataType_t, int, int, int, int)’.
It looks like a cuDNN version issue; could you please give me a solution?
For now, I have disabled cuDNN and built the code successfully with CUDA 7.0.
However, I know disabling cuDNN will slow the training. Will it affect the accuracy?
I also have a question: does the optical flow extraction step obtain every optical flow frame, or does it downsample, and if so, how?

How to compute the value 'test_iter'

Hi Mr. Xiong and Mr. Wang! @yjxiong @wanglimin
In models/hmdb51/tsn_bn_inception_rgb_solver.prototxt, how do you get the value of test_iter: 383?
In models/hmdb51/tsn_bn_inception_rgb_train_val.prototxt, we can find the TEST-phase parameters:
layer {
  name: "data"
  type: "VideoData"
  top: "data"
  top: "label"
  video_data_param {
    source: "data/hmdb51_rgb_val_split_1.txt"
    batch_size: 1
    new_length: 1
    num_segments: 3
    modality: RGB
    name_pattern: "img_%05d.jpg"
  }
  transform_param {
    crop_size: 224
    mirror: false
    mean_value: [104, 117, 123, 104, 117, 123, 104, 117, 123]
  }
  include: { phase: TEST }
}

In the generated file data/hmdb51_rgb_val_split_1.txt, the total number of test videos is 1530.
The value of test_iter really confuses me.
When I fine-tune TSN, it does not work and reports the following errors:
F0224 14:23:00.519636 15726 reshape_layer.cpp:79] Check failed: 0 == bottom[0]->count() % explicit_count (0 vs. 192) bottom count (192) must be divisible by the product of the specified dimensions (303)

*** Check failure stack trace: ***
@ 0x7f256daeddaa (unknown)
@ 0x7f256daedce4 (unknown)
@ 0x7f256daed6e6 (unknown)
@ 0x7f256daf0687 (unknown)
@ 0x7f256dedea0f caffe::ReshapeLayer<>::Reshape()
@ 0x7f256df6b360 caffe::Net<>::Init()
@ 0x7f256df6e0b3 caffe::Net<>::Net()
@ 0x7f256df80a45 caffe::Solver<>::InitTrainNet()
@ 0x7f256df80fa4 caffe::Solver<>::Init()
@ 0x7f256df813d5 caffe::Solver<>::Solver()
@ 0x41cbe6 caffe::SGDSolver<>::SGDSolver()
@ 0x41cd88 caffe::GetSolver<>()
@ 0x413128 train()
@ 0x410fdd main
@ 0x7f256c5a7f45 (unknown)
@ 0x4113de (unknown)
@ (nil) (unknown)
Setting device 0

Is that because of a wrong value of test_iter?

Thanks for your attention!!!

FAQ page

I plan to have a FAQ page listing all the questions I have been repeatedly asked during the past year about our preliminary tech report.

Question about the parameters

Hi @yjxiong @wanglimin,
Congratulations on your excellent work.
Did you extract only 1 RGB frame and 5 stacked optical flow frames per snippet? I notice that the new_length of RGB is 1 and the new_length of optical flow is 5. Moreover, why did you choose the test size 950 instead of the whole testing set? Can your Caffe framework run with multiple GPUs of different types (e.g., one TITAN X and one GTX 1080)? Thanks.

Best,
-L

The loss does not decrease during training a flow model

Hi,
I tried to train an optical flow model with the following command:

bash scripts/train_tsn.sh ucf101 flow

But the loss does not decrease while training TSN. Any suggestions? By the way, the RGB model is OK!

I0905 10:36:53.966650 24339 solver.cpp:240] Iteration 20, loss = 4.61511
I0905 10:36:53.966797 24339 solver.cpp:255]     Train net output #0: loss = 4.615 (* 1 = 4.615 loss)
I0905 10:36:53.966809 24339 solver.cpp:631] Iteration 20, lr = 0.005
I0905 10:37:39.902165 24339 solver.cpp:240] Iteration 40, loss = 4.61494
I0905 10:37:39.902279 24339 solver.cpp:255]     Train net output #0: loss = 4.61514 (* 1 = 4.61514 loss)
I0905 10:37:39.902290 24339 solver.cpp:631] Iteration 40, lr = 0.005
I0905 10:38:25.729789 24339 solver.cpp:240] Iteration 60, loss = 4.61493
I0905 10:38:25.729914 24339 solver.cpp:255]     Train net output #0: loss = 4.6159 (* 1 = 4.6159 loss)
I0905 10:38:25.729926 24339 solver.cpp:631] Iteration 60, lr = 0.005
I0905 22:13:58.890894 24339 solver.cpp:240] Iteration 19820, loss = 4.61208
I0905 22:13:58.891036 24339 solver.cpp:255]     Train net output #0: loss = 4.62788 (* 1 = 4.62788 loss)
I0905 22:13:58.891048 24339 solver.cpp:631] Iteration 19820, lr = 5e-05
I0905 22:14:39.989903 24339 solver.cpp:240] Iteration 19840, loss = 4.60292
I0905 22:14:39.990012 24339 solver.cpp:255]     Train net output #0: loss = 4.65203 (* 1 = 4.65203 loss)
I0905 22:14:39.990025 24339 solver.cpp:631] Iteration 19840, lr = 5e-05
I0905 22:15:21.071648 24339 solver.cpp:240] Iteration 19860, loss = 4.59892
I0905 22:15:21.071779 24339 solver.cpp:255]     Train net output #0: loss = 4.63011 (* 1 = 4.63011 loss)
I0905 22:15:21.071791 24339 solver.cpp:631] Iteration 19860, lr = 5e-05

Should I modify the solver file (tsn_bn_inception_flow_solver.prototxt)? Change the learning rate?
Thanks!

build_all.sh error: /usr/bin/ld: cannot find -lopencv_dep_cudart. Please help!

When I run bash build_all.sh, I get this error message:

[ 71%] Linking CXX shared library libpydenseflow.so
/usr/bin/ld: cannot find -lopencv_dep_cudart
collect2: error: ld returned 1 exit status
make[2]: *** [extract_gpu] Error 1

This happens during the "Building Dense Flow" step. I read build_all.sh and deleted the code after the "build opencv" part, and OpenCV then builds OK:

[ 99%] Built target opencv_test_contrib
[100%] Built target opencv_python
OpenCV 2.4.12 built

But when I add the "Building Dense Flow" part back, or run the Caffe build independently, I get the same error. This is what happens when I skip the "Building Dense Flow" part and do the Caffe build:

-- Configuring done
-- Generating done
-- Build files have been written to: /home/xuliang/temporal-segment-networks/lib/caffe-action/build
[ 1%] Built target proto
Scanning dependencies of target caffe
[ 1%] Linking CXX shared library ../../lib/libcaffe.so
/usr/bin/ld: cannot find -lopencv_dep_cudart
collect2: error: ld returned 1 exit status
make[2]: *** [lib/libcaffe.so] Error 1
make[1]: *** [src/caffe/CMakeFiles/caffe.dir/all] Error 2
make: *** [all] Error 2
Caffe Built

My machine runs CentOS, and I know the apt-get commands in build_all.sh don't work there, so I installed the dependencies with yum. I also tried OpenCV 2.4.9 and 2.4.13; the errors are the same. Could you please help me? Thank you!
The problem seems to be with OpenCV, because of the -lopencv_dep_cudart error, but I don't know how to fix it.

Requirements to "Extract Frames and Optical Flow Images"

I am trying to follow the instructions under "Extract Frames and Optical Flow Images", but get empty results.

For example, if in the root dir I have UCF-101/{results of unrar} and a directory "UCF_flow_frames", I run:
bash scripts/extract_optical_flow.sh UCF-101/ UCF_flow_frames/ 2

The result is that UCF_flow_frames is filled with an empty folder for each video from the UCF-101 dataset.

If it is relevant: I ran it without the model files.

Hyperparameters for experiments

The number of iterations and the decay schedule in the solvers are different from the papers. Should I expect the ones in the solvers to reproduce the best performance?

Also, do you train the HMDB flow model initialized from the UCF101 model, or just use the provided init model?

Thanks!

flow modality


OK! I asked you before about the issue that I can't get the correct result, and I have re-downloaded bn_inception_flow_init.caffemodel with the command get_init_models.sh.
I checked its properties; it was modified on August 30th.
Did I download the correct one?
@yjxiong Very much looking forward to your answer.

Specify which GPUs to be used in training

Hi guys:

I have 4 GPUs with 12 GB of memory each. When I run the training command with the multi-GPU setting,
an out-of-memory error is encountered: Check failed: error == cudaSuccess (2 vs. 0) out of memory

It is caused by my first GPU: most of its memory is occupied by another process.
I tried CUDA_VISIBLE_DEVICES=1,2,3 to avoid accessing the first GPU, but it doesn't work. The training process always tries to access my first GPU and causes the error.

The training configuration in the README.md only provides a way to specify how many GPUs to use, but not which ones.


Environment:

Ubuntu 14.04 in nvidia-docker
cuDNN V5
4 K80 GPUs

Any suggestion? :)
