
deepmot's Introduction

CVPR2020: How To Train Your Deep Multi-Object Tracker

License: LGPL v3

News: We release the code for training and testing DeepMOT-Tracktor and the code for training DHN. Please visit: https://gitlab.inria.fr/robotlearn/deepmot

How To Train Your Deep Multi-Object Tracker
Yihong Xu, Aljosa Osep, Yutong Ban, Radu Horaud, Laura Leal-Taixé, Xavier Alameda-Pineda
[Paper]

Bibtex

If you find this code useful, please star the project and consider citing:

@inproceedings{xu2020train,
  title={How To Train Your Deep Multi-Object Tracker},
  author={Xu, Yihong and Osep, Aljosa and Ban, Yutong and Horaud, Radu and Leal-Taix{\'e}, Laura and Alameda-Pineda, Xavier},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6787--6796},
  year={2020}
}

Environment setup

This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1 and CUDA 9.2, on GTX 1080 Ti, Titan X, and RTX Titan GPUs.

Warning: results may differ slightly depending on your PyTorch and CUDA versions.
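If you want to sanity-check the versions in your own environment (assuming PyTorch is already installed), a quick one-liner:

python -c "import torch; print(torch.__version__, torch.version.cuda)"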

  • Clone the repository
git clone git@gitlab.inria.fr:yixu/deepmot.git && cd deepmot

Option 1:

  • Follow the installation instructions in Tracktor.

Option 2 (recommended):

We provide a Singularity image (similar to Docker) with all packages pre-installed for training and testing:

singularity shell --nv --bind yourLocalPath:yourPathInsideImage tracker.sif

--bind: links a path inside the image to a local path, so that data on your local machine is visible inside the Singularity image;
--nv: uses the local NVIDIA driver.
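For example, assuming your MOT data lives under /home/user/data on the host (a placeholder path), the following command makes it visible as /data inside the image:

singularity shell --nv --bind /home/user/data:/data tracker.sif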

Testing

  • Setup your environment

  • Go to the test_tracktor folder

  • Download the MOT data. The datasets can be downloaded here: MOT17Det, MOT16Labels, MOT16-det-dpm-raw and MOT17Labels.

  • Unzip all the data by executing:

    unzip -d MOT17Det MOT17Det.zip
    unzip -d MOT16Labels MOT16Labels.zip
    unzip -d 2DMOT2015 2DMOT2015.zip
    unzip -d MOT16-det-dpm-raw MOT16-det-dpm-raw.zip
    unzip -d MOT17Labels MOT17Labels.zip
    
  • Set data_pth in test_tracktor/experiments/cfgs/tracktor_pub_reid.yaml and test_tracktor/experiments/cfgs/tracktor_private.yaml to your data path (see the example configuration at the end of this section).

  • Download the pretrained models. All pretrained models can be downloaded here:
    deepMOT-Tracktor.pth (Google Drive) or
    deepMOT-Tracktor.pth (Tencent Cloud)

  • Set the parameter obj_detect_weights in test_tracktor/experiments/cfgs/tracktor_pub_reid.yaml and test_tracktor/experiments/cfgs/tracktor_private.yaml to the model path.

  • Set the dataset name in the test_tracktor/experiments/cfgs/tracktor_pub_reid.yaml and test_tracktor/experiments/cfgs/tracktor_private.yaml:
    For MOT17 (by default):

dataset: mot17_train_17

For MOT16 (the images are the same as MOT17):

dataset: mot17_all_DPM_RAW16
  • Run the tracking code:

python test_tracktor/experiments/scripts/tst_tracktor_pub_reid.py (public detections) or
python test_tracktor/experiments/scripts/tst_tracktor_private.py (private detections)

The results are saved under test_tracktor/output/log/ by default; you can change this by modifying output_dir in the test_tracktor/experiments/cfgs/tracktor_pub_reid.yaml and test_tracktor/experiments/cfgs/tracktor_private.yaml.

  • Visualization:
    You can set write_images: True in the test_tracktor/experiments/cfgs/tracktor_pub_reid.yaml and test_tracktor/experiments/cfgs/tracktor_private.yaml to plot and save images. They are saved inside test_tracktor/output/log/ by default.
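For reference, the entries touched in the steps above would look roughly like this in tracktor_pub_reid.yaml / tracktor_private.yaml; the two paths are placeholders for your own locations, and all other keys are left untouched:

    data_pth: /data/MOT
    obj_detect_weights: /models/deepMOT-Tracktor.pth
    dataset: mot17_train_17
    output_dir: test_tracktor/output/log
    write_images: False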

Training

  • Setup your environment

  • Go to the train_tracktor folder

  • Download the MOT data. The datasets can be downloaded here: MOT17Det, MOT16Labels, MOT16-det-dpm-raw and MOT17Labels.

  • Unzip all the data by executing:

    unzip -d MOT17Det MOT17Det.zip
    unzip -d MOT16Labels MOT16Labels.zip
    unzip -d 2DMOT2015 2DMOT2015.zip
    unzip -d MOT16-det-dpm-raw MOT16-det-dpm-raw.zip
    unzip -d MOT17Labels MOT17Labels.zip
    
  • Set data_pth in train_tracktor/experiments/cfgs/tracktor_full.yaml to your data path (see the example after the run command below).

  • Download the output folder containing the configurations, the model to be fine-tuned, and the DHN pre-trained model:
    output.zip (Google Drive) or
    output.zip (Tencent Cloud)

  • Unzip the output folder and place it under train_tracktor.

  • Run the training code:

python train_tracktor/experiments/scripts/train_tracktor_full.py
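As with testing, the data-path entry in train_tracktor/experiments/cfgs/tracktor_full.yaml would look roughly like this (the path is a placeholder):

data_pth: /data/MOT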

The trained models are saved under train_tracktor/output/log_full/ by default.
The TensorBoard logs are saved under deepmot/logs/train_log/ by default; you can visualize the training process with:

tensorboard --logdir=YourGitFolder/train_tracktor/output/log_full/

Note: TensorBoard requires TensorFlow; if it is missing, install it with:

pip install --upgrade tensorflow

Train DHN

python train_DHN/train_DHN.py --is_cuda --bidirectional

For more parameter details, run:

python train_DHN/train_DHN.py -h

By default, the trained models are saved to train_DHN/output/DHN/ and the log files to train_DHN/log/.

You can visualize the training via TensorBoard:

tensorboard --logdir=YourGitFolder/train_DHN/log/

Note: TensorBoard requires TensorFlow; if it is missing, install it with:

pip install --upgrade tensorflow
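For intuition, the DHN is a learned, differentiable stand-in for the Hungarian algorithm: it takes a pairwise distance matrix between track predictions and ground-truth boxes and outputs a soft assignment matrix with values in [0, 1], so gradients can flow through the matching step. The sketch below illustrates only that input/output interface; it is not the repository's actual architecture, and the layer sizes are invented for the example:

    import torch
    import torch.nn as nn

    class ToyDHN(nn.Module):
        """Toy differentiable matcher: distance matrix -> soft assignment matrix.
        Illustration only; the real train_DHN model differs in detail."""
        def __init__(self, hidden=64):
            super().__init__()
            # a bi-directional GRU scans the row-wise flattened distance matrix
            self.rnn = nn.GRU(input_size=1, hidden_size=hidden,
                              bidirectional=True, batch_first=True)
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, dist):                  # dist: (N, M), entries in [0, 1]
            n, m = dist.shape
            seq = dist.reshape(1, n * m, 1)       # flatten to a length-N*M sequence
            out, _ = self.rnn(seq)                # (1, N*M, 2*hidden)
            logits = self.head(out).reshape(n, m)
            return torch.sigmoid(logits)          # soft assignment in [0, 1]

    # usage: a random 3x4 track-to-ground-truth distance matrix
    soft = ToyDHN()(torch.rand(3, 4))
    print(soft.shape)                             # torch.Size([3, 4])

Trained against the exact Hungarian output (the paper uses a focal loss for this), such a module can then be frozen and plugged into the tracker's training loop.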

Evaluation

You can run test_tracktor/experiments/scripts/evaluate.py to evaluate your tracker's performance.

  • Fill the list predt_pth in the script with the folder(s) where the results (.txt files) are saved (see the example below).
  • Make sure the data path is set correctly.
  • Then run:

python test_tracktor/experiments/scripts/evaluate.py
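For reference, the predt_pth edit inside evaluate.py would look something like this (the folder name below is just the default output location from the testing step):

predt_pth = ['test_tracktor/output/log/']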

Results

MOT17 public detections:

| dataset | MOTA  | MOTP  | FN     | FP    | IDSW | Total Nb. Objs |
|---------|-------|-------|--------|-------|------|----------------|
| train   | 62.5% | 91.7% | 124786 | 887   | 798  | 336891         |
| test    | 53.7% | 77.2% | 247447 | 11731 | 1947 | 564228         |

MOT16 public detections:

| dataset | MOTA  | MOTP  | FN    | FP   | IDSW | Total Nb. Objs |
|---------|-------|-------|-------|------|------|----------------|
| train   | 58.8% | 92.2% | 44711 | 538  | 229  | 110407         |
| test    | 54.8% | 77.5% | 78765 | 2955 | 645  | 182326         |

MOT16/17 private detections:

| dataset | MOTA  | MOTP  | FN    | FP  | IDSW | Total Nb. Objs |
|---------|-------|-------|-------|-----|------|----------------|
| train   | 70.0% | 91.3% | 32513 | 552 | 677  | 112297         |
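These MOTA values follow the standard CLEAR-MOT definition, MOTA = 1 - (FN + FP + IDSW) / (total nb. of objects); e.g., for the MOT17 train split: 1 - (124786 + 887 + 798) / 336891 ≈ 62.5%.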

Note:

  • Results may differ slightly depending on the running environment.


Acknowledgement

Some of the code is modified from, and network pre-trained weights are obtained from, the following repositories:

Single Object Tracker: SiamRPN, Tracktor, Faster-RCNN pytorch implementation.

@inproceedings{Zhu_2018_ECCV,
  title={Distractor-aware Siamese Networks for Visual Object Tracking},
  author={Zhu, Zheng and Wang, Qiang and Li, Bo and Wu, Wei and Yan, Junjie and Hu, Weiming},
  booktitle={European Conference on Computer Vision},
  year={2018}
}

@InProceedings{Li_2018_CVPR,
  title = {High Performance Visual Tracking With Siamese Region Proposal Network},
  author = {Li, Bo and Yan, Junjie and Wu, Wei and Zhu, Zheng and Hu, Xiaolin},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2018}
}

@InProceedings{tracktor_2019_ICCV,
  author = {Bergmann, Philipp and Meinhardt, Tim and Leal{-}Taix{\'e}, Laura},
  title = {Tracking Without Bells and Whistles},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}}

@inproceedings{10.5555/2969239.2969250,
  author = {Ren, Shaoqing and He, Kaiming and Girshick, Ross and Sun, Jian},
  title = {Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks},
  year = {2015},
  publisher = {MIT Press},
  address = {Cambridge, MA, USA},
  booktitle = {Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1},
  pages = {91--99},
  numpages = {9},
  location = {Montreal, Canada},
  series = {NIPS'15}
}

MOT Metrics in Python: py-motmetrics
Appearance Features Extractor: DAN

@article{sun2018deep,
  title={Deep Affinity Network for Multiple Object Tracking},
  author={Sun, ShiJie and Akhtar, Naveed and Song, HuanSheng and Mian, Ajmal and Shah, Mubarak},
  journal={arXiv preprint arXiv:1810.11780},
  year={2018}
}

Training and testing data from:
MOT Challenge: motchallenge

@article{MOT16,
    title = {{MOT}16: {A} Benchmark for Multi-Object Tracking},
    shorttitle = {MOT16},
    url = {http://arxiv.org/abs/1603.00831},
    journal = {arXiv:1603.00831 [cs]},
    author = {Milan, A. and Leal-Taix\'{e}, L. and Reid, I. and Roth, S. and Schindler, K.},
    month = mar,
    year = {2016},
    note = {arXiv: 1603.00831},
    keywords = {Computer Science - Computer Vision and Pattern Recognition}
}


deepmot's Issues

Objects in the training dataset

Is 'Car/Vehicle' one of the classes in your training dataset as well? I am using the MOT17 dataset for my research, but I am not sure whether it includes cars/vehicles.

Questions on the dataset for DHN and training process

Hi, thanks for uploading the training code for DHN!
I have some questions about DHN:

  1. How did you create the train and test split? The paper says you had 114,483 and 17,880 for the training and testing splits respectively, but when I ran your code with batch size 1, the validation and train lengths were 1998 and 11870.
  2. While training, I saw precision and recall steadily increasing, but the weighted accuracy was 0.00% throughout. Could there be a bug in your code?

Problem in training and testing DeepMOT

I need to train DeepMOT on my custom dataset. Is it necessary to train the Deep Affinity Network on the custom dataset? And is there code available for training the Deep Hungarian Network?

Problem about tracking_on_mot.py

Thank you for sharing. However, in tracking_on_mot.py, your implementation of SiamRPN is not parallel, which leads to very slow inference. Could you provide a parallel version? Thank you.

The birth and death process

Hi, thanks for sharing! Could you give some explanation of the code for the birth and death process? The idea described in the paper is simple, but the code looks so complex that I cannot follow it. Could you outline your coding idea in a simple way, e.g., first filter the bboxes by IoU, then use appearance features, ...?

How to generate DHN_data on the custom data?

I want to apply your great work to my private data.
I have run your code on MOT successfully, but I met some difficulties when generating the DHN_data on my private data.
I have read your paper but am still confused.
Could you explain in detail how the DHN_data is generated for MOT, and provide a README for the DHN_data?
Could you please share the code for generating the DHN_data?
Thanks! @yihongXU

Version request

Hi, Xu! Thanks for sharing your work.

Could you please give the versions of Python, PyTorch and torchvision used in this project?
Also, Tracktor (from Phil Bergmann) uses torch=0.3.1 and torchvision=0.2.0. Are those versions compatible with deepmot?

Thanks

How to train DHN?

Hi,

How did you go about training DHN? From my understanding of the paper, you trained it as a standalone module (by itself, without the tracker and the differentiable MOTA and MOTP); is that correct?

Also, according to the paper, you used the focal loss as the supervisory signal for training the DHN. How did you compute the focal loss?

Thank you.

Why DHN?

Hi, thanks for your great work!
One simple question about DHN:

You claim that DHN enables end-to-end training of deep multi-object trackers, but DHN is pretrained and fixed during training.

My question is: why not use Hungarian matching? In my opinion, it always gives exactly the right matching results, and I cannot find a comparison of Hungarian matching with DHN (in terms of speed/performance) in your ablations.
I have also observed that DHN (with two LSTMs) is much slower than Hungarian matching running on the CPU in my project, so I do not think speed is the reason.

Please correct me if I am making mistakes here! :)

ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.

The requirements.txt says that I should install PyTorch 1.3, but when I run test_tracktor/experiments/scripts/tst_tracktor_private.py, I get ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead. After asking on Stack Overflow for help, I tried downgrading PyTorch to 0.4.1, but more errors were raised!
I commented out from torch.utils.ffi import _wrap_function in test_tracktor/src/frcnn/frcnn/nms/_ext/nms/__init__.py, and it raised a new error: ImportError: test_tracktor/src/frcnn/frcnn/nms/_ext/nms/_nms.so: undefined symbol: state.
Something seems to be wrong with _nms.so, but I have no further ideas for fixing this issue.

Best regards!

Request

Hi Yihong Xu, I would like the repository to be updated. Could you add a note on how it works? Good work!

How to restrict the coordinates for new ID generation

First, thanks for your great work and for sharing it.

I want to use your code in my research and would like to restrict the coordinates where new IDs are generated. (Currently, a new ID can be created at any detection position.)

If possible, please tell me how to implement it.

Not able to run the Singularity image

When I try to run the provided Singularity image (tracker.sif), I get the following error:

ERROR : Unknown image format/type: tracker.sif
ABORT : Retval = 255

What can I do?

Difference between DAN and DHN

I notice that you use the DHN in training but DAN in evaluation.
What is the difference, and how does the DHN influence the tracking result?

Problem about plot_results.py

I have tried tracking_on_mot.py and train_mot.py; they both work fine.
But when I run plot_results.py, it only shows "loading parameters..." and produces no output.
What should I do? I only modified curr_path in the .py file.

Thanks a lot!

Colab Notebook

Can anyone please provide a Google Colab / Jupyter notebook for implementation?

Why use DAN in inference instead of DHN?

Given that you have trained SiamRPN + DHN, why don't you use the optimal assignment matrix to get the final result, instead of using the DAN, which has not been trained together with the tracker?

Training of DHN

It seems there is no training code for the DHN model in this project; you only optimize the SOT instead of the DHN:

optimizer = optim.Adam(filter(lambda p: p.requires_grad, sot_tracker.parameters()), lr=args.old_lr)

AttributeError: 'FPN' object has no attribute 'reid_branch'

Hi,
Thanks for your work.
When I run the script $ python train_tracktor/experiments/scripts/train_tracktor_full.py. There is an AttributeError: 'FPN' object has no attribute 'reid_branch', originate from the line 339 in function step_full_reid of train_tracktor/src/tracktor/tracker.py. i.e.,
gt_real_features = self.obj_detect.reid_branch(gt_features)

I check the Class FPN(FPNResNet) in script train_tracktor/src/tracktor/fpn.py,there is no function defined as 'reid_branch'.

Looking forward to your response. Thanks a lot.

Training time?

Thank you for your work; it is the first to use SOT for MOT. I am training on a 1080 Ti, but it is too slow: it takes one day for six epochs. Your paper says it takes 6 hours for 20 epochs on a Titan Xp. Has anything changed in the code?

Not able to launch singularity image

I am getting this error:

FATAL: container creation failed: mount /proc/self/fd/5->/usr/local/var/singularity/mnt/session/rootfs error: can't mount image /proc/self/fd/5: failed to mount squashfs filesystem: invalid argument

Kindly give an example for this statement:

singularity shell --nv --bind :_ tracker.sif

Problem about tracking

Hi! Thanks for your decent work. I have a problem when I run tracking_on_mot.py. Could you tell me what I can do about it?

Detection and tracking of Cars?

Hi,

Can we also perform detection and tracking of cars/vehicles using this model? I have learnt that these objects are included in the MOT17 dataset; please correct me if I am wrong. If they are included, have those object classes been used in training as well? I could not figure this out by going through the train_mot.py code. Thanks in advance for your help.

About training on custom datasets

Hi, thank you very much for your contribution! Could you please share the code used to generate the DHN_data? I would like to reproduce your code, but I am stuck here. I would be very grateful if you could!
