
DanceTrack

DanceTrack is a benchmark for tracking multiple objects in uniform appearance and diverse motion.

DanceTrack provides box and identity annotations. It contains 100 videos: 40 for training (annotations public), 25 for validation (annotations public) and 35 for testing (annotations not public). To evaluate on the test set, please see CodaLab (Old CodaLab). We also have a Project Page for exhibition.


Paper (CVPR2022)

DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion

News

Paper List

Title Intro Description Links
SUSHI Unifying Short and Long-Term Tracking with Graph Hierarchies [Github]
MOTRv2 MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors [Github]
MOT_FCG Multiple Object Tracking from appearance by hierarchically clustering tracklets [Github]
OC-SORT Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking [Github]
StrongSORT StrongSORT: Make DeepSORT Great Again [Github]
MOTR MOTR: End-to-End Multiple-Object Tracking with TRansformer [Github]

Dataset

Download the dataset from Google Drive or Baidu Drive (code:awew).

Organize as follows:

{DanceTrack ROOT}
|-- dancetrack
|   |-- train
|   |   |-- dancetrack0001
|   |   |   |-- img1
|   |   |   |   |-- 00000001.jpg
|   |   |   |   |-- ...
|   |   |   |-- gt
|   |   |   |   |-- gt.txt            
|   |   |   |-- seqinfo.ini
|   |   |-- ...
|   |-- val
|   |   |-- ...
|   |-- test
|   |   |-- ...
|   |-- train_seqmap.txt
|   |-- val_seqmap.txt
|   |-- test_seqmap.txt
|-- TrackEval
|-- tools
|-- ...
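To sanity-check a download against this layout, a small script along these lines can help (a minimal sketch; the function names are ours, and the paths follow the tree above):

```python
from pathlib import Path

def check_sequence(seq_dir: Path) -> list:
    """Return a list of problems found in one DanceTrack sequence folder."""
    problems = []
    if not (seq_dir / "img1").is_dir():
        problems.append(f"{seq_dir.name}: missing img1/")
    if not (seq_dir / "seqinfo.ini").is_file():
        problems.append(f"{seq_dir.name}: missing seqinfo.ini")
    # gt/gt.txt is only published for the train and val splits
    if seq_dir.parent.name in ("train", "val") and not (seq_dir / "gt" / "gt.txt").is_file():
        problems.append(f"{seq_dir.name}: missing gt/gt.txt")
    return problems

def check_split(root: Path, split: str) -> list:
    """Check every dancetrack* sequence folder under one split."""
    problems = []
    for seq_dir in sorted((root / "dancetrack" / split).glob("dancetrack*")):
        problems.extend(check_sequence(seq_dir))
    return problems
```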

We align our annotations with the MOT format, so each line of gt.txt contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, 1, 1, 1
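For illustration, a minimal parser for one such annotation line might look like this (the field names follow the MOT convention; the type and function names are ours):

```python
from typing import NamedTuple

class GTBox(NamedTuple):
    frame: int
    track_id: int
    bb_left: float
    bb_top: float
    bb_width: float
    bb_height: float

def parse_gt_line(line: str) -> GTBox:
    """Parse one gt.txt line: <frame>,<id>,<bb_left>,<bb_top>,<bb_width>,<bb_height>,1,1,1."""
    fields = line.strip().split(",")
    frame, track_id = int(fields[0]), int(fields[1])
    left, top, width, height = map(float, fields[2:6])
    # The trailing "1, 1, 1" flags carry no box information and are ignored here.
    return GTBox(frame, track_id, left, top, width, height)
```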

Evaluation

We use ByteTrack as an example of using DanceTrack. For training details, please see the instruction. We provide the trained models in Google Drive or Baidu Drive (code:awew).

To run evaluation with our provided toolkit, organize the results of the validation set as follows:

{DanceTrack ROOT}
|-- val
|   |-- TRACKER_NAME
|   |   |-- dancetrack000x.txt
|   |   |-- ...
|   |-- ...

where dancetrack000x.txt is the output file for the video sequence dancetrack000x; each line of it contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, -1, -1, -1
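As a reference, a small helper that renders a tracker output line in this format could be written as follows (the function name and the two-decimal precision are our choices, not requirements of the toolkit):

```python
def format_result_line(frame: int, track_id: int,
                       bb_left: float, bb_top: float,
                       bb_width: float, bb_height: float,
                       conf: float) -> str:
    """Render one tracker output line in the MOT-style format expected here."""
    return (f"{frame},{track_id},{bb_left:.2f},{bb_top:.2f},"
            f"{bb_width:.2f},{bb_height:.2f},{conf:.2f},-1,-1,-1")
```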

Then, simply run the evaluation code:

python3 TrackEval/scripts/run_mot_challenge.py --SPLIT_TO_EVAL val  --METRICS HOTA CLEAR Identity  --GT_FOLDER dancetrack/val --SEQMAP_FILE dancetrack/val_seqmap.txt --SKIP_SPLIT_FOL True   --TRACKERS_TO_EVAL '' --TRACKER_SUB_FOLDER ''  --USE_PARALLEL True --NUM_PARALLEL_CORES 8 --PLOT_CURVES False --TRACKERS_FOLDER val/TRACKER_NAME 
Tracker HOTA DetA AssA MOTA IDF1
ByteTrack 47.1 70.5 31.5 88.2 51.9

We also provide a visualization script. Usage is as follows:

python3 tools/txt2video_dance.py --img_path dancetrack --split val --tracker TRACKER_NAME

Competition

Organize the results of test set as follows:

{DanceTrack ROOT}
|-- test
|   |-- tracker
|   |   |-- dancetrack000x.txt
|   |   |-- ...

Each line of dancetrack000x.txt contains:

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, -1, -1, -1

Archive the tracker folder as tracker.zip and submit it to CodaLab. Please note: (1) archive the tracker folder itself, not the bare txt files; (2) the folder name must be tracker.
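A sketch of packaging that satisfies both notes, using only the standard library (the helper name is ours; the "tracker/" prefix is forced regardless of the local folder name so the archive layout is always valid):

```python
import zipfile
from pathlib import Path

def archive_tracker(results_dir: Path, out_zip: Path) -> None:
    """Zip result txt files so the folder "tracker" is the top-level entry,
    as the CodaLab submission expects."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for txt in sorted(results_dir.glob("*.txt")):
            # Every entry is placed under "tracker/" inside the archive.
            zf.write(txt, arcname=f"tracker/{txt.name}")
```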

The return will be:

Tracker HOTA DetA AssA MOTA IDF1
tracker 47.7 71.0 32.1 89.6 53.9

For more detailed metrics, including per-video metrics, click download output from scoring step on CodaLab.

Run the visualization code:

python3 tools/txt2video_dance.py --img_path dancetrack --split test --tracker tracker

Joint-Training

We use joint-training with other datasets to predict mask, pose and depth. CenterNet is provided as an example. For details of joint-training, please see the joint-training instruction. We provide the trained models in Google Drive or Baidu Drive (code:awew).

For mask demo, run

cd CenterNet/src
python3 demo.py ctseg --demo  ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_coco_mask.pth --debug 4 --tracking 
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/ctseg/default/debug --video_name dancetrack000x_mask.avi

For pose demo, run

cd CenterNet/src
python3 demo.py multi_pose --demo  ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_coco_pose.pth --debug 4 --tracking 
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/multi_pose/default/debug --video_name dancetrack000x_pose.avi

For depth demo, run

cd CenterNet/src
python3 demo.py ddd --demo  ../../dancetrack/val/dancetrack000x/img1 --load_model ../models/dancetrack_kitti_ddd.pth --debug 4 --tracking --test_focal_length 640 --world_size 16 --out_size 128
cd ../..
python3 tools/img2video.py --img_file CenterNet/exp/ddd/default/debug --video_name dancetrack000x_ddd.avi

Agreement

  • The annotations of DanceTrack are licensed under a Creative Commons Attribution 4.0 License.
  • The dataset of DanceTrack is available for non-commercial research purposes only.
  • All videos and images of DanceTrack are obtained from the Internet and are not the property of HKU, CMU or ByteDance. These three organizations are not responsible for the content or the meaning of these videos and images.
  • The code of DanceTrack is released under the MIT License.

Acknowledgement

The evaluation metrics and code are from MOT Challenge and TrackEval. The inference code is from ByteTrack. The joint-training code is modified from CenterTrack and CenterNet, where the instance segmentation code is from CenterNet-CondInst. Thanks for their wonderful and pioneering works!

Citation

If you use DanceTrack in your research or wish to refer to the baseline results published here, please use the following BibTeX entry:

@article{peize2021dance,
  title   =  {DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion},
  author  =  {Peize Sun and Jinkun Cao and Yi Jiang and Zehuan Yuan and Song Bai and Kris Kitani and Ping Luo},
  journal =  {arXiv preprint arXiv:2111.14690},
  year    =  {2021}
}

dancetrack's People

Contributors

ifzhang, noahcao, peizesun


dancetrack's Issues

How to obtain the HOTA metric

Hi, how can I get the HOTA metric mentioned in your README on MOT17? Running the code you provided only outputs IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP IDt IDa IDm num_objects.

Better metrics for association.

Nice dataset! As association appears to be a key focus, it would be great if the owners of the dataset considered computing and publishing the ATA metric from localmot on the non-public annotations. It captures association errors better, as it is less correlated with detection performance than HOTA, IDF1 and AssA.

Test server is down

It looks like CodaLab is down and the competition page cannot be opened. Do the maintainers receive more information from CodaLab about the issue?

Cuda out of memory

Hello, when I train the model with the batch size set to 2, I still get a CUDA out-of-memory error.
Can anyone help me?

I tried torch.cuda.empty_cache(), but it is not working for me.

meta data

The width of video val/dancetrack0041 is 1920, not 1440.

Dataset Statistic

Thank you for your excellent work. It seems the motion-pattern code from the dataset analysis is not provided. Could you share this part of the code?

visualization problem

@noahcao Hi! When I run this command in the terminal,
python tools/txt2video_dance.py --img_path dancetrack --split val --tracker yolox_dancetrack_val,
I encounter this error:
cv2.rectangle(img, (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])), color_list[bbox[4]%79].tolist(), thickness=6)
IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices
How should I solve it? Looking forward to your reply.
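For reference, the traceback above is the standard NumPy error for indexing an array with a float. A minimal reproduction and a likely fix, casting the id to int before indexing (our guess from the traceback, not an official patch):

```python
import numpy as np

# A color table like the one in the traceback: 79 random RGB rows.
color_list = np.random.randint(0, 256, size=(79, 3))

# If the txt file is parsed with float(), the track id in bbox[4] is a float.
bbox = [10.0, 20.0, 50.0, 80.0, 3.0]

# color_list[bbox[4] % 79]  # raises IndexError: only integers ... are valid indices
color = color_list[int(bbox[4]) % 79].tolist()  # casting the id to int avoids the error
```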

Other tracking algorithms

Hello. Thank you for your astonishing work!

By the way, would you upload codes for other tracking algorithms (MOTR, TraDes, TransTrack, QDTrack, and so on) described in the paper?

Thank you.

submit test results

I submitted the results on the DanceTrack website, but it keeps showing "Sorry, failed to upload file." I have strictly followed the submission format.

segmentation image size

When I used the test method mentioned above to segment other data, two lines were clearly visible in the output. How can I solve this problem? The size of my input picture is 640*480. At the same time, I use a camera, which makes the segmentation boundary more obvious.
(screenshot: 2022-11-10 20-44-09)

Re-Evaluate QDTrack on Dancetrack

Hello :)

I tried to re-evaluate QDTrack using the weights you provided, but I cannot reproduce the results reported in your paper. I get the following:

HOTA 45.8 / MOTA 83.0 / IDF1 44.8

Can you please tell me what I need to do to reproduce the results?

Best,
Jenny

I submitted results for the DanceTrack benchmark. However, the default config shows BENCHMARK: MOT17. Why is that? Also, if I would like to change the THRESHOLD value to some other value, how can I do that?

Eval Config:
USE_PARALLEL : True
NUM_PARALLEL_CORES : 8
BREAK_ON_ERROR : True
RETURN_ON_ERROR : False
LOG_ON_ERROR : /tmp/codalab/tmp7ocVCp/run/program/TrackEval/error_log.txt
PRINT_RESULTS : True
PRINT_ONLY_COMBINED : False
PRINT_CONFIG : True
TIME_PROGRESS : True
DISPLAY_LESS_PROGRESS : False
OUTPUT_SUMMARY : True
OUTPUT_EMPTY_CLASSES : True
OUTPUT_DETAILED : True
PLOT_CURVES : False

MotChallenge2DBox Config:
PRINT_CONFIG : True
GT_FOLDER : /tmp/codalab/tmp7ocVCp/run/program/dancetrack/test
TRACKERS_FOLDER : /tmp/codalab/tmp7ocVCp/run/input
OUTPUT_FOLDER : /tmp/codalab/tmp7ocVCp/run/output
TRACKERS_TO_EVAL : ['res']
CLASSES_TO_EVAL : ['pedestrian']
BENCHMARK : MOT17
SPLIT_TO_EVAL : test
INPUT_AS_ZIP : False
DO_PREPROC : True
TRACKER_SUB_FOLDER : tracker
OUTPUT_SUB_FOLDER :
TRACKER_DISPLAY_NAMES : None
SEQMAP_FOLDER : None
SEQMAP_FILE : /tmp/codalab/tmp7ocVCp/run/program/dancetrack/test_seqmap.txt
SEQ_INFO : None
GT_LOC_FORMAT : {gt_folder}/{seq}/gt/gt.txt
SKIP_SPLIT_FOL : True

CLEAR Config:
METRICS : ['HOTA', 'CLEAR', 'Identity']
THRESHOLD : 0.5
PRINT_CONFIG : True

Identity Config:
METRICS : ['HOTA', 'CLEAR', 'Identity']
THRESHOLD : 0.5
PRINT_CONFIG : True

How to call hota.py

Hello! Could you explain how to instantiate the HOTA class? You provide the code in hota.py, but the project does not seem to call it anywhere, so I am not sure how it is invoked. I would like to measure HOTA on my own dataset. Looking forward to your reply, thanks!

Inclusion Request: Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking

Dear DanceTrack Team,

I hope this message finds you well. I am writing to request the inclusion of our recently accepted paper, titled "Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking," in the README of DanceTrack.

Our paper has been accepted at the AAAI 2024 conference and presents a novel approach that combines the strengths of commonly used strong cues (i.e., spatial information and appearance information) and newly introduced weak cues (i.e., confidence state and height state).

We believe that the DanceTrack community would benefit from being aware of our work, as it introduces significant advancements in the field. This would ensure that researchers and developers using DanceTrack are aware of the state-of-the-art methods and can benefit from the insights presented in our paper.

Please find below the citation details for our paper:

Title: Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking
Conference: AAAI 2024
Arxiv: https://arxiv.org/abs/2308.00783
GitHub: https://github.com/ymzis69/HybridSORT

Thank you for your attention, and we look forward to your positive response.

Best regards.
