
bytetrack's Introduction


bytetrack's People

Contributors

ak391, callmesora, chirag4798, dumbpy, hanguangxin, iamrajee, ifzhang, johnqczhang, kentaroy47, masterbin-iiau, peizesun, pinto0309, sajjadaemmi, skalskip, snehitvaddi, xiaopeilun


bytetrack's Issues

Train custom dataset!!!

"First, you need to prepare your dataset in COCO format. You can refer to MOT-to-COCO or CrowdHuman-to-COCO. Then, you need to create an Exp file for your dataset. You can refer to the CrowdHuman training Exp file. Don't forget to modify get_data_loader() and get_eval_loader() in your Exp file. Finally, you can train bytetrack on your dataset by running:
python3 tools/train.py -f exps/example/mot/your_exp_file.py -d 8 -b 48 --fp16 -o -c pretrained/yolox_x.pth"
Have you modified the YOLOX source code? Can you provide modified files that can be used for training directly (the training Exp file, get_data_loader() and get_eval_loader())?
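
For reference, a minimal sketch of what such an Exp file can look like (our illustration, not the authors' file; the annotation names and sizes below are assumptions for a one-class dataset):

import os
from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.num_classes = 1          # single-class tracking dataset
        self.depth = 1.33             # YOLOX-X scaling factors
        self.width = 1.25
        self.train_ann = "train.json" # your COCO-format annotation files
        self.val_ann = "val.json"
        self.input_size = (800, 1440)
        self.test_size = (800, 1440)
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

    # get_data_loader() and get_eval_loader() should then be overridden to
    # point the dataset at your data_dir and json_file, following the
    # CrowdHuman training Exp file referenced above.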

Use another detector

Thanks for the amazing work! I was wondering: if I want to use my own detections from another detector, should I keep all of them by setting the confidence threshold and the NMS threshold to 0? I ask because I see that you are using bboxes that are included in other bboxes.
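
On the thresholds: a common setup keeps NMS enabled and uses only a very low confidence floor rather than 0, since BYTE itself splits detections into high- and low-score groups. A sketch of feeding external detections into BYTETracker (our illustration; the Args class just mirrors the command-line flags the tracker reads, and all numbers are placeholders):

import numpy as np
from yolox.tracker.byte_tracker import BYTETracker

class Args:
    # only the fields BYTETracker reads; values are illustrative
    track_thresh = 0.5   # high/low score split
    track_buffer = 30    # frames a lost track is kept alive
    match_thresh = 0.8   # matching threshold for the first association
    mot20 = False

tracker = BYTETracker(Args(), frame_rate=30)

# dets: (N, 5) array of [x1, y1, x2, y2, score] from your own detector
dets = np.array([[100., 50., 180., 260., 0.92],
                 [300., 60., 360., 250., 0.31]], dtype=np.float32)
h, w = 720, 1280  # passing the frame size twice keeps the internal rescaling at 1
online_targets = tracker.update(dets, (h, w), (h, w))
for t in online_targets:
    print(t.track_id, t.tlwh, t.score)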

No ground truth for MOT20-04, skipping.

I want to get MOTA and IDF1 scores on MOT20 by running python tools/track.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -b 1 -d 1 --fp16 --fuse --match_thresh 0.7 --mot20, but it failed.

Whole message:

(open-mmlab) D:\lbq\code\2_tracking\ByteTrack>python tools/track.py -f exps/example/mot/yolox_x_mix_mot20_ch.py -b 1 -d 1 --fp16 --fuse --match_thresh 0.7 --mot20
2021-10-20 17:59:29 | INFO     | __main__:155 - Args: Namespace(batch_size=1, ckpt='pretrained/bytetrack_x_mot20.tar', conf=0.01, devices=1, dist_backend='gloo', dist_url=None, exp_file='exps/example/mot/yolox_x_mix_mot20_ch.py', experiment_name='yolox_x_mix_mot20_ch', fp16=True, fuse=True, local_rank=0, machine_rank=0, match_thresh=0.7, min_box_area=100, mot20=True, name=None, nms=0.7, num_machines=1, opts=[], seed=None, speed=False, test=False, track_buffer=30, track_thresh=0.6, trt=False, tsize=None)
2021-10-20 17:59:30 | INFO     | __main__:165 - Model Summary: Params: 99.00M, Gflops: 985.27
2021-10-20 17:59:30 | INFO     | yolox.data.datasets.mot:39 - loading annotations into memory...
2021-10-20 17:59:30 | INFO     | yolox.data.datasets.mot:39 - Done (t=0.04s)
2021-10-20 17:59:30 | INFO     | pycocotools.coco:92 - creating index...
2021-10-20 17:59:30 | INFO     | pycocotools.coco:92 - index created!
2021-10-20 17:59:31 | INFO     | __main__:188 - loading checkpoint
2021-10-20 17:59:32 | INFO     | __main__:193 - loaded checkpoint done.
2021-10-20 17:59:32 | INFO     | __main__:199 -         Fusing model...
C:\Users\RTX3090\.conda\envs\open-mmlab\lib\site-packages\torch\nn\modules\module.py:390: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
  if param.grad is not None:
 46%|#################################################2                                                        | 2079/4479 [02:31<03:16, 12.22it/s]
2021-10-20 18:02:13 | INFO     | yolox.evaluators.mot_evaluator:39 - save results to ./YOLOX_outputs\yolox_x_mix_mot20_ch\track_results\MOT20-04.txt
 69%|#########################################################################                                 | 3087/4479 [03:47<01:28, 15.75it/s]
2021-10-20 18:03:24 | INFO     | yolox.evaluators.mot_evaluator:39 - save results to ./YOLOX_outputs\yolox_x_mix_mot20_ch\track_results\MOT20-06.txt
 82%|######################################################################################9                   | 3673/4479 [04:23<00:43, 18.73it/s]
2021-10-20 18:03:58 | INFO     | yolox.evaluators.mot_evaluator:39 - save results to ./YOLOX_outputs\yolox_x_mix_mot20_ch\track_results\MOT20-07.txt
100%|##########################################################################################################| 4479/4479 [05:18<00:00,  2.57it/s]
2021-10-20 18:04:51 | INFO     | yolox.evaluators.mot_evaluator:39 - save results to ./YOLOX_outputs\yolox_x_mix_mot20_ch\track_results\MOT20-08.txt
100%|##########################################################################################################| 4479/4479 [05:18<00:00, 14.07it/s]
2021-10-20 18:04:51 | INFO     | yolox.evaluators.mot_evaluator:630 - Evaluate in main process...
2021-10-20 18:05:05 | INFO     | yolox.evaluators.mot_evaluator:659 - Loading and preparing results...
2021-10-20 18:05:09 | INFO     | yolox.evaluators.mot_evaluator:659 - DONE (t=4.00s)
2021-10-20 18:05:09 | INFO     | pycocotools.coco:433 - Running per image evaluation...
creating index...
Evaluate annotation type *bbox*
2021-10-20 18:05:09 | INFO     | pycocotools.coco:433 - index created!
COCOeval_opt.evaluate() finished in 2.42 seconds.
Accumulating evaluation results...
COCOeval_opt.accumulate() finished in 0.18 seconds.
gt_type
gt_files ['datasets/MOT20/train\\MOT20-01\\gt\\gt.txt', 'datasets/MOT20/train\\MOT20-02\\gt\\gt.txt', 'datasets/MOT20/train\\MOT20-03\\gt\\gt.txt', 'datasets/MOT20/train\\MOT20-05\\gt\\gt.txt']
2021-10-20 18:05:12 | INFO     | __main__:220 - Average forward time: 41.49 ms, Average track time: 18.28 ms, Average inference time: 59.77 ms
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-10-20 18:05:12 | INFO     | __main__:237 - Found 4 groundtruths and 4 test files.
2021-10-20 18:05:12 | INFO     | __main__:238 - Available LAP solvers ['lap', 'scipy']
2021-10-20 18:05:12 | INFO     | __main__:239 - Default LAP solver 'lap'
2021-10-20 18:05:12 | INFO     | __main__:240 - Loading files.
         Rcll Prcn GT  MT  PT  ML  FP  FN IDs  FM MOTA MOTP num_objects
OVERALL   NaN  NaN  0 NaN NaN NaN NaN NaN NaN NaN  NaN  NaN           0
         IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs  FM MOTA MOTP IDt IDa IDm num_objects
OVERALL   NaN NaN NaN  NaN  NaN  0  0  0  0  0  0   0   0  NaN  NaN   0   0   0           0
2021-10-20 18:05:29 | WARNING  | __main__:123 - No ground truth for MOT20-04, skipping.
2021-10-20 18:05:29 | WARNING  | __main__:123 - No ground truth for MOT20-06, skipping.
2021-10-20 18:05:29 | WARNING  | __main__:123 - No ground truth for MOT20-07, skipping.
2021-10-20 18:05:29 | WARNING  | __main__:123 - No ground truth for MOT20-08, skipping.
2021-10-20 18:05:29 | INFO     | __main__:248 - Running metrics
2021-10-20 18:05:29 | INFO     | __main__:273 - Completed

[Error occurred when testing]

File "tools/track.py", line 18, in
from yolox.evaluators import MOTEvaluator
ImportError: cannot import name 'MOTEvaluator' from 'yolox.evaluators'

'Namespace' object has no attribute 'cache'

When I tried to train on a custom dataset, creating the Exp file as shown in the tutorial, this error occurred.

File "/mnt/d/PycharmProjects/YOLOX/YOLOX/yolox/core/launch.py", line 98, in launch
main_func(*args)
│ └ (╒══════════════════╤══════════════════════════════════════════════════════════════════════════════════════════════════════...
└ <function main at 0x7f4d8504f9d0>

File "tools/train.py", line 100, in main
trainer.train()
│ └ <function Trainer.train at 0x7f4d867fdc10>
└ <yolox.core.trainer.Trainer object at 0x7f4d85055e20>

File "/mnt/d/PycharmProjects/YOLOX/YOLOX/yolox/core/trainer.py", line 70, in train
self.before_train()
│ └ <function Trainer.before_train at 0x7f4d85031430>
└ <yolox.core.trainer.Trainer object at 0x7f4d85055e20>

File "/mnt/d/PycharmProjects/YOLOX/YOLOX/yolox/core/trainer.py", line 149, in before_train
cache_img=self.args.cache,
│ └ Namespace(batch_size=1, ckpt='pretrained/yolox_s.pth', devices=1, dist_backend='nccl', dist_url=None, exp_file='exps/example/...
└ <yolox.core.trainer.Trainer object at 0x7f4d85055e20>

AttributeError: 'Namespace' object has no attribute 'cache'
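
A likely fix (our assumption, not confirmed in this thread): the trainer reads self.args.cache, but the bundled tools/train.py does not define that flag. Adding the argument the way upstream YOLOX does makes the attribute exist:

# in tools/train.py, next to the other parser.add_argument calls
parser.add_argument(
    "--cache",
    dest="cache",
    default=False,
    action="store_true",
    help="Caching imgs to RAM for fast training.",
)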

debug tools?

Thank you for your amazing work. May I ask what debug tool you use? I use pdb for debugging, but characters appear when pressing each key and cannot be deleted, which is very inconvenient.

Using ByteTrack on other tracker

Hi @ifzhang,
Thanks for another nice work. I have a question:
If I want to use ByteTrack with another tracker, e.g. FairMOT, do I need to train the model again, or can I use an already-trained model?
I have a model that was trained with FairMOT, so my concern is whether I need to retrain it when switching to ByteTrack.

Thank you

Installation error on rtx 3090 and my solution

Dear author,

When I use an RTX 3090, installing PyTorch via 'pip3 install -r requirements.txt' fails.

The 3090 only works with CUDA 11.0 and above, while 'pip3 install -r requirements.txt' brings in a torch build for CUDA 10.2. This causes problems in GPU calls.

Thus, if you use an RTX 3090, install torch and torchvision from the official website first, and then install the other packages.
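
For example, a CUDA 11.1 build can be installed like this (an illustrative command for torch 1.9; check the official site for the versions matching your setup, then install the remaining packages from requirements.txt):

pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html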

I suggest you add this hint to the readme.

Thanks for your excellent work.

mean_state[7] = 0

if self.state != TrackState.Tracked:
mean_state[7] = 0
The Kalman filter tracks the state [cx, cy, r, h, vx, vy, vr, vh]; why is only mean_state[7] set to 0?
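
For context, the two lines come from STrack.predict() in yolox/tracker/byte_tracker.py; a condensed view (the comments are ours, using the state layout from the question):

def predict(self):
    mean_state = self.mean.copy()
    if self.state != TrackState.Tracked:
        # index 7 is vh, the height-velocity component of
        # [cx, cy, r, h, vx, vy, vr, vh]
        mean_state[7] = 0
    self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance)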

demo_track error with a custom dataset, help!

python3 demo_track.py video -f exps/example/custom/yolox_s.py -c YOLOX_outputs/yolox_s/best_ckpt.pth --fp16 --fuse --save_result

2021-10-27 21:24:20.586 | INFO | main:main:298 - Args: Namespace(camid=-1, ckpt='YOLOX_outputs/yolox_s/best_ckpt.pth', conf=None, demo='video', device='gpu', exp_file='exps/example/custom/yolox_s.py', experiment_name='yolox_s', fp16=True, fuse=True, match_thresh=0.8, min_box_area=10, mot20=False, name=None, nms=None, path='/home/xjt/zzh/pitaya_pic/big_6_3_1.MOV', save_result=True, track_buffer=30, track_thresh=0.5, trt=False, tsize=None)

2021-10-27 21:24:21.170 | INFO | main:main:308 - Model Summary: Params: 8.94M, Gflops: 26.64
2021-10-27 21:24:36.146 | INFO | main:main:319 - loading checkpoint
2021-10-27 21:24:37.293 | INFO | main:main:323 - loaded checkpoint done.
2021-10-27 21:24:37.293 | INFO | main:main:326 - Fusing model...

/home/xjt/anaconda3/envs/zzh/lib/python3.8/site-packages/torch/nn/modules/module.py:390: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
if param.grad is not None:
2021-10-27 21:24:38.603 | INFO | main:imageflow_demo:238 - video save_path is ./YOLOX_outputs/yolox_s/track_vis/2021_10_27_21_24_37/big_6_3_1.MOV
2021-10-27 21:24:38.607 | INFO | main:imageflow_demo:248 - Processing frame 0 (100000.00 fps)
Traceback (most recent call last):

File "demo_track.py", line 357, in
main(exp, args)
File "demo_track.py", line 350, in main
imageflow_demo(predictor, vis_folder, current_time, args)
File "demo_track.py", line 257, in imageflow_demo
online_targets = tracker.update(outputs[0], [img_info['height'], img_info['width']], exp.test_size)
File "/home/xjt/zzh/ByteTrack-main/yolox/tracker/byte_tracker.py", line 166, in update
if output_results.shape[1] == 5:
AttributeError: 'NoneType' object has no attribute 'shape'
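
A workaround we would try (not an official patch): outputs[0] is None for frames where the detector finds nothing, so guard the update call in imageflow_demo():

# in demo_track.py, imageflow_demo(): skip frames with no detections
if outputs[0] is not None:
    online_targets = tracker.update(
        outputs[0], [img_info['height'], img_info['width']], exp.test_size
    )
else:
    online_targets = []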

Nano and Tiny

Do you have YOLOX Nano and Tiny models trained on the MOT17 train set, CrowdHuman, ETHZ, and CityPersons? I'm currently testing all the models for a speed/accuracy study, including light models.

If not, is it possible to use the scripts to train Nano and Tiny with the same augmentation and pre-processing steps described in the paper?

demo sample

Hi,

I am confused as to how to use ByteTrack effectively. I would like to use it for video tracking, with my own detector providing the bounding boxes, but I don't know which tracker (under tutorials) to follow for a surveillance use case. Is there a recommended tracker?

Also, if I wanted to include BYTE in my own tracker, where would it go? I am currently using https://github.com/tryolabs/norfair for tracking and I'm curious how to modify it with BYTE (a sketch follows below).
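
The BYTE association itself is detector- and tracker-agnostic: detections are split by score, high-score boxes are matched to the predicted tracks first, and low-score boxes are then matched against whatever tracks are left. A paraphrase of the paper's two-stage idea (our sketch, not runnable against norfair's API; match_by_iou is a hypothetical placeholder for the assignment routine your tracker already has, e.g. Hungarian matching on IoU):

def byte_associate(tracks, detections, high_thresh=0.5):
    high = [d for d in detections if d.score >= high_thresh]
    low = [d for d in detections if d.score < high_thresh]
    # first association: high-score detections vs. all predicted tracks
    matches, unmatched_tracks, unmatched_high = match_by_iou(tracks, high)
    # second association: low-score detections vs. still-unmatched tracks
    matches2, unmatched_tracks2, _ = match_by_iou(unmatched_tracks, low)
    # unmatched low-score boxes are treated as background and dropped;
    # unmatched high-score boxes may start new tracks
    return matches + matches2, unmatched_tracks2, unmatched_high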

Training time is too long

I use python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 3 -b 8 --fp16 -o -c pretrained/yolox_x.pth to train the ablation model (MOT17 half train and CrowdHuman), and the training time is very long.
My devices: RTX 2080 Ti x3, Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz x4.
Here is my log file:

root@ai:/ai/data/ByteTrack-main# python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 3 -b 8 --fp16 -o -c pretrained/yolox_x.pth
2021-11-02 21:20:19.566 | INFO     | yolox.core.launch:launch_by_subprocess:145 - 
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
*****************************************
2021-11-02 21:20:22.017 | INFO     | yolox.core.launch:_distributed_worker:184 - Rank 1 initialization finished.
2021-11-02 21:20:22.022 | INFO     | yolox.core.launch:_distributed_worker:184 - Rank 0 initialization finished.
2021-11-02 21:20:22.027 | INFO     | yolox.core.launch:_distributed_worker:184 - Rank 2 initialization finished.
[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 2 using best-guess GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
2021-11-02 21:20:31 | INFO     | yolox.core.trainer:124 - args: Namespace(batch_size=8, ckpt='pretrained/yolox_x.pth', devices=3, dist_backend='nccl', dist_url=None, exp_file='exps/example/mot/yolox_x_ablation.py', experiment_name='yolox_x_ablation', fp16=True, local_rank=0, machine_rank=0, name=None, num_machines=1, occupy=True, opts=[], resume=False, start_epoch=None)
2021-11-02 21:20:31 | INFO     | yolox.core.trainer:125 - exp value:
╒══════════════════╤════════════════════╕
│ keys             │ values             │
╞══════════════════╪════════════════════╡
│ seed             │ None               │
├──────────────────┼────────────────────┤
│ output_dir       │ './YOLOX_outputs'  │
├──────────────────┼────────────────────┤
│ print_interval   │ 20                 │
├──────────────────┼────────────────────┤
│ eval_interval    │ 5                  │
├──────────────────┼────────────────────┤
│ num_classes      │ 1                  │
├──────────────────┼────────────────────┤
│ depth            │ 1.33               │
├──────────────────┼────────────────────┤
│ width            │ 1.25               │
├──────────────────┼────────────────────┤
│ data_num_workers │ 0                  │
├──────────────────┼────────────────────┤
│ input_size       │ (800, 1440)        │
├──────────────────┼────────────────────┤
│ random_size      │ (18, 32)           │
├──────────────────┼────────────────────┤
│ train_ann        │ 'train.json'       │
├──────────────────┼────────────────────┤
│ val_ann          │ 'val_half.json'    │
├──────────────────┼────────────────────┤
│ degrees          │ 10.0               │
├──────────────────┼────────────────────┤
│ translate        │ 0.1                │
├──────────────────┼────────────────────┤
│ scale            │ (0.1, 2)           │
├──────────────────┼────────────────────┤
│ mscale           │ (0.8, 1.6)         │
├──────────────────┼────────────────────┤
│ shear            │ 2.0                │
├──────────────────┼────────────────────┤
│ perspective      │ 0.0                │
├──────────────────┼────────────────────┤
│ enable_mixup     │ True               │
├──────────────────┼────────────────────┤
│ warmup_epochs    │ 1                  │
├──────────────────┼────────────────────┤
│ max_epoch        │ 80                 │
├──────────────────┼────────────────────┤
│ warmup_lr        │ 0                  │
├──────────────────┼────────────────────┤
│ basic_lr_per_img │ 1.5625e-05         │
├──────────────────┼────────────────────┤
│ scheduler        │ 'yoloxwarmcos'     │
├──────────────────┼────────────────────┤
│ no_aug_epochs    │ 10                 │
├──────────────────┼────────────────────┤
│ min_lr_ratio     │ 0.05               │
├──────────────────┼────────────────────┤
│ ema              │ True               │
├──────────────────┼────────────────────┤
│ weight_decay     │ 0.0005             │
├──────────────────┼────────────────────┤
│ momentum         │ 0.9                │
├──────────────────┼────────────────────┤
│ exp_name         │ 'yolox_x_ablation' │
├──────────────────┼────────────────────┤
│ test_size        │ (800, 1440)        │
├──────────────────┼────────────────────┤
│ test_conf        │ 0.1                │
├──────────────────┼────────────────────┤
│ nmsthre          │ 0.7                │
╘══════════════════╧════════════════════╛
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
2021-11-02 21:20:33 | INFO     | yolox.core.trainer:131 - Model Summary: Params: 99.00M, Gflops: 791.73
2021-11-02 21:20:33 | INFO     | yolox.core.trainer:289 - loading checkpoint for fine tuning
2021-11-02 21:20:37 | WARNING  | yolox.utils.checkpoint:27 - Shape of head.cls_preds.0.weight in checkpoint is torch.Size([80, 320, 1, 1]), while shape of head.cls_preds.0.weight in model is torch.Size([1, 320, 1, 1]).
2021-11-02 21:20:37 | WARNING  | yolox.utils.checkpoint:27 - Shape of head.cls_preds.0.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.0.bias in model is torch.Size([1]).
2021-11-02 21:20:37 | WARNING  | yolox.utils.checkpoint:27 - Shape of head.cls_preds.1.weight in checkpoint is torch.Size([80, 320, 1, 1]), while shape of head.cls_preds.1.weight in model is torch.Size([1, 320, 1, 1]).
2021-11-02 21:20:37 | WARNING  | yolox.utils.checkpoint:27 - Shape of head.cls_preds.1.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.1.bias in model is torch.Size([1]).
2021-11-02 21:20:37 | WARNING  | yolox.utils.checkpoint:27 - Shape of head.cls_preds.2.weight in checkpoint is torch.Size([80, 320, 1, 1]), while shape of head.cls_preds.2.weight in model is torch.Size([1, 320, 1, 1]).
2021-11-02 21:20:37 | WARNING  | yolox.utils.checkpoint:27 - Shape of head.cls_preds.2.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.2.bias in model is torch.Size([1]).
2021-11-02 21:20:37 | INFO     | yolox.data.datasets.mot:39 - loading annotations into memory...
2021-11-02 21:20:41 | INFO     | yolox.data.datasets.mot:39 - Done (t=4.29s)
2021-11-02 21:20:41 | INFO     | pycocotools.coco:88 - creating index...
2021-11-02 21:20:41 | INFO     | pycocotools.coco:88 - index created!
2021-11-02 21:20:44 | INFO     | yolox.core.trainer:148 - init prefetcher, this might take one minute or less...
2021-11-02 21:20:52 | INFO     | yolox.data.datasets.mot:39 - loading annotations into memory...
2021-11-02 21:20:53 | INFO     | yolox.data.datasets.mot:39 - Done (t=0.26s)
2021-11-02 21:20:53 | INFO     | pycocotools.coco:88 - creating index...
2021-11-02 21:20:53 | INFO     | pycocotools.coco:88 - index created!
2021-11-02 21:20:53 | INFO     | yolox.core.trainer:176 - Training start...
2021-11-02 21:20:53 | INFO     | yolox.core.trainer:187 - ---> start train epoch1
2021-11-02 21:21:25 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 20/3672, mem: 9013Mb, iter_time: 1.614s, data_time: 0.850s, total_loss: 9.063, iou_loss: 3.039, l1_loss: 0.000, conf_loss: 3.430, cls_loss: 2.594, lr: 3.708e-09, size: 640, ETA: 5 days, 11:39:28
2021-11-02 21:22:12 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 40/3672, mem: 9600Mb, iter_time: 2.322s, data_time: 1.259s, total_loss: 7.511, iou_loss: 2.473, l1_loss: 0.000, conf_loss: 2.175, cls_loss: 2.863, lr: 1.483e-08, size: 1024, ETA: 6 days, 16:31:56
2021-11-02 21:22:49 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 60/3672, mem: 9600Mb, iter_time: 1.886s, data_time: 0.978s, total_loss: 8.964, iou_loss: 2.844, l1_loss: 0.000, conf_loss: 3.534, cls_loss: 2.586, lr: 3.337e-08, size: 672, ETA: 6 days, 14:18:41
2021-11-02 21:23:23 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 80/3672, mem: 9600Mb, iter_time: 1.657s, data_time: 0.903s, total_loss: 9.546, iou_loss: 3.083, l1_loss: 0.000, conf_loss: 3.739, cls_loss: 2.723, lr: 5.933e-08, size: 960, ETA: 6 days, 8:30:58
2021-11-02 21:24:01 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 100/3672, mem: 9600Mb, iter_time: 1.923s, data_time: 1.024s, total_loss: 7.776, iou_loss: 3.002, l1_loss: 0.000, conf_loss: 2.293, cls_loss: 2.481, lr: 9.271e-08, size: 960, ETA: 6 days, 9:23:06
2021-11-02 21:24:40 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 120/3672, mem: 9600Mb, iter_time: 1.922s, data_time: 1.029s, total_loss: 9.406, iou_loss: 3.129, l1_loss: 0.000, conf_loss: 4.003, cls_loss: 2.274, lr: 1.335e-07, size: 960, ETA: 6 days, 9:56:22
2021-11-02 21:25:17 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 140/3672, mem: 9600Mb, iter_time: 1.860s, data_time: 1.005s, total_loss: 7.926, iou_loss: 3.104, l1_loss: 0.000, conf_loss: 2.596, cls_loss: 2.226, lr: 1.817e-07, size: 736, ETA: 6 days, 9:36:51
2021-11-02 21:25:53 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 160/3672, mem: 9600Mb, iter_time: 1.828s, data_time: 1.028s, total_loss: 8.205, iou_loss: 2.912, l1_loss: 0.000, conf_loss: 2.858, cls_loss: 2.434, lr: 2.373e-07, size: 832, ETA: 6 days, 9:02:10
2021-11-02 21:26:34 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 180/3672, mem: 9600Mb, iter_time: 2.031s, data_time: 1.078s, total_loss: 9.311, iou_loss: 3.174, l1_loss: 0.000, conf_loss: 4.104, cls_loss: 2.033, lr: 3.004e-07, size: 1024, ETA: 6 days, 10:25:29
2021-11-02 21:27:06 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 200/3672, mem: 9600Mb, iter_time: 1.585s, data_time: 0.873s, total_loss: 7.813, iou_loss: 2.607, l1_loss: 0.000, conf_loss: 2.924, cls_loss: 2.281, lr: 3.708e-07, size: 768, ETA: 6 days, 7:53:54
2021-11-02 21:27:40 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 220/3672, mem: 9600Mb, iter_time: 1.686s, data_time: 1.048s, total_loss: 8.038, iou_loss: 3.370, l1_loss: 0.000, conf_loss: 2.905, cls_loss: 1.762, lr: 4.487e-07, size: 800, ETA: 6 days, 6:34:33
2021-11-02 21:28:15 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 240/3672, mem: 9600Mb, iter_time: 1.755s, data_time: 1.005s, total_loss: 7.694, iou_loss: 2.848, l1_loss: 0.000, conf_loss: 3.109, cls_loss: 1.737, lr: 5.340e-07, size: 736, ETA: 6 days, 5:56:45
2021-11-02 21:28:48 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 260/3672, mem: 9600Mb, iter_time: 1.657s, data_time: 0.970s, total_loss: 7.517, iou_loss: 2.833, l1_loss: 0.000, conf_loss: 3.048, cls_loss: 1.636, lr: 6.267e-07, size: 768, ETA: 6 days, 4:47:29
2021-11-02 21:29:16 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 280/3672, mem: 9600Mb, iter_time: 1.421s, data_time: 0.888s, total_loss: 7.343, iou_loss: 2.724, l1_loss: 0.000, conf_loss: 2.876, cls_loss: 1.743, lr: 7.268e-07, size: 768, ETA: 6 days, 2:25:49
2021-11-02 21:29:54 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 300/3672, mem: 9600Mb, iter_time: 1.881s, data_time: 1.031s, total_loss: 7.612, iou_loss: 3.020, l1_loss: 0.000, conf_loss: 3.245, cls_loss: 1.348, lr: 8.343e-07, size: 736, ETA: 6 days, 2:52:48
2021-11-02 21:30:34 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 320/3672, mem: 9600Mb, iter_time: 1.975s, data_time: 1.074s, total_loss: 6.557, iou_loss: 2.583, l1_loss: 0.000, conf_loss: 2.789, cls_loss: 1.186, lr: 9.493e-07, size: 1024, ETA: 6 days, 3:45:15
2021-11-02 21:31:11 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 340/3672, mem: 9600Mb, iter_time: 1.876s, data_time: 1.092s, total_loss: 6.434, iou_loss: 2.880, l1_loss: 0.000, conf_loss: 2.601, cls_loss: 0.952, lr: 1.072e-06, size: 992, ETA: 6 days, 4:02:43
2021-11-02 21:31:47 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 360/3672, mem: 9600Mb, iter_time: 1.785s, data_time: 1.044s, total_loss: 7.719, iou_loss: 3.052, l1_loss: 0.000, conf_loss: 3.379, cls_loss: 1.288, lr: 1.201e-06, size: 960, ETA: 6 days, 3:53:39
2021-11-02 21:32:22 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 380/3672, mem: 9600Mb, iter_time: 1.768s, data_time: 0.969s, total_loss: 8.004, iou_loss: 3.042, l1_loss: 0.000, conf_loss: 4.052, cls_loss: 0.910, lr: 1.339e-06, size: 576, ETA: 6 days, 3:41:08
2021-11-02 21:32:58 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 400/3672, mem: 9600Mb, iter_time: 1.778s, data_time: 1.103s, total_loss: 6.296, iou_loss: 2.513, l1_loss: 0.000, conf_loss: 2.812, cls_loss: 0.971, lr: 1.483e-06, size: 736, ETA: 6 days, 3:32:10
2021-11-02 21:33:34 | INFO     | yolox.core.trainer:250 - epoch: 1/80, iter: 420/3672, mem: 9600Mb, iter_time: 1.829s, data_time: 1.039s, total_loss: 7.636, iou_loss: 3.079, l1_loss: 0.000, conf_loss: 3.628, cls_loss: 0.929, lr: 1.635e-06, size: 960, ETA: 6 days, 3:35:56

Pipeline is very slow

Check my log.txt file attached: I get 0.9 FPS, so an 11-second demo video (nearly 300 frames) takes around 5-7 minutes to finish.
2021-11-01 14:01:50.434 | INFO | main:main:290 - Args: Namespace(camid=0, ckpt='pretrained/bytetrack_x_mot17.pth.tar', conf=None, demo='video', device='gpu', exp_file='exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fp16=True, fuse=True, match_thresh=0.8, min_box_area=10, mot20=False, name=None, nms=None, path='./videos/palace.mp4', save_result=True, track_buffer=30, track_thresh=0.5, trt=False, tsize=None)
/home/mossad/projects/ByteTrack/venv/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
2021-11-01 14:01:51.169 | INFO | main:main:300 - Model Summary: Params: 99.00M, Gflops: 791.73
2021-11-01 14:01:53.548 | INFO | main:main:311 - loading checkpoint
2021-11-01 14:02:00.999 | INFO | main:main:315 - loaded checkpoint done.
2021-11-01 14:02:00.999 | INFO | main:main:318 - Fusing model...
/home/mossad/projects/ByteTrack/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:561: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more information.
if param.grad is not None:
2021-11-01 14:02:01.587 | INFO | main:imageflow_demo:236 - video save_path is ./YOLOX_outputs/yolox_x_mix_det/track_vis/2021_11_01_14_02_01/palace.mp4
2021-11-01 14:02:01.589 | INFO | main:imageflow_demo:246 - Processing frame 0 (100000.00 fps)
2021-11-01 14:02:24.684 | INFO | main:imageflow_demo:246 - Processing frame 20 (0.92 fps)
2021-11-01 14:02:47.751 | INFO | main:imageflow_demo:246 - Processing frame 40 (0.92 fps)
2021-11-01 14:03:11.332 | INFO | main:imageflow_demo:246 - Processing frame 60 (0.91 fps)
2021-11-01 14:03:35.619 | INFO | main:imageflow_demo:246 - Processing frame 80 (0.90 fps)
2021-11-01 14:03:58.876 | INFO | main:imageflow_demo:246 - Processing frame 100 (0.90 fps)
2021-11-01 14:04:21.899 | INFO | main:imageflow_demo:246 - Processing frame 120 (0.90 fps)
2021-11-01 14:04:44.870 | INFO | main:imageflow_demo:246 - Processing frame 140 (0.91 fps)

multi-class track

Can you provide a multi-class multi-object tracking method? Thank you.

Question about MOT17 dataset

Dear Author:

As you have mentioned, you used the full MOT17 for training the mot17 mix_det model.

But I found two issues
(1)

if 'mot' in DATA_PATH and (split != 'test' and not ('FRCNN' in seq)):

If split == 'train' and 'FRCNN' is not in seq, the folder is not included in the training set. (The question is: why are SDP and DPM not used for training?)

(2)

mot_json = json.load(open('datasets/mot/annotations/train_half.json','r'))

This only uses the half train set for mix_det, which is not consistent with using the full MOT17 as you mentioned.

Thanks for your answer

Asking about the improvement from the tracklet interpolation method.

Hello,
Seeing you push MOT to 80+ with the simplest matching approach is really exciting, so I have a question about the paper.
It concerns the improvement brought by tracklet interpolation. This offline matching step indeed helps improve MOTA and IDF1, but can the method still be used in an online tracking setting? And without this offline strategy, would the tracker still show such a large improvement over other trackers?

Weird paths after mix_det

Hello Guys, nice work btw.

I'm training a tiny and nano version, using the mixed data.
I got a lot of errors; after debugging I realized that the Cityscapes paths were cropped at some point and ETHZ was added under a folder called ets.

img = cv2.imread(img_file)

After printing the img_file i got the following paths:

  • ByteTrack/datasets/mix_det/cp_train/tyscapes/images/train/hamburg/hamburg_000000_027304_leftImg8bit.png
  • ByteTrack/datasets/mix_det/ethz_train/ets/ETHZ/eth01/images/image_00000908_0.png

Yes, I did all the steps described in the mix_xx.py file (mkdir, ln -s, ...), and everything ran like a charm except for those weird paths.

After creating those folders I was able to train, but I didn't find where the path is written.

DeepSORT combination

The DeepSORT combination is mentioned in the paper, but I could not find the implementation. Could you please provide it? Thanks, awesome work!

Converting a model to TensorRT

Hello

Thank you for sharing this work, it is truly amazing!

I'm having some trouble converting a model to TensorRT.
I have torch2trt and TensorRT installed, but when I run python3 tools/trt.py -f exps/example/mot/yolox_s_mix_det.py -c pretrained/bytetrack_s_mot17.pth.tar I get the following error:

Warning: Encountered known unsupported method torch.nn.functional.silu
Warning: Encountered known unsupported method torch.nn.functional.has_torch_function_unary
Warning: Encountered known unsupported method torch.nn.functional.silu
Warning: Encountered known unsupported method torch.Tensor.sigmoid
Warning: Encountered known unsupported method torch.Tensor.sigmoid
[TensorRT] ERROR: (Unnamed Layer* 163) [Concatenation]: all concat input tensors must have the same number of dimensions. Input 0 shape: [4,76,136]. Input 1 shape: [1,1,76,136].
2021-10-26 19:03:13.416 | ERROR    | __main__:<module>:74 - An error has been caught in function '<module>', process 'MainProcess' (60595), thread 'MainThread' (139956715054912):
Traceback (most recent call last):

> File "tools/trt.py", line 74, in <module>
    main()
    └ <function main at 0x7f49029db1f0>

  File "tools/trt.py", line 54, in main
    model_trt = torch2trt(
                └ <function torch2trt at 0x7f49129e4ee0>

  File "/usr/local/lib/python3.8/dist-packages/torch2trt-0.1.0-py3.8.egg/torch2trt/torch2trt.py", line 436, in torch2trt

  File "/home/alberto/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
           │             │        └ {}
           │             └ (tensor([[[[1., 1., 1.,  ..., 1., 1., 1.],
           │                         [1., 1., 1.,  ..., 1., 1., 1.],
           │                         [1., 1., 1.,  ..., 1., 1., 1.]...
           └ <bound method YOLOX.forward of YOLOX(
               (backbone): YOLOPAFPN(
                 (backbone): CSPDarknet(
                   (stem): Focus(
                     (conv...

  File "/home/alberto/Desktop/repos/public_repos/ByteTrack/tools/yolox/models/yolox.py", line 46, in forward
    outputs = self.head(fpn_outs)
              │         └ (tensor([[[[-2.6430e-01, -2.1340e-01, -2.3012e-01,  ..., -2.3089e-01,
              │                      -1.6204e-01, -1.7283e-01],
              │                     [-2.433...
              └ YOLOX(
                  (backbone): YOLOPAFPN(
                    (backbone): CSPDarknet(
                      (stem): Focus(
                        (conv): BaseConv(
                          (conv): ...

  File "/home/alberto/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
           │             │        └ {}
           │             └ ((tensor([[[[-2.6430e-01, -2.1340e-01, -2.3012e-01,  ..., -2.3089e-01,
           │                          -1.6204e-01, -1.7283e-01],
           │                         [-2.43...
           └ <bound method YOLOXHead.forward of YOLOXHead(
               (cls_convs): ModuleList(
                 (0): Sequential(
                   (0): BaseConv(
                     (c...

  File "/home/alberto/Desktop/repos/public_repos/ByteTrack/tools/yolox/models/yolo_head.py", line 155, in forward
    x = self.stems[k](x)
        │          │  └ tensor([[[[ 0.0098, -0.2402, -0.1447,  ..., -0.2738, -0.1549,  0.0878],
        │          │              [-0.2671, -0.2767, -0.2177,  ..., -0.2766, ...
        │          └ 1
        └ YOLOXHead(
            (cls_convs): ModuleList(
              (0): Sequential(
                (0): BaseConv(
                  (conv): Conv2d(128, 128, kernel_size=...

  File "/home/alberto/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
           │             │        └ {}
           │             └ (tensor([[[[ 0.0098, -0.2402, -0.1447,  ..., -0.2738, -0.1549,  0.0878],
           │                         [-0.2671, -0.2767, -0.2177,  ..., -0.2766,...
           └ <bound method BaseConv.forward of BaseConv(
               (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
               (bn):...

  File "/home/alberto/Desktop/repos/public_repos/ByteTrack/tools/yolox/models/network_blocks.py", line 51, in forward
    return self.act(self.bn(self.conv(x)))
           │        │       │         └ tensor([[[[ 0.0098, -0.2402, -0.1447,  ..., -0.2738, -0.1549,  0.0878],
           │        │       │                     [-0.2671, -0.2767, -0.2177,  ..., -0.2766, ...
           │        │       └ BaseConv(
           │        │           (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
           │        │           (bn): BatchNorm2d(128, eps=0.001, momen...
           │        └ BaseConv(
           │            (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
           │            (bn): BatchNorm2d(128, eps=0.001, momen...
           └ BaseConv(
               (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
               (bn): BatchNorm2d(128, eps=0.001, momen...

  File "/home/alberto/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
           │             │        └ {}
           │             └ (tensor([[[[ 0.7442,  0.7610,  0.4806,  ...,  0.4861,  0.5323,  0.7519],
           │                         [ 0.4981,  0.1419,  0.0905,  ...,  0.0628,...
           └ <bound method _BatchNorm.forward of BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True)>
  File "/home/alberto/.local/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 168, in forward
    return F.batch_norm(
           │ └ <function batch_norm at 0x7f4919a4aca0>
           └ <module 'torch.nn.functional' from '/home/alberto/.local/lib/python3.8/site-packages/torch/nn/functional.py'>

  File "/usr/local/lib/python3.8/dist-packages/torch2trt-0.1.0-py3.8.egg/torch2trt/torch2trt.py", line 218, in wrapper

  File "/usr/local/lib/python3.8/dist-packages/torch2trt-0.1.0-py3.8.egg/torch2trt/converters/batch_norm.py", line 15, in convert_batch_norm_trt7

  File "/usr/local/lib/python3.8/dist-packages/torch2trt-0.1.0-py3.8.egg/torch2trt/torch2trt.py", line 131, in trt_

ValueError: __len__() should return >= 0

I think the main part is the shape mismatch, [TensorRT] ERROR: (Unnamed Layer* 163) [Concatenation]: all concat input tensors must have the same number of dimensions. Input 0 shape: [4,76,136]. Input 1 shape: [1,1,76,136].

I've installed torch2trt from their GitHub repo and TensorRT through python3 -m pip install nvidia-tensorrt==7.2.2.1. I'm also running CUDA 11.2. I'm not sure if those might be causing issues.

UPDATE:

After updating to the latest CUDA version I was able to run the docker container and convert the model successfully within the container.

colab

Please add a Google Colab notebook for inference.

Large dataset format

Does this support datasets whose annotations can't all fit into memory (like very large COCO annotation files)? For example, PASCAL VOC, the GluonCVMotion format, or webdataset?

Questions on training custom dataset

Thanks for your great work!

I have two questions after training a model on my dataset.

  1. Did you try focal loss? If I want to use focal loss, how can I modify the source code? (A generic sketch follows after this list.)
  2. Objects in my dataset are quite small and the results are not satisfactory. How can I improve the tracking of small objects?
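
On the first question, a generic binary focal loss (Lin et al., 2017) is the usual drop-in replacement for the BCE-based cls/obj terms; where exactly to plug it into yolo_head.py is an assumption on our part, and this sketch is not the authors' recommendation:

import torch
import torch.nn.functional as F

def focal_loss(pred_logits, targets, alpha=0.25, gamma=2.0):
    # standard binary focal loss computed on raw logits
    ce = F.binary_cross_entropy_with_logits(pred_logits, targets, reduction="none")
    p = torch.sigmoid(pred_logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class balancing
    return (alpha_t * (1.0 - p_t) ** gamma * ce).sum()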

Implementation details

Hey there. Thanks for your great work!!!
Here are a few questions about using this algorithm:

  1. Can I just use the class BYTETracker() on top of my own detection results?
  2. If 1 is correct, how can I use the interpolation method in that way?
  3. What is the meaning of '--min-box-area'? I didn't find it in BYTETracker().

The link to the checkpoint

Dear Author,

Thanks for your work.

May I know whether the Google Drive link for the bytetrack_s_mot17 checkpoint is working?

Best Regards,

Craig

Which is the best checkpoint?

Thanks for your amazing work!
I'm using your tracker to process my surveillance videos; however, you offer so many checkpoints that I have no idea which one to use. Could you please give a hint?

What does this error mean?

non-network local connections being added to access control list
unknown flag: --gpus
See 'docker run --help'.
่ฟ่กŒไธ‹้ข็š„ๅ‘ฝไปคๅŽ๏ผŒๅ‡บ็ŽฐไธŠ้ข็š„้”™่ฏฏใ€‚
mkdir -p pretrained && \
mkdir -p YOLOX_outputs && \
xhost +local: && \
docker run --gpus all -it --rm \
-v $PWD/pretrained:/workspace/ByteTrack/pretrained \
-v $PWD/datasets:/workspace/ByteTrack/datasets \
-v $PWD/YOLOX_outputs:/workspace/ByteTrack/YOLOX_outputs \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
bytetrack:latest


Error when running demo_track.py - does not work on CPU

I have created a VM with Linux Ubuntu and installed all dependencies.
I guess the following error is due to having no GPU; it seems the code does not work correctly with device='cpu'.
Do you know how to fix the code so that the pretrained models, with the tracking part, can run on CPU only?

Thanks for your support!

(MyEnv)root@mv:/home/mv/ByteTrack# python3 tools/demo_track.py video -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar --fp16 --fuse --save_result --device=cpu

Matplotlib is building the font cache; this may take a moment.

2021-11-04 | INFO     | __main__:main:298 - Args: Namespace(camid=0, ckpt='pretrained/bytetrack_x_mot17.pth.tar', conf=None, demo='video', device='cpu', exp_file='exps/example/mot/yolox_x_mix_det.py', experiment_name='yolox_x_mix_det', fp16=True, fuse=True, match_thresh=0.8, min_box_area=10, mot20=False, name=None, nms=None, path='./videos/palace.mp4', save_result=True, track_buffer=30, track_thresh=0.5, trt=False, tsize=None)

[W NNPACK.cpp:79] Could not initialize NNPACK! Reason: Unsupported hardware.

/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /opt/conda/conda-bld/pytorch_1623448265233/work/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)

2021-11-04 | INFO     | __main__:main:308 - Model Summary: Params: 99.00M, Gflops: 791.73
2021-11-04 | INFO     | __main__:main:319 - loading checkpoint
2021-11-04 | INFO     | __main__:main:323 - loaded checkpoint done.
2021-11-04 | INFO     | __main__:main:326 - 	Fusing model...

/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py:561: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more information.
  if param.grad is not None:

2021-11-04 | INFO     | __main__:imageflow_demo:240 - video save_path is ./YOLOX_outputs/yolox_x_mix_det/track_vis/2021_11_04/palace.mp4
2021-11-04 | INFO     | __main__:imageflow_demo:250 - Processing frame 0 (100000.00 fps)

Traceback (most recent call last):
  File "tools/demo_track.py", line 357, in <module>
    main(exp, args)
  File "tools/demo_track.py", line 350, in main
    imageflow_demo(predictor, vis_folder, current_time, args)
  File "tools/demo_track.py", line 253, in imageflow_demo
    outputs, img_info = predictor.inference(frame, timer)
  File "tools/demo_track.py", line 166, in inference
    outputs = self.model(img)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/mv/ByteTrack/yolox/models/yolox.py", line 30, in forward
    fpn_outs = self.backbone(x)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/mv/ByteTrack/yolox/models/yolo_pafpn.py", line 93, in forward
    out_features = self.backbone(input)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/mv/ByteTrack/yolox/models/darknet.py", line 169, in forward
    x = self.stem(x)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/mv/ByteTrack/yolox/models/network_blocks.py", line 210, in forward
    return self.conv(x)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/mv/ByteTrack/yolox/models/network_blocks.py", line 54, in fuseforward
    return self.act(self.conv(x))
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 443, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/root/anaconda3/envs/MyEnv/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 440, in _conv_forward
    self.padding, self.dilation, self.groups)

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
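
The mismatch comes from combining --fp16 with --device=cpu: the model is converted to half precision while the CPU input stays float32. The simplest fix is to drop --fp16 on CPU; alternatively, guard the conversion in demo_track.py along these lines (our sketch, not an official patch):

# our sketch: only convert the model to half precision on GPU
if args.device == "gpu":
    model.cuda()
    if args.fp16:
        model = model.half()  # half precision is only exercised on CUDA here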

There was an error when I followed the steps to convert the MOT20 dataset to COCO

Hello, there was an error when I followed the steps to convert the MOT20 dataset to COCO: python3 tools/convert_mot20_to_coco.py

MOT20-01: 429 images
429 ann images
55 78
MOT20-02: 2782 images
2782 ann images
206 295
MOT20-03: 2405 images
2405 ann images
515 715
MOT20-05: 3315 images
3315 ann images
1241 1211
loaded train_half for 4468 images and 519477 samples
MOT20-01: 429 images
429 ann images
70 78
MOT20-02: 2782 images
2782 ann images
244 189
MOT20-03: 2405 images
2405 ann images
761 733
MOT20-05: 3315 images
3315 ann images
1418 1209
loaded val_half for 4463 images and 615137 samples
MOT20-01: 429 images
429 ann images
74 78
MOT20-02: 2782 images
2782 ann images
344 295
MOT20-03: 2405 images
2405 ann images
1046 733
MOT20-05: 3315 images
3315 ann images
2215 1211
loaded train for 8931 images and 1134614 samples
MOT20-04: 2080 images
0 -1

Traceback (most recent call last):
  File "tools/convert_mot20_to_coco.py", line 56, in <module>
    height, width = img.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'
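
Here cv2.imread() returned None for MOT20-04 (a test sequence, so its frames are not under the train split), and the script dereferences .shape without checking. A defensive sketch (ours; the exact path format inside the script is an assumption):

# inside tools/convert_mot20_to_coco.py, before using img.shape (our sketch)
img_path = os.path.join(data_path, seq, 'img1', '{:06d}.jpg'.format(i + 1))
img = cv2.imread(img_path)
if img is None:
    raise FileNotFoundError('cannot read image: {}'.format(img_path))
height, width = img.shape[:2]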

ๆŽจ็†้€Ÿๅบฆๆ…ข

ๆ‚จๅฅฝ๏ผŒๆˆ‘็”จๆ‚จ็š„ไปฃ็ ๅŽป่ท‘็š„ๆ—ถๅ€™๏ผŒๅ‘็ŽฐๆŽจ็†้€Ÿๅบฆๅนถๆฒกๆœ‰ๆ‚จ่ฏดๅพ—้‚ฃไนˆๅฟซ๏ผŒyoloxๆŽจ็†ไธ€ๅผ ๅ›พ็‰‡้œ€่ฆ100msๅทฆๅณ๏ผŒ่ฏท้—ฎๆ˜ฏไป€ไนˆๅŽŸๅ› ๅ‘ข

setup.py develop fails

Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc -I/usr/lib/python3/dist-packages/torch/include -I/usr/lib/python3/dist-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3/dist-packages/torch/include/TH -I/usr/lib/python3/dist-packages/torch/include/THC -I/usr/include/python3.8 -c -c /home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.cpp -o /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.o 
c++ -MMD -MF /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc -I/usr/lib/python3/dist-packages/torch/include -I/usr/lib/python3/dist-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3/dist-packages/torch/include/TH -I/usr/lib/python3/dist-packages/torch/include/THC -I/usr/include/python3.8 -c -c /home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.cpp -o /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/vision.cpp:1:
/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.h:4:10: fatal error: pybind11/numpy.h: No such file or directory
    4 | #include <pybind11/numpy.h>
      |          ^~~~~~~~~~~~~~~~~~
compilation terminated.
[2/2] c++ -MMD -MF /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc -I/usr/lib/python3/dist-packages/torch/include -I/usr/lib/python3/dist-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3/dist-packages/torch/include/TH -I/usr/lib/python3/dist-packages/torch/include/THC -I/usr/include/python3.8 -c -c /home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.cpp -o /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
FAILED: /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.o 
c++ -MMD -MF /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.o.d -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc -I/usr/lib/python3/dist-packages/torch/include -I/usr/lib/python3/dist-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3/dist-packages/torch/include/TH -I/usr/lib/python3/dist-packages/torch/include/THC -I/usr/include/python3.8 -c -c /home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.cpp -o /home/ubuntu/SOUTHCOM/ByteTrack/build/temp.linux-x86_64-3.8/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1013"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++14
In file included from /home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.cpp:2:
/home/ubuntu/SOUTHCOM/ByteTrack/yolox/layers/csrc/cocoeval/cocoeval.h:4:10: fatal error: pybind11/numpy.h: No such file or directory
    4 | #include <pybind11/numpy.h>
      |          ^~~~~~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/torch/utils/cpp_extension.py", line 1667, in _run_ninja_build
    subprocess.run(
  File "/usr/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 54, in <module>
    setuptools.setup(
  File "/usr/local/lib/python3.8/dist-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/develop.py", line 136, in install_for_development
    self.run_command('build_ext')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/usr/lib/python3/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/usr/lib/python3/dist-packages/torch/utils/cpp_extension.py", line 708, in build_extensions
    build_ext.build_extensions(self)
  File "/usr/lib/python3/dist-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
    _build_ext.build_ext.build_extensions(self)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_ext.py", line 196, in build_extension
    _build_ext.build_extension(self, ext)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 528, in build_extension
    objects = self.compiler.compile(sources,
  File "/usr/lib/python3/dist-packages/torch/utils/cpp_extension.py", line 529, in unix_wrap_ninja_compile
    _write_ninja_file_and_compile_objects(
  File "/usr/lib/python3/dist-packages/torch/utils/cpp_extension.py", line 1354, in _write_ninja_file_and_compile_objects
    _run_ninja_build(
  File "/usr/lib/python3/dist-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension


ls /home/ubuntu/.local/lib/python3.8/site-packages/pybind11/include/pybind11/
attr.h         cast.h         common.h       detail/        embed.h        functional.h   iostream.h     operators.h    pybind11.h     stl/           stl.h          
buffer_info.h  chrono.h       complex.h      eigen.h        eval.h         gil.h          numpy.h        options.h      pytypes.h      stl_bind.h    

So pybind11/numpy.h does exist under my user site-packages; I'm not sure what the problem is, other than the build not searching that include path.
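One possible workaround (a sketch, not a verified fix): the failing c++ command in the log lists no pybind11 directory among its -I flags, so only headers on the default search path are found. If the extension is declared in setup.py, adding pybind11's include directory explicitly should make the user-site copy visible. The extension name and source list below just mirror the build log and are assumptions about how the repo's setup.py is structured:

```python
# sketch of a setup.py fragment -- assumes the C++ extension is built via
# torch.utils.cpp_extension, as the flags in the build log suggest
import pybind11
from torch.utils.cpp_extension import CppExtension

ext_modules = [
    CppExtension(
        name="yolox._C",  # matches -DTORCH_EXTENSION_NAME=_C in the log
        sources=[
            "yolox/layers/csrc/vision.cpp",
            "yolox/layers/csrc/cocoeval/cocoeval.cpp",
        ],
        # the key line: put the user-site pybind11 headers on the include path
        include_dirs=[pybind11.get_include()],
    )
]
```

If setup.py already calls pybind11.get_include(), check that the pybind11 imported by the interpreter running the build is the one under ~/.local — a mismatch between interpreters (system dist-packages vs. user site-packages) is another common cause of this error.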

Why does training time increase over time?

I started at about 30 minutes per epoch, and by epoch 71/80 it is taking 8 hours per epoch. Could this be related to caching or something similar?

The machine I'm using is dedicated to this training; no other processes are running.

The config is YOLOX-Tiny at 416x416, 80 epochs, 554 iterations per epoch, FP16, on a Tesla T4.

[Feature Request] Expanded TensorBoard logging

Hi, I'd like to request additional TensorBoard logging if/when you happen to have time (a rough sketch of what this could look like follows the list):

  • overall loss and components
  • gpu memory
  • sample of training images with annotations
  • sample of validation images with annotations
  • hparams
  • PR curve
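For reference, most of these fit the stock torch.utils.tensorboard API. A rough sketch of what the first few items could look like — the loss_dict and images names are assumptions about what a YOLOX-style trainer has at hand each iteration, not the repo's actual variables:

```python
# minimal logging sketch using only torch.utils.tensorboard
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="YOLOX_outputs/tensorboard")

def log_training_step(step, loss_dict, images=None):
    # overall loss and its components (total/iou/conf/cls/l1, ...)
    for name, value in loss_dict.items():
        writer.add_scalar(f"train/{name}", float(value), step)
    # GPU memory in MiB
    if torch.cuda.is_available():
        writer.add_scalar("sys/gpu_mem_mib",
                          torch.cuda.max_memory_allocated() / 2**20, step)
    # a few training images; expects an (N, C, H, W) tensor scaled to [0, 1]
    if images is not None:
        writer.add_images("train/samples", images[:4], step)

# hparams are usually logged once, e.g. at the end of training:
# writer.add_hparams({"lr": 0.01, "depth": 0.33}, {"metric/ap50": 0.0})
```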

An error has been caught in function 'launch'

Hello, when I run "python3 tools/train.py -f exps/example/mot/yolox_x_ablation.py -d 1 -b 2 --fp16 -o -c pretrained/yolox_x.pth", I get the following error. How should I solve it?

2021-11-05 17:35:18 | INFO | yolox.core.trainer:131 - Model Summary: Params: 99.00M, Gflops: 791.73
2021-11-05 17:35:19 | ERROR | yolox.core.launch:90 - An error has been caught in function 'launch', process 'MainProcess' (326996), thread 'MainThread' (140436361090880):
Traceback (most recent call last):

File "tools/train.py", line 121, in
args=(exp, args),
โ”‚ โ”” Namespace(batch_size=2, ckpt='pretrained/yolox_x.pth', devices=1, dist_backend='nccl', dist_url=None, exp_file='exps/example/...
โ”” โ•’โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•คโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•...

File "/media//ByteTrack/yolox/core/launch.py", line 90, in launch
main_func(*args)
โ”‚ โ”” (โ•’โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•คโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•...
โ”” <function main at 0x7fb950edf560>

File "tools/train.py", line 100, in main
trainer.train()
โ”‚ โ”” <function Trainer.train at 0x7fb950edf5f0>
โ”” <yolox.core.trainer.Trainer object at 0x7fb950edacd0>

File "/media/ByteTrack/yolox/core/trainer.py", line 70, in train
self.before_train()
โ”‚ โ”” <function Trainer.before_train at 0x7fb9506267a0>
โ”” <yolox.core.trainer.Trainer object at 0x7fb950edacd0>

File "/media/ByteTrack/yolox/core/trainer.py", line 133, in before_train
model.to(self.device)
โ”‚ โ”‚ โ”‚ โ”” 'cuda:0'
โ”‚ โ”‚ โ”” <yolox.core.trainer.Trainer object at 0x7fb950edacd0>
โ”‚ โ”” <function Module.to at 0x7fb95ec67c20>
โ”” YOLOX(
(backbone): YOLOPAFPN(
(backbone): CSPDarknet(
(stem): Focus(
(conv): BaseConv(
(conv): ...

Use my own detection model

Hi
Thanks for sharing this project.
If I want to use my own detection model with your tracker, what/where is the main entry point for adapting your code to replace the YOLOX detection model?
Do I need to retrain everything, or can I inject my detections into your pretrained model for inference?
Thanks
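For what it's worth, the tracking step itself doesn't require retraining: BYTETracker consumes only boxes and scores, so any detector can feed it. A minimal sketch, assuming the BYTETracker.update() signature in yolox/tracker/byte_tracker.py (an (N, 5) array of [x1, y1, x2, y2, score] plus the image sizes); the TrackerArgs class below is a hypothetical stand-in for the usual argparse namespace:

```python
import numpy as np
from yolox.tracker.byte_tracker import BYTETracker

class TrackerArgs:          # hypothetical minimal config; BYTETracker reads these fields
    track_thresh = 0.5      # high-score detection threshold
    track_buffer = 30       # frames to keep lost tracks alive
    match_thresh = 0.8      # IoU matching threshold
    mot20 = False

tracker = BYTETracker(TrackerArgs(), frame_rate=30)

# per frame: detections from *any* detector as (N, 5) [x1, y1, x2, y2, score]
dets = np.array([[100., 120., 180., 300., 0.92],
                 [400., 150., 470., 330., 0.35]], dtype=np.float32)
img_h, img_w = 720, 1280

# passing the same (h, w) for img_info and img_size makes the tracker's
# internal rescaling a no-op, i.e. the boxes are assumed to already be
# in frame coordinates
online_targets = tracker.update(dets, (img_h, img_w), (img_h, img_w))
for t in online_targets:
    print(t.track_id, t.tlwh, t.score)
```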

RuntimeError: /onnxruntime_src/onnxruntime/core/platform/posix/env.cc:142

Traceback (most recent call last):
File "deploy/ONNXRuntime/onnx_inference.py", line 159, in <module>
predictor = Predictor(args)
File "deploy/ONNXRuntime/onnx_inference.py", line 79, in __init__
self.session = onnxruntime.InferenceSession(args.model)
File "/home/user/.local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/home/user/.local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 310, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
RuntimeError: /onnxruntime_src/onnxruntime/core/platform/posix/env.cc:142 onnxruntime::{anonymous}::PosixThread::PosixThread(const char*, int, unsigned int (*)(int, Eigen::ThreadPoolInterface*), Eigen::ThreadPoolInterface*, const onnxruntime::ThreadOptions&) pthread_setaffinity_np failed
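This pthread_setaffinity_np failure is typically seen when the process isn't allowed to pin threads to cores (e.g. inside a container or on a host with restricted CPU affinity). A possible workaround, under that assumption, is to cap the session's thread pools so onnxruntime doesn't try to set affinities; the model path below is just a placeholder:

```python
# sketch: limit onnxruntime's own thread pools to sidestep affinity pinning
import onnxruntime

opts = onnxruntime.SessionOptions()
opts.intra_op_num_threads = 1   # avoid spawning/pinning extra worker threads
opts.inter_op_num_threads = 1
session = onnxruntime.InferenceSession("bytetrack_s.onnx", sess_options=opts)
```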
