
fcot's Introduction

Fully Convolutional Online Tracking

The official implementation of FCOT using PyTorch.

[paper] [visualization] [model] [raw results]

[FCOT framework overview figure]

News

  • [2020.7.30] Code and models are released for the FCOT tracker.
  • [2020.11.29] Known issue: the results on VOT2018 are currently not reproducible; we will fix this soon.
  • Notes: The tracking speed of 47 FPS is measured on the GOT-10k dataset with a single 2080Ti GPU. Because the online training settings differ across datasets, the tracking speed on other datasets may vary. Besides the GPU, the evaluation platform also makes a difference.

Overview

In this repo, we provide the pretrained model, training and inference code for FCOT, as well as integrated evaluation APIs. You can easily test the tracker on a set of datasets, including VOT2018, TrackingNet, GOT-10k, LaSOT, OTB100, UAV123 and NFS, and use the integrated evaluation APIs to evaluate the tracking results (except for TrackingNet and GOT-10k, which must be evaluated on the official servers).

Installation

Please refer to INSTALL.md for installation instructions. We recommend using the install script. Before running it, make sure conda is installed, along with Python 3.7 and CUDA 10.0. Our platform is Ubuntu 18.04.

./install.sh YOUR_CONDA_INSTALL_PATH ENVIRONMENT_NAME

Training

We use LaSOT, GOT-10k, TrackingNet and COCO to train FCOT. Before running the training scripts, download the datasets and set the correct dataset paths in ltr/admin/local.py. Also remember to download the pretrained DiMP-50 model, which is used to initialize the backbone and the classification-18 branch of FCOT. Then switch to your conda environment with conda activate $YOUR_CONDA_ENVIRONMENT. The training scripts can be found in the bash folder. We use the following two strategies to train FCOT (the paper reports results of the first strategy).

  • 3-stage training: First train the backbone, the regression branch (except for the regression optimizer) and the classification-72 branch for 70 epochs. Then freeze the trained modules and train the regression optimizer for 5 epochs. Finally, train the classification-18 branch for 25 epochs. You can use the following commands to train FCOT with this strategy.
cd bash
./train_fcot_3stages.sh
  • 2-stage training: First train the backbone, the regression branch (except for the regression optimizer), the classification-72 branch and the classification-18 branch jointly. Then freeze the trained modules and train the regression optimizer for 5 epochs.
cd bash
./train_fcot_2stages.sh

Note:

  • You should set the parameters in the scripts. WORKSPACE_STAGE* is the path where the trained models and tensorboard logs are saved. --devices_id specifies the GPUs used for training. --batch_size is the total batch size across all GPUs. In our experiments, we use eight NVIDIA 2080Ti GPUs and set the batch size to 40. More parameters can be set in ltr/train_settings/fcot/fcot.py. The results of the two training strategies are discussed in the Results section. A hedged sketch of the dataset-path settings in ltr/admin/local.py follows.
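For reference, the dataset paths in ltr/admin/local.py follow the usual pytracking EnvironmentSettings convention. The sketch below only illustrates the entries mentioned above; the attribute names are assumptions, so verify them against the template that the framework generates in your own checkout.

# ltr/admin/local.py -- illustrative sketch only; attribute names are assumed
# from the pytracking convention and may differ slightly in this repo.
class EnvironmentSettings:
    def __init__(self):
        self.workspace_dir = '/path/to/workspace'          # default location for checkpoints and logs
        self.lasot_dir = '/path/to/LaSOT'
        self.got10k_dir = '/path/to/GOT-10k/train'
        self.trackingnet_dir = '/path/to/TrackingNet'
        self.coco_dir = '/path/to/COCO'
        self.pretrained_networks = '/path/to/pretrained'   # e.g. where the DiMP-50 checkpoint is stored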

Test and evaluation

In the pytracking directory, you can test trackers on a set of datasets and use integrated evaluation APIs to evaluate the tracking results.

1. Run the tracker on a set of datasets

In this repo, you can run the tracker on a set of datasets, including VOT2018, GOT-10k, TrackingNet, LaSOT, OTB100, UAV123 and NFS. Before running the tracking scripts, set the correct dataset paths in pytracking/evaluation/local.py. The trained model should be placed under the network_path you set in local.py, and its file name should be set as params.net in the inference settings file under parameter/fcot (a sketch of the relevant local.py entries follows the commands below). You can use the scripts under bash to run FCOT on a dataset, e.g. OTB, like this:

cd bash
./run_fcot_on_otb.sh

See the scripts under bash for the other supported datasets.
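As with training, the entries in pytracking/evaluation/local.py follow the pytracking convention. The sketch below only illustrates the fields referenced in this section (network_path, results_path, packed_results_path and the dataset paths); the attribute names are assumptions, so check the template created by create_default_local_file() in your own checkout.

# pytracking/evaluation/local.py -- illustrative sketch only; attribute names
# are assumed from the pytracking convention.
class EnvironmentSettings:
    def __init__(self):
        self.network_path = '/path/to/networks'            # put the trained FCOT checkpoint here
        self.results_path = '/path/to/tracking_results'    # raw results are written to <results_path>/fcot
        self.packed_results_path = '/path/to/packed'       # zip files for GOT-10k / TrackingNet submission
        self.otb_path = '/path/to/OTB100'
        self.lasot_path = '/path/to/LaSOT'
        self.got10k_path = '/path/to/GOT-10k/test'

The checkpoint file name itself is what goes into params.net in the settings file under parameter/fcot.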

2. Evaluate the tracking results on datasets

In this repo, we integrate evaluation APIs for current single object tracking datasets, including VOT2018, LaSOT, OTB100, UAV123 and NFS. Put the tracking results under results_path/fcot, where results_path is the path you set in local.py. You can use the scripts under bash to evaluate the results, e.g. on OTB like this:

cd bash
./eval_fcot_on_otb.sh

See the scripts under bash for evaluating on the other datasets.
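For reference, the success score reported by this evaluation is the overlap success rate averaged over IoU thresholds between 0 and 1 (the repo's implementation lives in pytracking/utils/vot_utils/statistics.py). The snippet below is a minimal, self-contained illustration of that metric under the usual [x, y, w, h] box convention, not the repo's exact code.

import numpy as np

def box_iou(a, b):
    # a, b: (N, 4) arrays of [x, y, w, h] boxes for N frames
    x1 = np.maximum(a[:, 0], b[:, 0])
    y1 = np.maximum(a[:, 1], b[:, 1])
    x2 = np.minimum(a[:, 0] + a[:, 2], b[:, 0] + b[:, 2])
    y2 = np.minimum(a[:, 1] + a[:, 3], b[:, 1] + b[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    union = a[:, 2] * a[:, 3] + b[:, 2] * b[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_auc(gt, pred):
    # Fraction of frames whose IoU exceeds each threshold, averaged over the
    # thresholds in [0, 1] -- the "success" (AUC) number used by OTB and LaSOT.
    thresholds = np.arange(0, 1.05, 0.05)
    overlaps = box_iou(np.asarray(gt, dtype=np.float64), np.asarray(pred, dtype=np.float64))
    return float(np.mean([np.mean(overlaps > t) for t in thresholds]))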

For GOT-10k and TrackingNet, the results must be evaluated on the official servers. We provide tools to pack the tracking results into a zip file in the required submission format. As before, put the tracking results under results_path/fcot, then use the following script to pack the TrackingNet results:

cd bash
./pack_fcot_results_on_tn.sh

The packed zip file can be found under the packed_results_path you set in local.py.

Results

The raw experimental results reported in the paper on the VOT2018, GOT-10k, TrackingNet, LaSOT, OTB100, UAV123 and NFS benchmarks can be found at Google Drive or Baidu Drive (extraction code: 4vtg). The evaluation results of the models trained with the released code are listed below. FCOT-2s and FCOT-3s are trained with the two strategies described in Training, based on the released code. The row labeled Original gives the results reported in the paper; that model was trained with our older code using the first strategy. FCOT achieves a speed of 47 FPS on the GOT-10k dataset using an NVIDIA RTX 2080Ti (around 44 FPS on an NVIDIA GTX 1080Ti).

Model           VOT18 EAO/ROB   OTB100 AUC/PREC   NFS AUC   UAV123 AUC   LaSOT AUC/NP   TN AUC   GOT-10k AO
DiMP-50         0.440/0.153     68.4/89.4         61.9      64.3         56.9/65.0      74.0     61.1
Original        0.508/0.108     69.3/91.3         63.2      65.4         56.9/67.8      75.1     64.0
FCOT-3s (new)   0.501/0.098     70.2/92.5         63.4      65.3         56.7/67.3      75.0     -
FCOT-2s (new)   0.461/0.126     -                 -         -            57.7/68.5      75.3     -

Acknowledgments

  • pytracking - The implementation of FCOT training and tracking is based on this framework.
  • pysot - Tools to run trackers on VOT2018.
  • pysot-toolkit - Evaluation APIs for current single object tracking datasets.

Contributors

Citation

Please consider citing our paper in your publications if the project helps your research.

@InProceedings{fcot2020,
    title={Fully Convolutional Online Tracking},
    author={Yutao Cui and Cheng Jiang and Limin Wang and Gangshan Wu},
    booktitle={arXiv preprint arXiv:2004.07109},
    year={2020}
}


fcot's Issues

error: can't import name region

Thank you for your work.
After running the command
python -c "from pytracking.evaluation.environment import create_default_local_file; create_default_local_file()"
an error occurred: can't import name region.
The same error occurs when running python run_video.py fcot fcot_lasot video_path:
from the file pytracking/utils/vot_utils/__init__.py, can't import name region.
How can I get rid of this problem?

question

Hello, I have read about your algorithm and I think it is very innovative. I want to reproduce it, but I am running into some problems. Could you please share your contact information, such as WeChat or QQ?

About eval_fcot_on_otb.sh

Hi, thanks for your excellent work. I ran into problems when running eval_fcot_on_otb.sh. It fails with the following error message:

Traceback (most recent call last):
File "/home/xu/.conda/envs/fcot/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "../pytracking/benchmarks/ope_benchmark.py", line 52, in eval_success
success_ret_[video.name] = success_overlap(gt_traj, tracker_traj, n_frame)
File "/home/xu/.conda/envs/fcot/lib/python3.7/site-packages/numba/core/dispatcher.py", line 414, in _compile_for_args
error_rewrite(e, 'typing')
File "/home/xu/.conda/envs/fcot/lib/python3.7/site-packages/numba/core/dispatcher.py", line 357, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
non-precise type array(pyobject, 0d, C)
During: typing of argument at ../pytracking/utils/vot_utils/statistics.py (104)

File "utils/vot_utils/statistics.py", line 104:
def success_overlap(gt_bb, result_bb, n_frame):
thresholds_overlap = np.arange(0, 1.05, 0.05)
^

"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "run_evaluation.py", line 52, in
tracker_params), desc='eval success', total=len(tracker_params), ncols=100):
File "/home/xu/.conda/envs/fcot/lib/python3.7/site-packages/tqdm/std.py", line 1167, in iter
for obj in iterable:
File "/home/xu/.conda/envs/fcot/lib/python3.7/multiprocessing/pool.py", line 748, in next
raise value
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
non-precise type array(pyobject, 0d, C)
During: typing of argument at ../pytracking/utils/vot_utils/statistics.py (104)

File "utils/vot_utils/statistics.py", line 104:
def success_overlap(gt_bb, result_bb, n_frame):
thresholds_overlap = np.arange(0, 1.05, 0.05)

How can I fix this? I don't understand the error message.
If it was not right to open an issue for my problem, I apologize.
I just didn't know how to help myself anymore.

About TrackingNet training dataset

Did you use the full TrackingNet dataset for training in your code?
And if I want to train your code, do I have to download the full 1 TB dataset first?
