
SwinTrack's People

Contributors

litinglin


SwinTrack's Issues

About Model Saving

In which path can I find the saved model? In the out_dir path, I can only find .wandb files, not .pth files.

samples_per_epoch:131072

The initial value of samples_per_epoch in run.yaml is set to 131072. When training on the GOT-10k dataset alone, do I need to change this value to 9335?

training time and batchsize

Hi, thanks for your work! How long does the SwinTrack training take, and what batch size do you use?

Demo file

Hello,

Thank you for the interesting work. Do you have a demo file to check the results of different models on videos? It would be really helpful.

Best

Input resolution

Dear authors:
I noticed that the pretrain_img_size of Swin-Transformer-Tiny is only 224, and the input resolution of your SwinTrack project is also 224.
My question: could I increase the input resolution to 640 and keep Swin-Transformer-Tiny as the backbone network?
Thanks, and I look forward to your reply!

about LaSOT

Hi, thanks for your work! Could you please provide the LaSOT training_set.txt and testing_set.txt files?

error of evaluation

When I evaluate your model on GOT-10k, I get this error on Windows (screenshot attached). Could you tell me the reason?

Training results

When training the tiny model on GOT-10k with an RTX 3090 and a batch size of 32, the accuracy already reached 68 at epoch 30, and then training started to overfit. Is this normal? I see that the paper trains for 300 epochs, with the learning rate decayed at epoch 210...

eta<0 in evaluation

When I evaluate the model on a test set such as TrackingNet, it shows eta < 0 and some errors occur (screenshot eval-error_trackingnet attached).

Swin_backbone

Excuse me, is there any difference between this and STARK? As far as I know, the author of STARK has already verified the Swin backbone and concatenation.

Unable to reproduce the values on LaSOT when using the provided raw results

Hi guys

Thanks for the nice work. The numerical results look great. I wanted to compare your method with recent trackers in the same success plot. Unfortunately such a comparison is missing in the paper. Hence, I tried to do this myself
but I can't reproduce your numbers whereas I get the correct numbers for KeepTrack and STARK. Maybe I am doing something wrong since your output bounding box format is different from the ones I have for KeepTrack and STARK.

I saw that you guys use an xyxy format and seem to have a -0.5 offset (not sure why though), whereas the other two use an xywh format. In order to convert xyxy to xywh I used w = x2-x1+1 and h = y2-y1+1, and added +0.5 to x and y. It seems that the first boxes, corresponding to the initial box, agree between all trackers using this conversion.
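A minimal sketch of the conversion described above (a hypothetical helper, assuming NumPy arrays of boxes; the -0.5 offset and the +1 in width/height are taken from the description, not verified against the repo):

    import numpy as np

    def xyxy_to_xywh(boxes_xyxy: np.ndarray) -> np.ndarray:
        # Convert (N, 4) boxes from [x1, y1, x2, y2] to [x, y, w, h] as described above.
        x1, y1, x2, y2 = boxes_xyxy.T
        x = x1 + 0.5            # undo the -0.5 offset mentioned above
        y = y1 + 0.5
        w = x2 - x1 + 1         # width/height with the +1 convention
        h = y2 - y1 + 1
        return np.stack([x, y, w, h], axis=1)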

[attached: success plot]

I created the success plots using pytracking and downloaded the results for STARK and KeepTrack from the corresponding repos. Maybe I am doing something that you guys were not intending.
But I get 66.7 instead of 68.1 for SwinTrack-T (-1.4), 69.6 instead of 71.1 for SwinTrack-B (-1.5), and 70.2 instead of 71.7 (-1.5) for SwinTrack-B-384. It would be great if you could create a similar plot that compares your trackers with those two in the same success plot and share it, and let me know what you did differently.
Could you maybe also use pytracking or the pysot-toolkit to evaluate your tracker, just to make sure that there is no bug in your evaluation scripts? That would be awesome.

Thanks a lot and best,
Christoph

Error in evaluate

Hey, Mr. Lin, how are you doing? Nice work. When I evaluate your model on GOT-10k, I get this error on Windows. Have you met this error before? How did you solve it? Could you give me a hand? Thank you very much!

File "/home/cv/code/SwinTrack/datasets/SOT/seed/Impl/GOT10k.py" line 539,in _construct_GOT10k_non_public_data
assert bounding_box.ndim ==1 and bounding_box.shape[0]=4
AssertionError

issues about the Weak aug(Positional Augmentations) in swintrack

Hello, congratulations, nice work. I have some questions about the positional augmentations in the paper. What does the weak augmentation mean in SwinTrack? Does it mean you did not apply any pre-processing such as random scaling and random translation during search image generation in the training phase? If so, what did you do during the preprocessing stage?
Lastly, this work makes me feel a bit frustrated, because I have been trying this approach for about 5 months, but I could not make my trackers outperform yours, haha.

RuntimeError: CUDA out of memory

Can it run on a GPU with 4 GB of video memory?
Can I modify some parameters to make sure it does not run out of CUDA memory?
Is there a config file to modify?
For example, setting the epoch value from 32 to 4 so as to reduce video memory usage?
Thank you so much!
Looking forward to your reply!

Questions about LaSOT files

lasot
├── airplane
├── basketball
...
├── training_set.txt
└── testing_set.txt

Excuse me, but could you please explain how I can obtain the training_set.txt and testing_set.txt files?

'Sampling' should be 'sampling'?

In

if 'Sampling' in parameters:
    sampling_parameters = parameters['Sampling']
    if 'weight' in sampling_parameters:
        for index, dataset_parameters in enumerate(datasets_parameters):
            dataset_parameters['Sampling']['weight'] = float(sampling_parameters['weight'] * datasets_weight[index])

the key name is 'Sampling', but in

sampling:
    weight: 1

it's 'sampling'. Is this a typo?

Train and evaluate with GOT-10k dataset

I want to test this awesome work on GOT-10k, but I ran into a problem: I can't find train_got10k.yaml. Would you mind telling me where I should download this file, so that I can compare TransT and SwinTrack via the official implementation on the GOT-10k official website?

Cannot run!!!

python main.py SwinTrack Tiny --output_dir /home/lfm/swintrack/out --num_workers 4

Traceback (most recent call last):
File "main.py", line 6, in
main(os.path.dirname(file))
File "/home/lfm/swintrack/core/entry/main.py", line 104, in main
entry(args)
File "/home/lfm/swintrack/core/entry/entry.py", line 68, in entry
build_and_run(runtime_vars, config, wandb_instance)
File "/home/lfm/swintrack/core/entry/build_and_run.py", line 21, in build_and_run
run = build(runtime_vars, network_config, global_synchronized_rng, local_rng, wandb_instance)
File "/home/lfm/swintrack/core/run/builder.py", line 312, in build
model, pseudo_data_generator, running_tasks, global_event_dispatcher, default_logger, num_epochs = build_running_tasks(runtime_vars, config, global_rng, local_rng, wandb_instance)
File "/home/lfm/swintrack/core/run/builder.py", line 262, in build_running_tasks
_build_data_loaders(runtime_vars, branch_config, data_config, config, global_rng, local_rng, building_context)
File "/home/lfm/swintrack/core/run/builder.py", line 180, in _build_data_loaders
data_loader, host_data_pipelines = build_data_source( # host_data_pipelines = [ { type: instance } ]
File "/home/lfm/swintrack/data/tracking/builder.py", line 8, in build_data_source
data_pipelines = build_siamfc_data_source(data_config, runtime_vars, config, global_synchronized_rng, local_rng, event_register, context)
File "/home/lfm/swintrack/data/tracking/methods/SiamFC/builders/source.py", line 16, in build_siamfc_data_source
dataset, worker_init_fn = build_siamfc_dataset(sampling_config, dataset_config, data_processor,
File "/home/lfm/swintrack/data/tracking/methods/SiamFC/builders/components/dataset.py", line 16, in build_siamfc_dataset
datasets, dataset_parameters = build_datasets(dataset_config)
File "/home/lfm/swintrack/data/tracking/methods/_common/builders/build_datasets.py", line 23, in build_datasets
datasets, dataset_parameters = build_dataset_from_config_distributed_awareness(dataset_config, _customized_dataset_parameter_handler)
File "/home/lfm/swintrack/data/utils/dataset.py", line 8, in build_dataset_from_config_distributed_awareness
return build_datasets_from_config(config, user_defined_parameters_handler)
File "/home/lfm/swintrack/datasets/easy_builder/builder.py", line 148, in build_datasets_from_config
return build_datasets(config, unknown_parameter_handler)
File "/home/lfm/swintrack/datasets/easy_builder/builder.py", line 120, in build_datasets
seed = seed_class(root_path=path)
File "/home/lfm/swintrack/datasets/SOT/seed/LaSOT.py", line 9, in init
super(LaSOT_Seed, self).init('LaSOT', root_path, data_split, 2)
File "/home/lfm/swintrack/datasets/base/factory_seed.py", line 11, in init
assert root_path is not None and len(root_path) > 0
AssertionError

Questions about dataset indexes

Many thanks to the authors for their contributions. But I would like to complain that when I read the code, it often feels scattered and cumbersome. There are some very important questions. The code reads the index of the dataset from the datasets/cache file. The pre-prepared index does not seem to correspond to the size of the dataset. It appears to be filtered. This part of the operation seems opaque, so I wonder what's going on here?

How can I determine whether the model has converged to a final state?

Nice work! By the way, how can I determine whether my network has converged to a good state or not? By checking the loss printout? Could you provide your original training log? Can I remove the validation option code during training? Please give me a hand if you have time, thank you very much.

How to use this code offline, without wandb

Hello,
First, thank you for your great work. I'm studying object tracking with your SwinTrack, but because of my situation I cannot use it online. How can I train your project offline? (Do I just add --wandb_offline to python main.py SwinTrack Tiny?)
Thank you.

issue about link of dataset metadata cache

The Google Drive link for the metadata cache isn't accessible from China. Could you please provide a Baidu cloud link? I sincerely apologize for any inconvenience this causes you.

Hello! About the tracking speed evaluation

Hello! During testing, I put timestamps before and after tracking a single frame using the time module, but the measured speed is not as fast as described in the paper. Isn't the tracking time evaluated this way?
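For reference, a minimal sketch of per-frame timing (with a hypothetical tracker.track call; the key point is that CUDA kernels run asynchronously, so torch.cuda.synchronize() is needed before reading the clock, otherwise the measured time can differ greatly from the real inference time):

    import time
    import torch

    def measure_fps(tracker, frames):
        # Warm-up so one-time CUDA initialization is not counted.
        tracker.track(frames[0])          # hypothetical per-frame tracking call
        torch.cuda.synchronize()

        start = time.perf_counter()
        for frame in frames[1:]:
            tracker.track(frame)
        torch.cuda.synchronize()          # wait for all queued GPU work to finish
        elapsed = time.perf_counter() - start
        return (len(frames) - 1) / elapsed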

train with 2 gpu

Hello, I would like to know how to set things up when training on a single machine with 2 GPUs.

The speed of the tiny model is only 35 FPS?

Hello, congratulations, nice work. I have some questions:

I used the tiny model to test an RGB tracking dataset, and the final output shows only 35.04 FPS in performance.csv.
In the paper, the tiny model reaches about 100 FPS.
Why?

GPU: NVIDIA TITAN RTX (24GB)

relative position index problem

Hello, very nice work on visual tracking.
I have a question about the relative position index. Suppose z_shape = (1,1) and x_shape = (2,2); the function 'generate_2d_concatenated_self_attention_relative_positional_encoding_index(z_shape, x_shape)' in https://github.com/LitingLin/SwinTrack/blob/main/models/methods/SwinTrack/positional_encoding/untied/relative.py generates the relative index below:
tensor([[ 7, 8, 5, 2, 0],
[ 9, 10, 6, 3, 1],
[11, 12, 10, 4, 3],
[14, 15, 13, 10, 6],
[16, 17, 15, 12, 10]])
It is weird that the '2' at row=0, col=4 is different from the '1' at row=1, col=5, while the '3' at row=1, col=4 is equal to the '3' at row=2, col=5.
Another question: why does the decoder compute attention between concat(z, x) and x, rather than between z and x?
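For reference, a minimal snippet to reproduce the index table above (assuming the import path follows the file location linked above):

    from models.methods.SwinTrack.positional_encoding.untied.relative import \
        generate_2d_concatenated_self_attention_relative_positional_encoding_index

    # 1x1 template (z) and 2x2 search region (x), as in the example above
    index = generate_2d_concatenated_self_attention_relative_positional_encoding_index((1, 1), (2, 2))
    print(index)  # 5x5 tensor of relative position indices over the concatenated (z, x) tokens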

Repo License

Hello,

Your work looks impressive. I wonder what license you would use for this work, something like STARK?

I found this problem

When I run ~/SwinTrack$ python main.py SwinTrack Tiny --output_dir /home/zhangruilin/SwinTrack/output --mixin_config got10k.yaml

it returns:
Traceback (most recent call last):
  File "main.py", line 6, in <module>
    main(os.path.dirname(__file__))
  File "/home/zhangruilin/SwinTrack/core/entry/main.py", line 110, in main
    entry(args)
  File "/home/zhangruilin/SwinTrack/core/entry/entry.py", line 73, in entry
    build_and_run(runtime_vars, config, wandb_instance)
  File "/home/zhangruilin/SwinTrack/core/entry/build_and_run.py", line 21, in build_and_run
    run = build(runtime_vars, network_config, global_synchronized_rng, local_rng, wandb_instance)
  File "/home/zhangruilin/SwinTrack/core/run/builder.py", line 312, in build
    model, pseudo_data_generator, running_tasks, global_event_dispatcher, default_logger, num_epochs = build_running_tasks(runtime_vars, config, global_rng, local_rng, wandb_instance)
  File "/home/zhangruilin/SwinTrack/core/run/builder.py", line 262, in build_running_tasks
    _build_data_loaders(runtime_vars, branch_config, data_config, config, global_rng, local_rng, building_context)
  File "/home/zhangruilin/SwinTrack/core/run/builder.py", line 182, in _build_data_loaders
    data_context.event_register, data_context.context, is_training)
  File "/home/zhangruilin/SwinTrack/data/tracking/builder.py", line 8, in build_data_source
    data_pipelines = build_siamfc_data_source(data_config, runtime_vars, config, global_synchronized_rng, local_rng, event_register, context)
  File "/home/zhangruilin/SwinTrack/data/tracking/methods/SiamFC/builders/source.py", line 18, in build_siamfc_data_source
    runtime_vars.rank)
  File "/home/zhangruilin/SwinTrack/data/tracking/methods/SiamFC/builders/components/dataset.py", line 16, in build_siamfc_dataset
    datasets, dataset_parameters = build_datasets(dataset_config)
  File "/home/zhangruilin/SwinTrack/data/tracking/methods/_common/builders/build_datasets.py", line 23, in build_datasets
    datasets, dataset_parameters = build_dataset_from_config_distributed_awareness(dataset_config, _customized_dataset_parameter_handler)
  File "/home/zhangruilin/SwinTrack/data/utils/dataset.py", line 8, in build_dataset_from_config_distributed_awareness
    return build_datasets_from_config(config, user_defined_parameters_handler)
  File "/home/zhangruilin/SwinTrack/datasets/easy_builder/builder.py", line 148, in build_datasets_from_config
    return build_datasets(config, unknown_parameter_handler)
  File "/home/zhangruilin/SwinTrack/datasets/easy_builder/builder.py", line 120, in build_datasets
    seed = seed_class(root_path=path)
  File "/home/zhangruilin/SwinTrack/datasets/SOT/seed/LaSOT.py", line 9, in __init__
    super(LaSOT_Seed, self).__init__('LaSOT', root_path, data_split, 2)
  File "/home/zhangruilin/SwinTrack/datasets/base/factory_seed.py", line 11, in __init__
    assert root_path is not None and len(root_path) > 0
AssertionError
I am glad to receive any advice.

Running

Do you need an Internet connection to run?

run error

Have you encountered this problem? How can I solve it? (screenshot attached)

Training error with multiple nodes using LaSOT

Training errors with multiple nodes using LaSOT:

[E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1800265 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down

visualize the tracking result

How can I visualize the tracking results on test datasets such as LaSOT_TEST, TrackingNet, and so on? Is there any interface offered, or where should I change the original source code? Thanks in advance!
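No interface is confirmed here, but as a hypothetical starting point one can overlay the predicted boxes on the frames with OpenCV, assuming the tracker output is a text file with one 'x,y,w,h' box per line and the sequence frames are stored as images:

    import os
    import cv2

    def visualize_sequence(frames_dir, result_file, out_dir):
        # Draw one predicted box (x, y, w, h per line) onto each frame and save the result.
        os.makedirs(out_dir, exist_ok=True)
        frame_names = sorted(os.listdir(frames_dir))
        with open(result_file) as f:
            boxes = [list(map(float, line.replace('\t', ',').split(','))) for line in f if line.strip()]
        for frame_name, (x, y, w, h) in zip(frame_names, boxes):
            image = cv2.imread(os.path.join(frames_dir, frame_name))
            cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
            cv2.imwrite(os.path.join(out_dir, frame_name), image)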

Reproduce the pre-trained model.

Is it correct that in order to reproduce the pre-trained model you have to train the model on these 4 datasets?

We train SwinTrack using training splits of LaSOT [15], TrackingNet [33], GOT-10k [20] (1,000 videos
are removed for fair comparisons with other trackers [22,44]) and COCO 2017

KeyError: 'metrics'

When I run main.py to test GOT-10k (test) only, I get KeyError: 'metrics'. What should I do? The full log is as follows:

ssh://[email protected]:22/home/rzh/anaconda3/bin/python3 -u /home/rzh/win-pycharm/SwinTrack/main.py SwinTrack Tiny --mixin_config got10k.yaml
Not using distributed mode
Traceback (most recent call last):
  File "/home/rzh/win-pycharm/SwinTrack/main.py", line 6, in <module>
    main(os.path.dirname(__file__))
  File "/home/rzh/win-pycharm/SwinTrack/core/entry/main.py", line 104, in main
    entry(args)
  File "/home/rzh/win-pycharm/SwinTrack/core/entry/entry.py", line 25, in entry
    load_static_mixin_config_and_apply_rules(runtime_vars, config)
  File "/home/rzh/win-pycharm/SwinTrack/core/entry/mixin_utils.py", line 87, in load_static_mixin_config_and_apply_rules
    apply_mixin_rules(mixin_config['mixin'], config, None)
  File "/home/rzh/win-pycharm/SwinTrack/core/entry/mixin_utils.py", line 64, in apply_mixin_rules
    _apply_mixin_rule(fixed_modification_rule, config, None)
  File "/home/rzh/win-pycharm/SwinTrack/core/entry/mixin_utils.py", line 24, in _apply_mixin_rule
    mod_config(config, query_path, value)
  File "/home/rzh/win-pycharm/SwinTrack/core/entry/dict_flatten_accessor.py", line 24, in mod_config
    config = config[sub_path] if not sub_path.isdigit() else config[int(sub_path)]
KeyError: 'metrics'

Training takes too long

Hello, first of all thank you for your work! My question is: when I try to fine-tune the provided pre-trained model on GOT-10k on a single A100, one epoch takes 10 hours, whereas my friend needs less than 1 hour per epoch on a single 3090. We both loaded the pre-trained weights, for both SwinTrack and the Swin Transformer. What causes this, and how can I adjust things to speed up training?

How does this code part work? _check_if_data_split_in_range_and_append_to_list

In this code part, what does the 'check if data split in range' part of _check_if_data_split_in_range_and_append_to_list mean?

def _check_if_data_split_in_range_and_append_to_list(data_split):
    if seed.data_split & data_split:
        new_seed = copy.copy(seed)
        new_seed.data_split = data_split
        expanded_seeds.append(new_seed)

_check_if_data_split_in_range_and_append_to_list(DataSplit.Training)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Validation)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Testing)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Challenge)

Another question: from my understanding, I think it can be implemented in a simplified way:

def _check_if_data_split_in_range_and_append_to_list(data_split):
    if seed.data_split == data_split:
        expanded_seeds.append(seed)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Training)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Validation)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Testing)
_check_if_data_split_in_range_and_append_to_list(DataSplit.Challenge)
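For what it's worth, a minimal sketch (using a hypothetical enum.Flag stand-in for DataSplit, which appears to match how it is used above) of why the & test in the original code is not equivalent to the == comparison in the simplified version: a seed's data_split can be a combination of splits, and & checks containment while == only matches the exact combination.

    from enum import Flag, auto

    class DataSplit(Flag):                      # hypothetical stand-in for the repo's DataSplit
        Training = auto()
        Validation = auto()
        Testing = auto()
        Challenge = auto()

    seed_split = DataSplit.Training | DataSplit.Validation   # a seed covering two splits

    print(bool(seed_split & DataSplit.Training))   # True: Training is contained in the combination
    print(seed_split == DataSplit.Training)        # False: only an exact match compares equal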
