DropMAE

🌟 The code for our CVPR 2023 paper 'DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks'. [Link]

If you find our work useful in your research, please consider citing:

@inproceedings{dropmae2023,
  title={DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks},
  author={Qiangqiang Wu and Tianyu Yang and Ziquan Liu and Baoyuan Wu and Ying Shan and Antoni B. Chan},
  booktitle={CVPR},
  year={2023}
}

Overall Architecture

Frame Reconstruction Results.

  • DropMAE leverages more temporal cues for reconstruction.

Catalog

  • Pre-training Code
  • Pre-trained Models
  • Fine-tuning Code for VOT
  • Fine-tuned Models for VOT
  • Fine-tuning Code for VOS
  • Fine-tuned Models for VOS

Environment setup

  • This repo is built on the MAE repo, and installation follows the instructions there. You can also check our requirements file.

Dataset Download

  • For dropmae pre-training, we mainly use the Kinetics datasets, which can be downloaded from this Link. We use the raw training videos (*.mp4) for training. Detailed download instructions can also be found here.
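Before launching pre-training, it can help to sanity-check the video folder passed as --data_path. A minimal sketch, assuming the data loader simply consumes raw *.mp4 files under that directory (any layout beyond that is hypothetical):

```python
# Sketch: verify the Kinetics training folder before launching pre-training.
# Assumes the loader reads raw *.mp4 files under --data_path (an assumption;
# check the dataset code in main_pretrain_kinetics.py for the exact layout).
from pathlib import Path

def list_training_videos(data_path):
    """Recursively collect raw .mp4 training videos under data_path."""
    return sorted(Path(data_path).rglob("*.mp4"))

# Example:
# videos = list_training_videos("/data/k400/train")
# print(f"found {len(videos)} training videos")
```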

DropMAE pre-training

To pre-train ViT-Base (the default configuration) with multi-node distributed training, run the following on 8 nodes with 8 GPUs each:

python -m torch.distributed.launch --nproc_per_node=8 --nnodes=8 \
    --node_rank=$INDEX --master_addr=$CHIEF_IP --master_port=1234 \
    main_pretrain_kinetics.py \
    --batch_size 64 \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --mask_ratio 0.75 \
    --epochs 400 \
    --warmup_epochs 40 \
    --blr 1.5e-4 \
    --weight_decay 0.05 \
    --P 0.1 \
    --frame_gap 50 \
    --data_path $data_path_to_k400_training_videos \
    --output_dir $output_dir \
    --log_dir $log_dir
  • Here the effective batch size is 64 (batch_size per GPU) × 8 (nodes) × 8 (GPUs per node) = 4096. If memory or the number of GPUs is limited, use --accum_iter to maintain the effective batch size, as in MAE.
  • P is the spatial-attention dropout ratio for DropMAE.
  • data_path indicates the Kinetics (e.g., K400 and K700) training video folder path.
  • We use exactly the same hyper-parameters and configs (initialization, augmentation, etc.) as MAE in our implementation.
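The gradient-accumulation arithmetic from the bullets above can be sketched as a small helper. This is illustrative, not part of the repo; the function name is hypothetical, and it simply inverts the effective-batch-size formula (per-GPU batch × #GPUs × accum_iter):

```python
# Sketch: pick --accum_iter so the effective batch size matches the paper's
# 4096 (= 64 per GPU x 8 nodes x 8 GPUs) when fewer GPUs are available.
# The helper name is illustrative; the formula mirrors the MAE recipe.

def accum_iter_for(target_eff_bs, batch_size_per_gpu, num_gpus):
    """Return the --accum_iter needed to reach target_eff_bs."""
    per_step = batch_size_per_gpu * num_gpus
    if target_eff_bs % per_step != 0:
        raise ValueError("target effective batch size must be divisible "
                         "by batch_size_per_gpu * num_gpus")
    return target_eff_bs // per_step

# e.g. a single 8-GPU node with --batch_size 64 needs --accum_iter 8:
# accum_iter_for(4096, 64, 8) -> 8
```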

Training logs

The pre-training logs of K400-1600E and K700-800E are provided.

Pre-trained Models

  • We also provide pre-trained models (ViT-Base) on the K400 and K700 datasets.
  • Conveniently, you can use our pre-trained models as initialization weights for your own tracking model to improve downstream performance.
                          K400-1600E    K700-800E
  pre-trained checkpoint  download      download
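When using a checkpoint only as backbone initialization, a common MAE-style step is to drop the decoder weights and mask token before loading into an encoder-only model. A minimal sketch, assuming the checkpoints follow MAE's key naming (prefixes below are an assumption; inspect the checkpoint keys to confirm):

```python
# Sketch: keep only encoder weights from an MAE-style state dict before
# loading it into a tracker's ViT backbone. The "decoder"/"mask_token"
# key prefixes follow MAE's naming convention and are an assumption here.

def strip_decoder_keys(state_dict):
    """Drop decoder weights and the mask token, keeping encoder weights."""
    drop_prefixes = ("decoder", "mask_token")
    return {k: v for k, v in state_dict.items()
            if not k.startswith(drop_prefixes)}

# Usage (hypothetical paths/names; MAE saves weights under the "model" key):
# ckpt = torch.load("dropmae_pretrain_k700.pth", map_location="cpu")
# backbone.load_state_dict(strip_decoder_keys(ckpt["model"]), strict=False)
```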

Fine-tuning on VOT

  • OSTrack with our DropMAE pre-trained models achieves state-of-the-art performance on popular tracking benchmarks.
  Tracker                  GOT-10k (AO)  LaSOT (AUC)  LaSOT_ext (AUC)  TrackingNet (AUC)  TNL2K (AUC)
  DropTrack-K700-ViTBase   75.9          71.8         52.7             84.1               56.9
  • The detailed fine-tuning code and models can be found in our DropTrack repository.

Fine-tuning on VOS

  • The detailed VOS fine-tuning can be found in our DropSeg repository.
