
LoMaR (Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction)

This is a PyTorch/GPU implementation of the paper Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction:

  • This repo is a modification of MAE; installation and data preparation follow that repo.

  • This repo is based on timm==0.3.2, which needs a small fix to work with PyTorch 1.8.1+ (see the patch sketch after this list).

  • The relative position encoding follows iRPE. To build the iRPE CUDA extension:

cd rpe_ops/
python setup.py install --user
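
The timm fix mentioned above is the one commonly applied for MAE-derived code: timm 0.3.2 imports container_abcs from torch._six, which newer PyTorch versions removed. A minimal patch sketch for timm/models/layers/helpers.py (assuming the stock 0.3.2 file):

# timm/models/layers/helpers.py -- patched import for PyTorch 1.8.1+.
# timm 0.3.2 does `from torch._six import container_abcs`, which fails on
# newer PyTorch; fall back to collections.abc there instead.
import torch

TORCH_MAJOR = int(torch.__version__.split('.')[0])
TORCH_MINOR = int(torch.__version__.split('.')[1])
if TORCH_MAJOR == 1 and TORCH_MINOR < 8:
    from torch._six import container_abcs
else:
    import collections.abc as container_abcs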

Main Results on ImageNet-1K

Backbone   Method  Pretrain Epochs  Pretrained Weights  Pretrain Logs  Finetune Logs
ViT-B/16   LoMaR   1600             download            download       download

Pre-training

Pretrain the model:

python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 \
--master_addr=127.0.0.1 --master_port=29517 main_pretrain_lomar.py \
    --batch_size 256 \
    --accum_iter 4 \
    --output_dir ${LOG_DIR} \
    --log_dir ${LOG_DIR} \
    --model mae_vit_base_patch16 \
    --norm_pix_loss \
    --distributed \
    --epochs 400 \
    --warmup_epochs 20 \
    --blr 1.5e-4 --weight_decay 0.05 \
    --window_size 7 \
    --num_window 4 \
    --mask_ratio 0.8 \
    --data_path ${IMAGENET_DIR}
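
Following the MAE recipe, --blr is a base learning rate that gets scaled by the effective batch size (batch_size × accum_iter × num_gpus). A quick sanity check of the numbers above, assuming LoMaR inherits MAE's lr = blr × eff_batch_size / 256 scaling rule:

# Effective batch size and actual lr under MAE's linear scaling rule
# (assumption: LoMaR keeps this rule from the MAE codebase).
batch_size, accum_iter, num_gpus = 256, 4, 4
blr = 1.5e-4
eff_batch_size = batch_size * accum_iter * num_gpus  # 4096
lr = blr * eff_batch_size / 256                      # 2.4e-3
print(eff_batch_size, lr)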

Fine-tuning

Finetune the model:

python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 \
--master_addr=127.0.0.1 --master_port=29510 main_finetune_lomar.py \
    --batch_size 256 \
    --accum_iter 1 \
    --model vit_base_patch16 \
    --finetune ${PRETRAIN_CHKPT} \
    --epochs 100 \
    --log_dir ${LOG_DIR} \
    --blr 5e-4 --layer_decay 0.65 \
    --weight_decay 0.05 --drop_path 0.1 --reprob 0.25 --mixup 0.8 --cutmix 1.0 \
    --dist_eval --data_path ${IMAGENET_DIR}
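
Because the scripts derive from MAE, evaluating a fine-tuned checkpoint should follow MAE's interface; a hedged example, where ${FINETUNE_CHKPT} is a placeholder for your checkpoint and the --eval/--resume flags are assumed to be inherited from MAE unchanged:

python main_finetune_lomar.py --eval \
    --model vit_base_patch16 \
    --resume ${FINETUNE_CHKPT} \
    --batch_size 128 \
    --data_path ${IMAGENET_DIR}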

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.

Citation

@article{chen2022efficient,
  title={Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction},
  author={Chen, Jun and Hu, Ming and Li, Boyang and Elhoseiny, Mohamed},
  journal={arXiv preprint arXiv:2206.00790},
  year={2022}
}


Issues

Warm-up epochs in 400-epoch pretraining

Hi Jun Chen,
Thanks for your great work. The README shows the 400-epoch pretraining command above with --warmup_epochs 20, but in your supplementary material you mention 10 warm-up epochs for 400-epoch pretraining. What is the exact setting to reproduce the results in Table 1? Does the number of warm-up epochs severely affect the final performance?
Many thanks for your time.

Self-supervised pre-training: how to load the dataset?

Hi!
I want to use your project for self-supervised pre-training, but when loading the data it reports that the specified path cannot be found. My data is unlabeled images, all in one folder, with no train/test split. Do I need to split the data for loading? And if I do, wouldn't that require label data?
Hope to hear from you!
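
For context: MAE-style pretraining scripts load images with torchvision's ImageFolder, which expects a {data_path}/train/<class>/<image> layout on disk. The reconstruction loss never uses the labels, so for unlabeled data a single dummy class folder is enough. A minimal sketch of that assumption (the transform is illustrative, not LoMaR's exact augmentation):

# Expected layout: ${IMAGENET_DIR}/train/dummy_class/xxx.jpg
# ImageFolder needs at least one subfolder per "class"; the labels it
# produces are simply ignored by the reconstruction objective.
import os
import torchvision.datasets as datasets
import torchvision.transforms as transforms

data_path = os.environ.get('IMAGENET_DIR', './data')
transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.ToTensor(),
])
dataset_train = datasets.ImageFolder(os.path.join(data_path, 'train'), transform=transform)
print(len(dataset_train))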

nan loss

Hi Jun Chen,
Thanks for your great work. When I tried the local reconstruction training, I found that a NaN loss easily occurs during training. Any suggestions for this? Thanks.
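
Not an official fix, but two common mitigations in MAE-style training are gradient clipping and aborting on a non-finite loss (MAE's training engine already exits when the loss stops being finite). A generic sketch, not LoMaR-specific:

import math
import sys
import torch

def guarded_step(loss, model, optimizer, max_norm=1.0):
    # Abort early instead of letting NaNs propagate into the weights.
    loss_value = loss.item()
    if not math.isfinite(loss_value):
        print(f"Loss is {loss_value}, stopping training")
        sys.exit(1)
    loss.backward()
    # Clipping keeps a single bad batch from destabilizing training.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
    optimizer.zero_grad()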
