MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning

PyTorch implementation of MixMAE (CVPR 2023)

This repo is the official implementation of the paper MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers.

@article{MixMAE,
  author  = {Jihao Liu and Xin Huang and Jinliang Zheng and Yu Liu and Hongsheng Li},
  journal = {arXiv:2205.13137},
  title   = {MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers},
  year    = {2022},
}

Available pretrained models

Model               Params (M)  FLOPs (G)  Pretrain Epochs  Top-1 Acc. (%)  Pretrain ckpt  Finetune ckpt
Swin-B/W14          88          16.3       600              85.1            base_600ep     base_600ep_ft
Swin-B/W16-384x384  89.6        52.6       600              86.3            base_600ep     base_600ep_ft_384x384
Swin-L/W14          197         35.9       600              85.9            large_600ep    large_600ep_ft
Swin-L/W16-384x384  199         112        600              86.9            large_600ep    large_600ep_ft_384x384
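
The checkpoints are downloadable via the links in the table above. Below is a minimal sketch (not from this repo) of inspecting a downloaded checkpoint with PyTorch; the filename and the top-level key layout (e.g. a 'model' entry wrapping the state dict) are assumptions.

import torch

# Hypothetical filename; use the path of the checkpoint you downloaded.
ckpt = torch.load("base_600ep.pth", map_location="cpu")
# Some released checkpoints nest the weights under a 'model' key; fall back to the dict itself.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"{len(state_dict)} entries in checkpoint")
for name, value in list(state_dict.items())[:5]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)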

Training and evaluation

We use Slurm for multi-node distributed pretraining and finetuning.
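
The exp/*/pretrain.sh and finetune.sh scripts handle the Slurm launch and are not reproduced here. As a generic orientation only, the sketch below shows one common way a PyTorch job derives its distributed configuration from the Slurm environment variables SLURM_PROCID, SLURM_NTASKS, and SLURM_LOCALID; the repo's actual scripts may wire this up differently.

import os

import torch
import torch.distributed as dist

def init_distributed_from_slurm(master_addr: str, master_port: int = 29500):
    """Generic Slurm-to-torch.distributed wiring; not the repo's actual code."""
    rank = int(os.environ["SLURM_PROCID"])         # global rank of this task
    world_size = int(os.environ["SLURM_NTASKS"])   # total number of tasks across nodes
    local_rank = int(os.environ["SLURM_LOCALID"])  # GPU index on the local node

    os.environ.setdefault("MASTER_ADDR", master_addr)
    os.environ.setdefault("MASTER_PORT", str(master_port))

    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    return rank, world_size, local_rank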

Pretrain

sh exp/base_600ep/pretrain.sh partition 16 /path/to/imagenet
  • Trains with 16 GPUs on the given Slurm partition.
  • Batch size is 128 * 16 = 2048.
  • The default setting pretrains for 600 epochs with a mask ratio of 0.5 (see the conceptual sketch below).
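
To build intuition for the 0.5 mask ratio: in MixMAE, the masked positions of one image are filled with the visible tokens of a second image, so the encoder sees a single mixed token sequence rather than a shortened one. The sketch below is a conceptual illustration of that mixing step; the function, tensor names, and shapes are assumptions, not the repo's implementation.

import torch

def mix_tokens(tokens_a: torch.Tensor, tokens_b: torch.Tensor, mask_ratio: float = 0.5):
    """tokens_*: (batch, num_tokens, dim) patch embeddings of two images (illustrative)."""
    b, n, _ = tokens_a.shape
    num_masked = int(n * mask_ratio)

    # Pick a random set of `num_masked` positions per sample; these count as "masked" for image A.
    ids = torch.rand(b, n, device=tokens_a.device).argsort(dim=1)
    mask_a = torch.zeros(b, n, dtype=torch.bool, device=tokens_a.device)
    mask_a.scatter_(1, ids[:, :num_masked], True)

    # Mixed input: image B's tokens where A is masked, A's tokens elsewhere.
    mixed = torch.where(mask_a.unsqueeze(-1), tokens_b, tokens_a)
    return mixed, mask_a  # the mask records which positions came from which image

In the paper, both original images are then reconstructed from this mixed sequence, which is where the efficiency gain over dropping masked tokens comes from.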

Finetune

sh exp/base_600ep/finetune.sh partition 8 /path/to/imagenet
  • Trains with 8 GPUs on the given Slurm partition.
  • Batch size is 128 * 8 = 1024.
  • The default setting finetunes for 100 epochs.