
Bag of Tricks for Adversarial Training (ICLR 2021)

Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, and Jun Zhu.

Empirical tricks for training state-of-the-art robust models on CIFAR-10, and a playground for fine-tuning the basic adversarial training settings.

Environment settings and libraries we used in our experiments

This project is tested under the following environment settings:

  • OS: Ubuntu 18.04.4
  • GPU: GeForce RTX 2080 Ti or Tesla P100
  • CUDA: 10.1, cuDNN: v7.6
  • Python: 3.6
  • PyTorch: >= 1.4.0
  • Torchvision: >= 0.4.0

Acknowledgement

The code is modified from that of Rice et al. 2020, and the model architectures are taken from pytorch-cifar.

Threat Model

We consider the most widely studied setting:

  • L-inf norm constraint with maximal epsilon 8/255 on CIFAR-10;
  • no access to additional data, whether labeled or unlabeled;
  • the PGD-AT framework of Madry et al. 2018.

(Implementations under the TRADES framework can be found here.)
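
To make the threat model concrete, below is a minimal sketch of the L-inf PGD inner maximization of Madry et al. 2018 under eps=8/255; the function name and signature are illustrative, not the repository's API:

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, iters=10):
    # L-inf PGD with a random start (Madry et al. 2018).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back to the eps-ball and image range.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = (torch.clamp(x + delta, 0, 1) - x).detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()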

Trick Candidates

Importance ratings: Critical; Useful; Insignificant

  • Early stopping w.r.t. training epochs (Critical). Early stopping w.r.t. training epochs was first introduced in the code of TRADES and later thoroughly studied by Rice et al., 2020. Due to its effectiveness, we treat this trick as a default choice.

  • Early stopping w.r.t. attack intensity (Useful). Early stopping w.r.t. attack iterations was studied by Wang et al. 2019 and Zhang et al. 2020. Here we adopt the strategy of the latter, whose authors show that this trick can promote clean accuracy. The flag --earlystopPGD indicates whether to apply this trick, while --earlystopPGDepoch1 and --earlystopPGDepoch2 indicate the epochs at which the tolerance t is increased by one, as detailed in Zhang et al. 2020; a sketch appears after this list. (Note that early stopping of attack intensity may degrade worst-case robustness under strong attacks.)

  • Warmup w.r.t. learning rate (Insignificant). Warmup w.r.t. learning rate was found useful for FastAT, while Rice et al., 2020 found that a piecewise decay schedule is more compatible with early stopping w.r.t. training epochs. The flag --warmup_lr indicates whether to apply this trick, while --warmup_lr_epoch indicates the epoch at which the gradual increase of the learning rate ends (see the warmup sketch after this list).

  • Warmup w.r.t. epsilon (Insignificant). Qin et al. 2019 use warmup w.r.t. epsilon in their implementation, where epsilon gradually increases from 0 to 8/255 over the first 15 epochs. Similarly, the flag --warmup_eps indicates whether to apply this trick, while --warmup_eps_epoch indicates the epoch at which the gradual increase of epsilon ends.

  • Batch size (Insignificant). The typical batch size for CIFAR-10 in the adversarial setting is 128. Meanwhile, Xie et al. 2019 apply a large batch size of 4096 for adversarial training on ImageNet, with the model distributed across 128 GPUs, and obtain quite robust performance. The relevant flag is --batch-size. Following Goyal et al. 2017, we take bs=128 and lr=0.1 as the basis and scale the learning rate linearly for larger batch sizes, e.g., lr=0.2 for bs=256 (illustrated in the warmup sketch after this list).

  • Label smoothing (Useful). Label smoothing is advocated by Shafahi et al. 2019 to mimic the adversarial training procedure. The flag --labelsmooth indicates whether to apply this trick, while --labelsmoothvalue indicates the degree of smoothing applied to the label vectors; --labelsmoothvalue=0 means no label smoothing (a sketch of the smoothed loss appears after this list). (Note that only moderate label smoothing (~0.2) is helpful, while excessive label smoothing (>0.3) can be harmful, as observed in Jiang et al. 2020.)

  • Optimizer (Insignificant). Most AT methods apply SGD with momentum as the optimizer. In other cases, Carmon et al. 2019 apply SGD with Nesterov momentum, and Rice et al., 2020 apply Adam with a cyclic learning rate schedule. The relevant flag is --optimizer, which supports the common optimizers implemented in the official PyTorch API as well as the recently proposed gradient centralization trick of Yong et al. 2020.

  • Weight decay (Critical). The weight decay values used in previous AT methods mainly fall into 1e-4 (e.g., Wang et al. 2019), 2e-4 (e.g., Madry et al. 2018), and 5e-4 (e.g., Rice et al., 2020). We find that even slightly different values of weight decay can largely affect the robustness of adversarially trained models.

  • Activation function (Useful). As shown in Xie et al., 2020a, smooth alternatives to ReLU, including Softplus and GELU, can promote the performance of adversarial training. The relevant flags are --activation to choose the activation and --softplus_beta to set the beta for Softplus. Other hyperparameters use the defaults in the code.

  • BN mode (Useful). TRADES applies the eval mode of BN when crafting adversarial examples during training, while the PGD-AT implementations of Madry et al. 2018 and Rice et al., 2020 use the train mode of BN to craft training adversarial examples. As indicated by Xie et al., 2020b, properly dealing with BN layers is critical for obtaining a well-performing adversarially trained model, since the train mode of BN during the multi-step PGD process may blur the input distribution statistics (a sketch appears after this list).
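
For the attack-intensity early stop, here is a minimal sketch in the spirit of Zhang et al. 2020; the function name, per-example bookkeeping, and tolerance handling are illustrative assumptions, not the repository's exact --earlystopPGD implementation:

import torch
import torch.nn.functional as F

def pgd_early_stop(model, x, y, eps=8/255, alpha=2/255, iters=10, tol=1):
    # L-inf PGD that stops perturbing each example a few steps (the
    # "tolerance") after it is first misclassified.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    budget = torch.full((x.size(0),), tol, device=x.device)
    for _ in range(iters):
        logits = model(torch.clamp(x + delta, 0, 1))
        budget = torch.where(logits.argmax(1) != y, budget - 1, budget)
        active = budget >= 0                 # examples still being attacked
        if not active.any():
            break
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, delta)[0]
        step = alpha * grad.sign() * active.float().view(-1, 1, 1, 1)
        delta = (delta + step).clamp(-eps, eps)
        delta = (torch.clamp(x + delta, 0, 1) - x).detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()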
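
The two warmup tricks and the batch-size scaling rule reduce to simple schedules; a hedged sketch with illustrative names:

def warmup_value(epoch, end_value, warmup_epochs):
    # Linearly ramp a quantity (learning rate or epsilon) up to end_value
    # over the first warmup_epochs epochs; constant afterwards.
    if epoch < warmup_epochs:
        return end_value * (epoch + 1) / warmup_epochs
    return end_value

# Linear scaling rule of Goyal et al. 2017, with bs=128 / lr=0.1 as the basis:
batch_size = 256
base_lr = 0.1 * batch_size / 128      # lr=0.2 for bs=256

lr  = warmup_value(epoch=3, end_value=base_lr, warmup_epochs=10)   # warmup lr
eps = warmup_value(epoch=3, end_value=8 / 255, warmup_epochs=15)   # warmup eps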
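
Label smoothing itself is a small change to the targets; a self-contained sketch of the smoothed loss (the helper name is assumed, not the repository's --labelsmooth internals):

import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, y, smooth=0.2, num_classes=10):
    # Cross-entropy against smoothed one-hot targets: 1 - smooth on the true
    # class, smooth / (num_classes - 1) on the rest; smooth=0 recovers the
    # standard loss. Moderate values around 0.2 are the helpful regime.
    with torch.no_grad():
        target = torch.full_like(logits, smooth / (num_classes - 1))
        target.scatter_(1, y.unsqueeze(1), 1.0 - smooth)
    return torch.sum(-target * F.log_softmax(logits, dim=1), dim=1).mean()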
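
And for the BN mode trick, a sketch of one PGD-AT step that crafts examples under eval-mode BN, conceptually what --BNeval toggles (the repository internals may differ):

import torch.nn.functional as F

def pgd_at_step(model, x, y, optimizer, attack=pgd_linf):
    # One PGD-AT step with eval-mode BN while crafting the attack.
    model.eval()                   # freeze BN statistics during the attack
    x_adv = attack(model, x, y)    # e.g., the pgd_linf sketch above
    model.train()                  # train-mode BN for the parameter update
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()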

Baseline setting (on CIFAR-10)

  • Architecture: WideResNet-34-10
  • Optimizer: Momentum SGD with default hyperparameters
  • Total epoch: 110
  • Batch size: 128
  • Weight decay: 5e-4
  • Learning rate: lr=0.1; decay to lr=0.01 at epoch 100; decay to lr=0.001 at epoch 105
  • BN mode: eval

Running command for training:

python train_cifar.py --model WideResNet --attack pgd \
                      --lr-schedule piecewise --norm l_inf --epsilon 8 \
                      --epochs 110 --attack-iters 10 --pgd-alpha 2 \
                      --fname auto \
                      --optimizer 'momentum' \
                      --weight_decay 5e-4 \
                      --batch-size 128 \
                      --BNeval

Empirical Evaluations

The evaluation results on the baselines are quoted from AutoAttack (evaluation code).

Note that OURS (TRADES) below only changes the weight decay value from 2e-4 (used in the original TRADES) to 5e-4 and trains for 110 epochs (the learning rate decays at epochs 100 and 105). To run the evaluation script eval_cifar.py, the command is:

python eval_cifar.py --out-dir 'path_to_the_model' --ATmethods 'TRADES'

Here --ATmethods refers to the AT framework (e.g., PGDAT or TRADES).

CIFAR-10 (eps = 8/255)

Paper                        Architecture   Clean (%)   AA (%)
OURS (TRADES) [Checkpoint]   WRN-34-20      86.43       54.39
OURS (TRADES) [Checkpoint]   WRN-34-10      85.48       53.80
(Pang et al., 2020)          WRN-34-20      85.14       53.74
(Zhang et al., 2020)         WRN-34-10      84.52       53.51
(Rice et al., 2020)          WRN-34-20      85.34       53.35

CIFAR-10 (eps = 0.031)

Paper                        Architecture   Clean (%)   AA (%)
OURS (TRADES) [Checkpoint]   WRN-34-10      85.34       54.64
(Huang et al., 2020)         WRN-34-10      83.48       53.34
(Zhang et al., 2019)         WRN-34-10      84.92       53.04

References

If you find the code useful for your research, please consider citing

@inproceedings{pang2021bag,
  title={Bag of Tricks for Adversarial Training},
  author={Pang, Tianyu and Yang, Xiao and Dong, Yinpeng and Su, Hang and Zhu, Jun},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2021}
}

and/or our related works

@inproceedings{wang2023better,
  title={Better Diffusion Models Further Improve Adversarial Training},
  author={Wang, Zekai and Pang, Tianyu and Du, Chao and Lin, Min and Liu, Weiwei and Yan, Shuicheng},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2023}
}
@inproceedings{pang2022robustness,
  title={Robustness and Accuracy Could be Reconcilable by (Proper) Definition},
  author={Pang, Tianyu and Lin, Min and Yang, Xiao and Zhu, Jun and Yan, Shuicheng},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2022}
}