# TrojanZoo
NOTE: TrojanZoo requires `python>=3.11`, `pytorch>=2.0.0` and `torchvision>=0.15.0`, which must be installed manually. We recommend using `conda` to install them.
This is the code implementation (PyTorch) of our EuroS&P 2022 paper:
TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
TrojanZoo provides a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) on image classification in deep learning. It is composed of two packages: `trojanzoo` and `trojanvision`. `trojanzoo` contains abstract classes and utilities, while `trojanvision` contains both abstract and concrete classes for the image classification task.
Note: This repository also covers the implementation of our KDD 2020 paper AdvMind: Inferring Adversary Intent of Black-Box Attacks and our CCS 2020 paper A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models.
## Documentation
We have documentation available at https://ain-soph.github.io/trojanzoo.
## Features
- Colorful and verbose output!
  Note: enable with `--color` for color and `--verbose` for verbose.
  To open an interactive window with color, use `python - --color`.
- Modular design (plug and play)
- Good code linting support
- Register your own module to the library.
- Native PyTorch Output
  `trojanzoo` and `trojanvision` provide APIs to generate raw PyTorch instances, which makes it flexible to work with native `pytorch` and other third-party libraries (see the sketch after this list):
  - `trojanzoo.datasets.DataSet` can generate `torch.utils.data.Dataset` and `torch.utils.data.DataLoader`
  - `trojanzoo.models.Model`: attribute `_model` is a `torch.nn.Module`, attribute `model` is a `torch.nn.DataParallel`
  - Specifically, `trojanvision.datasets.ImageSet` can generate a `torchvision.datasets.VisionDataset`, and `trojanvision.datasets.ImageFolder` can generate a `torchvision.datasets.ImageFolder`
- Enable PyTorch native AMP (Automatic Mixed Precision) with `--amp` for training
- Flexible Configuration Files
- Good help information to check arguments (`-h` or `--help`)
- Detailed and well-organized `summary()` for each module
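A minimal sketch of pulling native PyTorch objects out of `trojanvision`, assuming the `create` factory helpers and the `get_dataloader` method from the documentation (check the docs for exact signatures):

```python
import trojanvision

# Build a dataset (ImageSet) and a model via the factory helpers.
dataset = trojanvision.datasets.create(dataset_name='cifar10')
model = trojanvision.models.create(model_name='resnet18_comp', dataset=dataset)

# Extract raw PyTorch instances to use with native pytorch code.
train_loader = dataset.get_dataloader(mode='train')  # torch.utils.data.DataLoader
module = model._model                                # torch.nn.Module
parallel = model.model                               # torch.nn.DataParallel
```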
## Installation
- `pip install trojanzoo`
- (todo) `conda install trojanzoo`
- `docker pull local0state/trojanzoo` or `docker pull ghcr.io/ain-soph/trojanzoo`
- (HIGHLY RECOMMENDED)
  ```
  git clone https://github.com/ain-soph/trojanzoo
  pip install -e trojanzoo
  ```
  This installs the GitHub repo as a package while avoiding copying files to `site-packages`, so you can easily keep it updated with `git pull`.
## Quick Start
You can use the provided example scripts to reproduce the evaluation results in our paper.
Note: The program won't save results without `--save`.

- Train a model:
  e.g., `ResNet18` on `CIFAR10` with 95% Acc (see the programmatic sketch after this list)
  ```
  python ./examples/train.py --color --verbose 1 --dataset cifar10 --model resnet18_comp --lr_scheduler --cutout --grad_clip 5.0 --save
  ```
- Test backdoor attack (e.g., BadNet):
  e.g., `BadNet` with `ResNet18` on `CIFAR10`
  ```
  python ./examples/backdoor_attack.py --color --verbose 1 --pretrained --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack badnet --mark_random_init --epochs 50 --lr 0.01 --save
  ```
- Test backdoor defense (e.g., Neural Cleanse):
  e.g., `Neural Cleanse` against `BadNet`
  ```
  python ./examples/backdoor_defense.py --color --verbose 1 --pretrained --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack badnet --defense neural_cleanse --mark_random_init --epochs 50 --lr 0.01
  ```
- IMC:
  ```
  python ./examples/backdoor_attack.py --color --verbose 1 --pretrained --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack imc --mark_random_init --epochs 50 --lr 0.01 --save
  ```
- AdvMind (with `attack adaptive` and `model adaptive`):
  ```
  python ./examples/adv_defense.py --color --verbose 1 --pretrained --validate_interval 1 --dataset cifar10 --model resnet18_comp --attack pgd --defense advmind --attack_adapt --defense_adapt
  ```
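To call the library from Python instead of the example scripts, the following sketch mirrors the structure of `./examples/train.py`. It is a sketch based on the example scripts (the `add_argument`/`create` helpers, `trojanvision.summary` and `model._train` are assumed from there); check the docs for exact signatures:

```python
import argparse

import trojanvision

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # Each subpackage registers its own command-line arguments.
    trojanvision.environ.add_argument(parser)
    trojanvision.datasets.add_argument(parser)
    trojanvision.models.add_argument(parser)
    trojanvision.trainer.add_argument(parser)
    kwargs = parser.parse_args().__dict__

    # Factory helpers build the environment, dataset, model and trainer.
    env = trojanvision.environ.create(**kwargs)
    dataset = trojanvision.datasets.create(**kwargs)
    model = trojanvision.models.create(dataset=dataset, **kwargs)
    trainer = trojanvision.trainer.create(dataset=dataset, model=model, **kwargs)

    if env['verbose']:
        trojanvision.summary(env=env, dataset=dataset, model=model, trainer=trainer)
    model._train(**trainer)
```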
## Detailed Usage
### Configuration file structure
All arguments in the parser can take their default values from configuration files.
If an argument value is not set in the config files, the default value of `__init__()` is used.

Parameters config (ascending priority order; a higher-priority config overrides lower-priority ones; within each priority level, `trojanvision` configs overwrite `trojanzoo` configs; a hypothetical config example follows this list):
- Package Default: `/trojanzoo/configs/`, `/trojanvision/configs/`
  These are the package default settings. Please don't modify them.
  You can use them as templates to set other configs.
- User Default: `~/.trojanzoo/configs/trojanzoo/`, `~/.trojanzoo/configs/trojanvision/`
- Workspace Default: `/configs/trojanzoo/`, `/configs/trojanvision/`
- Custom Config: `--config [config location]`
- CMD parameters: `--[parameter] [value]`
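For instance, a user-default trainer config might look like the following (hypothetical example: the file name and keys mirror the parser arguments; use the package defaults under `/trojanzoo/configs/` as the authoritative template):

```yaml
# ~/.trojanzoo/configs/trojanvision/trainer.yml  (hypothetical)
epochs: 50  # default value for --epochs
lr: 0.01    # default value for --lr
```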
### Store path of Dataset, Model, Attack & Defense Results
Modify them in the corresponding config files or via command-line arguments.
- Dataset: `--data_dir` (`./data/data`)
- Model: `--model_dir` (`./data/model`)
- Attack: `--attack_dir` (`./data/attack`)
- Defense: `--defense_dir` (`./data/defense`)
### Output Verbose Information
- CMD modules: `--verbose 1`
- Colorful output: `--color`
- tqdm progress bar: `--tqdm`
- Check command-line argument usage: `--help`
- AdvMind verbose information: `--output [number]`
### Use your DIY Dataset/Model/Attack/Defense
- Follow our examples to write your DIY class (`CIFAR10`, `ResNet`, `IMC`, `Neural Cleanse`).
  It's necessary to subclass our base classes (`Dataset`, `Model`, `Attack`, `Defense`).
  Optional base classes are available depending on your use case (`ImageSet`, `ImageFolder`, `ImageModel`).
- Register your DIY class in `trojanvision`.
  Example: `trojanvision.attacks.class_dict[attack_name] = AttackClass`
  (A minimal registration sketch follows this list.)
- Create your config files if necessary.
  No need to modify any code. Just add `{attack_name}.yml` (or `.json`) to the config directory.
- Good to go!
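A minimal registration sketch; the class body is hypothetical, so follow the concrete attacks shipped with the repo (e.g., BadNet) for the methods your class actually needs to implement:

```python
import trojanvision
from trojanzoo.attacks import Attack  # base class named above


class MyAttack(Attack):
    name: str = 'my_attack'  # matches the value passed to --attack

    def attack(self, **kwargs):
        # hypothetical placeholder: your attack logic goes here
        raise NotImplementedError


# Register the DIY class so that `--attack my_attack` resolves to it.
trojanvision.attacks.class_dict['my_attack'] = MyAttack
```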
## Todo List
- Sphinx Docs
- Unit test
## License
TrojanZoo has a GPL-style license, as found in the LICENSE file.
## Cite our paper
```bibtex
@inproceedings{pang:2022:eurosp,
  title     = {TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors},
  author    = {Ren Pang and Zheng Zhang and Xiangshan Gao and Zhaohan Xi and Shouling Ji and Peng Cheng and Ting Wang},
  year      = {2022},
  booktitle = {Proceedings of IEEE European Symposium on Security and Privacy (Euro S\&P)},
}

@inproceedings{pang:2020:ccs,
  title     = {A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models},
  author    = {Ren Pang and Hua Shen and Xinyang Zhang and Shouling Ji and Yevgeniy Vorobeychik and Xiapu Luo and Alex Liu and Ting Wang},
  year      = {2020},
  booktitle = {Proceedings of ACM SIGSAC Conference on Computer and Communications Security (CCS)},
}

@inproceedings{pang:2020:kdd,
  title     = {AdvMind: Inferring Adversary Intent of Black-Box Attacks},
  author    = {Ren Pang and Xinyang Zhang and Shouling Ji and Xiapu Luo and Ting Wang},
  year      = {2020},
  booktitle = {Proceedings of ACM International Conference on Knowledge Discovery and Data Mining (KDD)},
}
```