  • Stars: 905
  • Rank: 50,110 (Top 1.0%)
  • Language: Jupyter Notebook
  • License: MIT License
  • Created: about 5 years ago
  • Updated: 8 months ago

Repository Details

A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.

robustness package

Install via pip: pip install robustness

Read the docs: https://robustness.readthedocs.io/en/latest/index.html

robustness is a package we (students in the MadryLab) created to make training, evaluating, and exploring neural networks flexible and easy. We use it in almost all of our projects (whether they involve adversarial training or not!) and it will be a dependency in many of our upcoming code releases. A few projects using the library are listed under More Repositories below.

We demonstrate how to use the library in a set of walkthroughs and our API reference. Functionality provided by the library includes:

  • Training and evaluating standard and robust models for a variety of datasets and architectures via a command-line interface. For example, to train a standard (non-robust) ResNet-18 on CIFAR:

python -m robustness.main --dataset cifar --data /path/to/cifar \
   --adv-train 0 --arch resnet18 --out-dir /logs/checkpoints/dir/
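
The same entry point handles adversarial training; a hedged sketch (the flag names are assumed to mirror the Python parameters used in the training example further below, i.e. constraint, eps, attack-lr, attack-steps):

python -m robustness.main --dataset cifar --data /path/to/cifar \
   --adv-train 1 --constraint 2 --eps 0.5 --attack-lr 1.5 \
   --attack-steps 20 --arch resnet50 --out-dir /logs/checkpoints/dir/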
  • Performing input manipulation using robust (or standard) models---this includes making adversarial examples, inverting representations, feature visualization, etc. The library offers a variety of optimization options (e.g. choice between real/estimated gradients, Fourier/pixel basis, custom loss functions, etc.), and is easily extendable.
import torch as ch
from robustness.datasets import CIFAR
from robustness.model_utils import make_and_restore_model

ds = CIFAR('/path/to/cifar')
model, _ = make_and_restore_model(arch='resnet50', dataset=ds,
             resume_path='/path/to/model', state_dict_path='model')
model.eval()
attack_kwargs = {
   'constraint': 'inf',   # use L-inf PGD
   'eps': 0.05,           # epsilon constraint (L-inf norm)
   'step_size': 0.01,     # step size for PGD
   'iterations': 100,     # number of PGD steps
   'targeted': True,      # targeted attack
   'custom_loss': None    # use default cross-entropy loss
}

_, test_loader = ds.make_loaders(workers=0, batch_size=10)
im, label = next(iter(test_loader))
target_label = (label + ch.randint_like(label, low=1, high=10)) % 10  # random target != true label
adv_out, adv_im = model(im, target_label, make_adv=True, **attack_kwargs)
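
To check whether the targeted attack succeeded, compare the model's predictions on the adversarial inputs with the chosen targets; a minimal sketch using the variables above (this accounting is not part of the library API):

# Targeted attack success rate: fraction of adversarial inputs
# classified as the attack target
pred = adv_out.argmax(dim=1)
success_rate = (pred == target_label).float().mean().item()
print(f'Targeted attack success rate: {success_rate:.2%}')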
  • Importing robustness as a package, which allows for easy training of neural networks with support for custom loss functions, logging, data loading, and more! A good introduction can be found in our two-part walkthrough (Part 1, Part 2).
from robustness import model_utils, datasets, train, defaults
from robustness.datasets import CIFAR

# We use cox (http://github.com/MadryLab/cox) to log, store, and analyze
# results. Read more at https://cox.readthedocs.io.
from cox.utils import Parameters
import cox.store

# Hard-coded dataset, architecture, batch size, workers
ds = CIFAR('/path/to/cifar')
m, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)
train_loader, val_loader = ds.make_loaders(batch_size=128, workers=8)

# Create a cox store for logging (placeholder output directory)
OUT_DIR = '/path/to/output/dir'
out_store = cox.store.Store(OUT_DIR)

# Hard-coded base parameters
train_kwargs = {
    'out_dir': "train_out",   # subdirectory for training logs
    'adv_train': 1,           # use adversarial training
    'constraint': '2',        # L2 PGD
    'eps': 0.5,               # epsilon constraint (L2 norm)
    'attack_lr': 1.5,         # PGD step size
    'attack_steps': 20        # number of PGD steps
}
train_args = Parameters(train_kwargs)

# Fill whatever parameters are missing from the defaults
train_args = defaults.check_and_fill_args(train_args,
                        defaults.TRAINING_ARGS, CIFAR)
train_args = defaults.check_and_fill_args(train_args,
                        defaults.PGD_ARGS, CIFAR)

# Train a model
train.train_model(train_args, m, (train_loader, val_loader), store=out_store)

Note: robustness requires PyTorch to be installed with CUDA support.
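
Before training, it is worth confirming that CUDA is actually visible to PyTorch; a one-line check using standard PyTorch (not specific to robustness):

import torch
assert torch.cuda.is_available(), 'robustness expects a CUDA-enabled PyTorch build'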

Pretrained models

Along with the training code, we release a number of pretrained models for different datasets, norms and ε-train values. This list will be updated as we release more or improved models. Please cite this library (see bibtex entry below) if you use these models in your research.

For each (model, ε-test) combination we evaluate 20-step and 100-step PGD with a step size of 2.5 * ε-test / num_steps. Since these two accuracies are quite close to each other, we do not consider more steps of PGD. For each value of ε-test, we highlight the best robust accuracy achieved over different ε-train in bold.
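
For example, under this rule a 20-step L2 evaluation at ε-test = 0.5 uses a step size of 2.5 * 0.5 / 20 = 0.0625. A minimal sketch of such an evaluation, reusing the attack interface from the example above (model, im, and label as defined there; the attack is untargeted, so targeted is omitted):

# Hedged sketch: 20-step untargeted L2 PGD at eps = 0.5
eps_test = 0.5
num_steps = 20
eval_kwargs = {
    'constraint': '2',                        # L2 PGD
    'eps': eps_test,
    'step_size': 2.5 * eps_test / num_steps,  # = 0.0625, per the rule above
    'iterations': num_steps,
}
adv_out, adv_im = model(im, label, make_adv=True, **eval_kwargs)
robust_acc = (adv_out.argmax(dim=1) == label).float().mean().item()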

Note #1: We did not perform any hyperparameter tuning and simply used the same hyperparameters as standard training. It is likely that exploring different training hyperparameters will increase these robust accuracies by a few percentage points.

Note #2: The PyTorch checkpoint (.pt) files below were saved with the following versions of PyTorch and Dill:

torch==1.1.0
dill==0.2.9
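
When loading these checkpoints directly (rather than through make_and_restore_model), passing dill as the pickle module to torch.load is one way to deserialize them; a sketch with a hypothetical checkpoint path:

import dill
import torch as ch

# dill is needed because the checkpoints were serialized with it
ckpt = ch.load('/path/to/checkpoint.pt', pickle_module=dill)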

CIFAR10 L2-norm (ResNet50):

CIFAR10 L2-robust accuracy (20-step PGD / 100-step PGD; best robust accuracy per ε-test in bold):

| ε-test \ ε-train | 0.0 | 0.25 | 0.5 | 1.0 |
|------------------|-----|------|-----|-----|
| 0.0 | 95.25% / - | 92.77% / - | 90.83% / - | 81.62% / - |
| 0.25 | 8.66% / 7.34% | 81.21% / 81.19% | **82.34% / 82.31%** | 75.53% / 75.53% |
| 0.5 | 0.28% / 0.14% | 62.30% / 62.13% | **70.17% / 70.11%** | 68.63% / 68.61% |
| 1.0 | 0.00% / 0.00% | 21.18% / 20.66% | 40.47% / 40.22% | **52.72% / 52.61%** |
| 2.0 | 0.00% / 0.00% | 0.58% / 0.46% | 5.23% / 4.97% | **18.59% / 18.05%** |

CIFAR10 Linf-norm (ResNet50):

CIFAR10 Linf-robust accuracy (20-step PGD / 100-step PGD; best robust accuracy per ε-test in bold):

| ε-test \ ε-train | 0/255 | 8/255 |
|------------------|-------|-------|
| 0/255 | 95.25% / - | 87.03% / - |
| 8/255 | 0.00% / 0.00% | **53.49% / 53.29%** |
| 16/255 | 0.00% / 0.00% | **18.13% / 17.62%** |

ImageNet L2-norm (ResNet50):

ImageNet L2-robust accuracy (20-step PGD / 100-step PGD; best robust accuracy per ε-test in bold):

| ε-test \ ε-train | 0.0 | 3.0 |
|------------------|-----|-----|
| 0.0 | 76.13% / - | 57.90% / - |
| 0.5 | 3.35% / 2.98% | **54.42% / 54.42%** |
| 1.0 | 0.44% / 0.37% | **50.67% / 50.67%** |
| 2.0 | 0.16% / 0.14% | **43.04% / 43.02%** |
| 3.0 | 0.13% / 0.12% | **35.16% / 35.09%** |

ImageNet Linf-norm (ResNet50):

ImageNet Linf-robust accuracy (20-step PGD / 100-step PGD; best robust accuracy per ε-test in bold):

| ε-test \ ε-train | 0.0 | 4/255 | 8/255 |
|------------------|-----|-------|-------|
| 0/255 | 76.13% / - | 62.42% / - | 47.91% / - |
| 4/255 | 0.04% / 0.03% | **33.58% / 33.38%** | 33.06% / 33.03% |
| 8/255 | 0.01% / 0.01% | 13.13% / 12.73% | **19.63% / 19.52%** |
| 16/255 | 0.01% / 0.01% | 1.53% / 1.37% | **5.00% / 4.82%** |

Citation

If you use this library in your research, cite it as follows:

@misc{robustness,
   title={Robustness (Python Library)},
   author={Logan Engstrom and Andrew Ilyas and Hadi Salman and Shibani Santurkar and Dimitris Tsipras},
   year={2019},
   url={https://github.com/MadryLab/robustness}
}

(Have you used the package and found it useful? Let us know!)

More Repositories

1. mnist_challenge: A challenge to explore adversarial robustness of neural networks on MNIST. (Python, 720 stars)
2. cifar10_challenge: A challenge to explore adversarial robustness of neural networks on CIFAR10. (Python, 484 stars)
3. photoguard: Raising the Cost of Malicious AI-Powered Image Editing (Jupyter Notebook, 419 stars)
4. constructed-datasets: Datasets for the paper "Adversarial Examples are not Bugs, They Are Features" (178 stars)
5. trak: A fast, effective data attribution method for neural networks in PyTorch (Python, 169 stars)
6. robust_representations: Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" (Jupyter Notebook, 158 stars)
7. backgrounds_challenge (Python, 134 stars)
8. robustness_applications: Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" (Jupyter Notebook, 125 stars)
9. implementation-matters (Python, 104 stars)
10. EditingClassifiers (Python, 95 stars)
11. robust-features-code: Code for "Robustness May Be at Odds with Accuracy" (Jupyter Notebook, 91 stars)
12. datamodels-data: Data for "Datamodels: Predicting Predictions with Training Data" (Python, 64 stars)
13. blackbox-bandits: Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" (Python, 61 stars)
14. BREEDS-Benchmarks (Jupyter Notebook, 50 stars)
15. cox: A lightweight experimental logging library (Python, 50 stars)
16. adversarial_spatial: Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations. (Python, 49 stars)
17. modeldiff: ModelDiff: A Framework for Comparing Learning Algorithms (Jupyter Notebook, 44 stars)
18. failure-directions: Distilling Model Failures as Directions in Latent Space (Jupyter Notebook, 42 stars)
19. smoothed-vit: Certified Patch Robustness via Smoothed Vision Transformers (Python, 41 stars)
20. label-consistent-backdoor-code: Code for "Label-Consistent Backdoor Attacks" (Python, 40 stars)
21. dataset-interfaces: Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation (Jupyter Notebook, 39 stars)
22. DebuggableDeepNetworks (Jupyter Notebook, 37 stars)
23. data-transfer (Python, 31 stars)
24. ImageNetMultiLabel: Fine-grained ImageNet annotations (Jupyter Notebook, 28 stars)
25. relu_stable (Python, 26 stars)
26. spatial-pytorch: Codebase for "Exploring the Landscape of Spatial Robustness" (ICML'19, https://arxiv.org/abs/1712.02779) (Jupyter Notebook, 26 stars)
27. dataset-replication-analysis (Jupyter Notebook, 25 stars)
28. backdoor_data_poisoning (Python, 24 stars)
29. glm_saga: Minimal, standalone library for solving GLMs in PyTorch (Python, 23 stars)
30. AdvEx_Tutorial (Jupyter Notebook, 14 stars)
31. rethinking-backdoor-attacks (Python, 14 stars)
32. bias-transfer (Python, 13 stars)
33. robustness_lib (Python, 12 stars)
34. journey-TRAK: Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" (Python, 12 stars)
35. datamodels (Python, 12 stars)
36. copriors: Combining Diverse Feature Priors (Python, 8 stars)
37. rla: Residue Level Alignment (Python, 8 stars)
38. missingness: Code for our ICLR 2022 paper "Missingness Bias in Model Debugging" (Jupyter Notebook, 5 stars)
39. fast_l1 (Jupyter Notebook, 3 stars)
40. pytorch-lightning-imagenet (Python, 3 stars)
41. post--adv-discussion (HTML, 2 stars)
42. AIaaS_Supply_Chains: Dataset and overview (2 stars)
43. pytorch-example-imagenet (Python, 1 star)
44. mnist_challenge_models (1 star)
45. robust_model_colab (JavaScript, 1 star)