⚠️ Deprecated
This list is no longer updated with new papers, but it remains a good reference for getting started.
Awesome Adversarial Machine Learning
A curated list of awesome adversarial machine learning resources, inspired by awesome-computer-vision.
Table of Contents
- Blogs
- Papers
- Talks
- License
Blogs
- Breaking Linear Classifiers on ImageNet, A. Karpathy.
- Breaking things is easy, N. Papernot & I. Goodfellow.
- Attacking Machine Learning with Adversarial Examples, N. Papernot, I. Goodfellow, S. Huang, Y. Duan, P. Abbeel, J. Clark.
- Robust Adversarial Examples, Anish Athalye.
- A Brief Introduction to Adversarial Examples, A. Madry et al.
- Training Robust Classifiers (Part 1), A. Madry et al.
- Adversarial Machine Learning Reading List, N. Carlini
- Recommendations for Evaluating Adversarial Example Defenses, N. Carlini
Papers
General
- Intriguing properties of neural networks, C. Szegedy et al., arxiv 2014
- Explaining and Harnessing Adversarial Examples, I. Goodfellow et al., ICLR 2015 (a minimal FGSM sketch follows this list)
- Motivating the Rules of the Game for Adversarial Example Research, J. Gilmer et al., arxiv 2018
- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, B. Biggio, Pattern Recognition 2018
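
For readers new to the area, the fast gradient sign method (FGSM) from "Explaining and Harnessing Adversarial Examples" above is the quickest way to get hands-on. The snippet below is a minimal sketch of FGSM in PyTorch; the framework choice, the `epsilon` value, and the `model`/`loss_fn` names are illustrative assumptions, not something prescribed by the papers above.

```python
import torch

def fgsm(model, loss_fn, x, y, epsilon=0.03):
    """Minimal FGSM sketch: perturb x by epsilon in the direction of the
    sign of the loss gradient so that the loss on the true label y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x loss), clipped to the valid pixel range
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```
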
Attack
Image Classification
- DeepFool: a simple and accurate method to fool deep neural networks, S. Moosavi-Dezfooli et al., CVPR 2016
- The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., EuroS&P 2016
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, N. Papernot et al., arxiv 2016
- Adversarial Examples In The Physical World, A. Kurakin et al., ICLR workshop 2017 (a sketch of its basic iterative method follows this list)
- Delving into Transferable Adversarial Examples and Black-box Attacks, Y. Liu et al., ICLR 2017
- Towards Evaluating the Robustness of Neural Networks, N. Carlini et al., IEEE S&P 2017
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, N. Papernot et al., Asia CCS 2017
- Privacy and machine learning: two unexpected allies?, I. Goodfellow et al.
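
Several of the attacks above iterate a small FGSM-style step and project the result back into an epsilon-ball around the original input; "Adversarial Examples In The Physical World" calls this the basic iterative method. Below is a hedged sketch of that loop in PyTorch, reusing the hypothetical `fgsm`-style signature from the earlier snippet; the step size, iteration count, and pixel range are illustrative assumptions.

```python
import torch

def basic_iterative_attack(model, loss_fn, x, y, epsilon=0.03, alpha=0.005, steps=10):
    """Sketch of an iterative gradient-sign attack: repeated small steps,
    projected back into an L-infinity ball of radius epsilon around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # project back into the epsilon-ball and the valid pixel range
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```
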
Reinforcement Learning
- Adversarial attacks on neural network policies, S. Huang et al., ICLR workshop 2017
- Tactics of Adversarial Attacks on Deep Reinforcement Learning Agents, Y. Lin et al., IJCAI 2017
- Delving into adversarial attacks on deep policies, J. Kos et al., ICLR workshop 2017
Segmentation & Object Detection
- Adversarial Examples for Semantic Segmentation and Object Detection, C. Xie et al., ICCV 2017
VAE-GAN
- Adversarial examples for generative models, J. Kos et al., arxiv 2017
Speech Recognition
- Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, N. Carlini et al., arxiv 2018
Question Answering System
- Adversarial Examples for Evaluating Reading Comprehension Systems, R. Jia et al., EMNLP 2017
Defense
Adversarial Training
- Adversarial Machine Learning At Scale, A. Kurakin et al., ICLR 2017 (a sketch of the training loop follows this list)
- Ensemble Adversarial Training: Attacks and Defenses, F. Tramèr et al., arxiv 2017
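
Both papers above build on the same training-loop idea: craft adversarial examples against the current model on every minibatch and train on a mix of clean and adversarial inputs. Below is a minimal sketch of one such epoch in PyTorch, assuming an FGSM-style `attack` function like the earlier sketch and hypothetical `model`, `loader`, and `optimizer` objects; the 50/50 loss weighting is an illustrative assumption.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, attack, epsilon=0.03):
    """One epoch of adversarial training: half the loss comes from clean
    examples, half from adversarial examples crafted against the current model."""
    model.train()
    for x, y in loader:
        # craft adversarial examples against the current parameters
        x_adv = attack(model, F.cross_entropy, x, y, epsilon)
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```
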
Defensive Distillation
- Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, N. Papernot et al., IEEE S&P 2016 (a sketch of the distillation loss follows this list)
- Extending Defensive Distillation, N. Papernot et al., arxiv 2017
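
Defensive distillation trains a second network on the first network's softened output probabilities, computed with a high softmax temperature T, and deploys the distilled model at T = 1. The snippet below is a minimal sketch of the student-side loss in PyTorch; the temperature value and tensor names are illustrative assumptions, and the papers above explore a range of temperatures.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Sketch of the defensive-distillation objective: the student matches
    the teacher's softened class probabilities computed at temperature T."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    student_log_probs = F.log_softmax(student_logits / T, dim=1)
    # cross-entropy between the softened teacher and student distributions
    return -(soft_targets * student_log_probs).sum(dim=1).mean()
```
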
Generative Model
- PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, Y. Song et al., ICLR 2018
- Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight, Y. Lin et al., NIPS workshop 2017
Regularization
- Distributional Smoothing with Virtual Adversarial Training, T. Miyato et al., ICLR 2016
- Adversarial Training Methods for Semi-Supervised Text Classification, T. Miyato et al., ICLR 2017
Others
- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, A. Nguyen et al., CVPR 2015
Data Poisoning
- Poisoning Behavioral Malware Clustering, B. Biggio et al., 2014
- Is Data Clustering in Adversarial Settings Secure?, B. Biggio et al., 2015
- Poisoning complete-linkage hierarchical clustering, B. Biggio et al., 2014
- Is Feature Selection Secure against Training Data Poisoning?, H. Xiao et al., 2015
- Adversarial Feature Selection Against Evasion Attacks, F. Zhang et al., 2016
Talks
- Do Statistical Models Understand the World?, I. Goodfellow, 2015
- Classifiers under Attack, D. Evans, 2017
- Adversarial Examples in Machine Learning, N. Papernot, 2017
License
To the extent possible under law, Yen-Chen Lin has waived all copyright and related or neighboring rights to this work.