Awesome Graph Attack and Defense Papers
This repository collects links to works on adversarial attacks and defenses on graph data and graph neural networks (GNNs).
If you find this repo helpful, please consider citing our survey:
```bibtex
@article{10.1145/3447556.3447566,
  author = {Jin, Wei and Li, Yaxin and Xu, Han and Wang, Yiqi and Ji, Shuiwang and Aggarwal, Charu and Tang, Jiliang},
  title = {Adversarial Attacks and Defenses on Graphs},
  year = {2021},
  publisher = {Association for Computing Machinery},
  journal = {SIGKDD Explor. Newsl.},
  pages = {19--34},
  numpages = {16}
}
```
Contents
- 0. Toolbox
- 1. Survey Papers
- 2. Attack Papers (classified according to attack goal)
- 3. Defense Papers
- 4. Certified Robustness Papers
0. Toolbox
GitHub repository: DeepRobust (https://github.com/DSE-MSU/DeepRobust)
Corresponding paper: DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses. [paper] [documentation]
1. Survey Papers
- Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study. Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, Shuiwang Ji, Charu C Aggarwal, Jiliang Tang. SIGKDD Explorations 2020. [paper] [code]
- A Survey of Adversarial Learning on Graphs. Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, Zibin Zheng. arxiv, 2020. [paper]
- Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain. arxiv, 2019. [paper]
- Adversarial Attack and Defense on Graph Data: A Survey. Lichao Sun, Ji Wang, Philip S. Yu, Bo Li. arXiv 2018. [paper]
2. Attack Papers
2.1 Targeted Attack
- Are Defenses for Graph Neural Networks Robust? NeurIPS 2022. [paper] [code]
- Transferable Graph Backdoor Attack. RAID 2022. [paper]
- Robustness of Graph Neural Networks at Scale. NeurIPS 2021. [paper] [code]
- Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation. ICLR 2021. [paper] [code]
- Adversarial Attacks on Deep Graph Matching. NeurIPS 2020. [paper]
- Adversarial Attack on Large Scale Graph. arxiv 2020. [paper]
- Efficient Evasion Attacks to Graph Neural Networks via Influence Function. arxiv 2020. [paper]
- Graph Backdoor. Zhaohan Xi, Ren Pang, Shouling Ji, Ting Wang. arxiv 2020. [paper]
- Attacking Black-box Recommendations via Copying Cross-domain User Profiles. Wenqi Fan, Tyler Derr, Xiangyu Zhao, Yao Ma, Hui Liu, Jianping Wang, Jiliang Tang, Qing Li. arxiv 2020. [paper]
- Scalable Attack on Graph Data by Injecting Vicious Nodes. Jihong Wang, Minnan Luo, Fnu Suya, Jundong Li, Zijiang Yang, Qinghua Zheng. arxiv 2020. [paper]
- Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria. Jason Gaitonde, Jon Kleinberg, Éva Tardos. arxiv 2020. [paper]
- MGA: Momentum Gradient Attack on Network. Jinyin Chen, Yixian Chen, Haibin Zheng, Shijing Shen, Shanqing Yu, Dan Zhang, Qi Xuan. arxiv 2020. [paper]
- Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models. Xiao Zang, Yi Xie, Jie Chen, Bo Yuan. arxiv, 2020. [paper]
- Time-aware Gradient Attack on Dynamic Network Link Prediction. Jinyin Chen, Jian Zhang, Zhi Chen, Min Du, Feifei Li, Qi Xuan. arxiv 2019. [paper]
- Multiscale Evolutionary Perturbation Attack on Community Detection. Jinyin Chen, Yixian Chen, Lihong Chen, Minghao Zhao, and Qi Xuan. arxiv 2019. [paper]
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, Liming Zhu. IJCAI 2019. [paper] [code]
- Data Poisoning Attack against Knowledge Graph Embedding. Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren. IJCAI 2019. [paper]
- Attacking Graph-based Classification via Manipulating the Graph Structure. Binghui Wang, Neil Zhenqiang Gong. CCS 2019. [paper]
- A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models. Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, Junzhou Huang. AAAI 2020. [paper] [code]
- Adversarial Attacks on Node Embeddings via Graph Poisoning. Aleksandar Bojchevski, Stephan Günnemann. ICML 2019. [paper] [code]
- Adversarial Attack on Graph Structured Data. Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song. ICML 2018. [paper] [code]
- Fast Gradient Attack on Network Embedding. Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, Qi Xuan. arxiv 2018. [paper] [code]
- Adversarial Attacks on Neural Networks for Graph Data. Daniel Zügner, Amir Akbarnejad, Stephan Günnemann. KDD 2018. [paper] [code]
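Many of the targeted attacks above (e.g., FGA and Nettack-style structure perturbations) follow a common greedy pattern: score candidate edge flips by how much they hurt a surrogate model's prediction for the target node, then apply the best flip. The toy sketch below illustrates that pattern only; it is not the algorithm of any specific paper, and the linear surrogate (row-normalized propagation followed by a fixed linear map) is a deliberate simplification.

```python
import numpy as np

def propagate(adj, feats, weight):
    """Surrogate model: add self-loops, row-normalize, average neighbor
    features, then apply a fixed linear map to get per-node logits."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    a_hat /= a_hat.sum(axis=1, keepdims=True)   # row-normalize (degree >= 1)
    return a_hat @ feats @ weight               # surrogate logits

def margin(logits, node, label):
    """True-class logit minus the best competing logit for one node."""
    others = np.delete(logits[node], label)
    return logits[node, label] - others.max()

def greedy_edge_flip(adj, feats, weight, target, label):
    """Try every single edge flip incident to the target node and return
    the flip (i, j) that most decreases the target's classification margin."""
    n = adj.shape[0]
    best_flip = None
    best_margin = margin(propagate(adj, feats, weight), target, label)
    for j in range(n):
        if j == target:
            continue
        perturbed = adj.copy()
        # flip: remove the edge if present, insert it otherwise (undirected)
        perturbed[target, j] = perturbed[j, target] = 1 - perturbed[target, j]
        m = margin(propagate(perturbed, feats, weight), target, label)
        if m < best_margin:
            best_flip, best_margin = (target, j), m
    return best_flip, best_margin
```

Real attacks in the papers above differ in how they search (gradients, meta-gradients, reinforcement learning) and in the surrogate they attack, but the flip-scoring loop is the shared skeleton.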
2.2 Untargeted Attack
- Are Defenses for Graph Neural Networks Robust? NeurIPS 2022. [paper] [code]
- Robustness of Graph Neural Networks at Scale. NeurIPS 2021. [paper] [code]
- Attacking Graph Neural Networks at Scale. Simon Geisler, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann. AAAI workshop 2021. [paper]
- Towards More Practical Adversarial Attacks on Graph Neural Networks. Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei. NeurIPS 2020. [paper] [code]
- Backdoor Attacks to Graph Neural Networks. Zaixi Zhang, Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong. arxiv 2020. [paper]
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks. Haoteng Tang, Guixiang Ma, Yurong Chen, Lei Guo, Wei Wang, Bo Zeng, Liang Zhan. arxiv 2020. [paper]
- Non-target-specific Node Injection Attacks on Graph Neural Networks: A Hierarchical Reinforcement Learning Approach. Yiwei Sun, Suhang Wang, Xianfeng Tang, Tsung-Yu Hsieh, Vasant Honavar. WWW 2020. [paper]
- A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning. Xuanqing Liu, Si Si, Xiaojin (Jerry) Zhu, Yang Li, Cho-Jui Hsieh. NeurIPS 2019. [paper]
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, Liming Zhu. IJCAI 2019. [paper] [code]
- Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective. Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin. IJCAI 2019. [paper] [code]
- Adversarial Attacks on Node Embeddings via Graph Poisoning. Aleksandar Bojchevski, Stephan Günnemann. ICML 2019. [paper] [code]
- Adversarial Attacks on Graph Neural Networks via Meta Learning. Daniel Zügner, Stephan Günnemann. ICLR 2019. [paper] [code]
- Attacking Graph Convolutional Networks via Rewiring. Yao Ma, Suhang Wang, Lingfei Wu, Jiliang Tang. arxiv 2019. [paper]
2.3 Attacks on Combinatorial Problems
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness. arXiv 2021. [paper]
3. Defense Papers
- Empowering Graph Representation Learning with Test-Time Graph Transformation. ICLR 2023. [paper] [code]
- GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks. LoG 2022. [paper] [code]
- Robustness of Graph Neural Networks at Scale. NeurIPS 2021. [paper] [code]
- Elastic Graph Neural Networks. ICML 2021. [paper] [code]
- Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks. ICML 2021. [paper]
- Integrated Defense for Resilient Graph Matching. ICML 2021. [paper]
- Node Similarity Preserving Graph Convolutional Networks. WSDM 2021. [paper] [code]
- GNNGuard: Defending Graph Neural Networks against Adversarial Attacks. NeurIPS 2020. [paper]
- Graph Contrastive Learning with Augmentations. NeurIPS 2020. [paper] [code]
- Graph Information Bottleneck. NeurIPS 2020. [paper] [code]
- Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings. NeurIPS 2020. [paper] [code]
- Reliable Graph Neural Networks via Robust Aggregation. NeurIPS 2020. [paper] [code]
- Graph Structure Learning for Robust Graph Neural Networks. Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, Jiliang Tang. KDD 2020. [paper] [code]
- Robust Detection of Adaptive Spammers by Nash Reinforcement Learning. KDD 2020. [paper] [code]
- Robust Graph Representation Learning via Neural Sparsification. ICML 2020. [paper]
- Robust Collective Classification against Structural Attacks. Kai Zhou, Yevgeniy Vorobeychik. UAI 2020. [paper]
- EDoG: Adversarial Edge Detection For Graph Neural Networks. [paper]
- A Robust Hierarchical Graph Convolutional Network Model for Collaborative Filtering. Shaowen Peng, Tsunenori Mine. arxiv 2020. [paper]
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning. Vassilis N. Ioannidis, Antonio G. Marques, Georgios B. Giannakis. arxiv 2020. [paper]
- Topological Effects on Attacks Against Vertex Classification. Benjamin A. Miller, Mustafa Çamurcu, Alexander J. Gomez, Kevin Chan, Tina Eliassi-Rad. arxiv 2020. [paper]
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks. Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin. arxiv 2020. [paper]
- How Robust Are Graph Neural Networks to Structural Noise? James Fox, Sivasankaran Rajamanickam. arxiv 2020. [paper]
- GraphDefense: Towards Robust Graph Convolutional Networks. Xiaoyun Wang, Xuanqing Liu, Cho-Jui Hsieh. arxiv 2019. [paper]
- All You Need is Low (Rank): Defending Against Adversarial Attacks on Graphs. Negin Entezari, Saba Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. WSDM 2020. [paper] [code]
- Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure. Fuli Feng, Xiangnan He, Jie Tang, Tat-Seng Chua. TKDE 2019. [paper]
- Edge Dithering for Robust Adaptive Graph Convolutional Networks. Vassilis N. Ioannidis, Georgios B. Giannakis. arxiv 2019. [paper]
- GraphSAC: Detecting anomalies in large-scale graphs. Vassilis N. Ioannidis, Dimitris Berberidis, Georgios B. Giannakis. arxiv 2019. [paper]
- Robust Graph Neural Network Against Poisoning Attacks via Transfer Learning. Xianfeng Tang, Yandong Li, Yiwei Sun, Huaxiu Yao, Prasenjit Mitra, Suhang Wang. WSDM 2020. [paper]
- Robust Graph Convolutional Networks Against Adversarial Attacks. Dingyuan Zhu, Ziwei Zhang, Peng Cui, Wenwu Zhu. KDD 2019. [paper]
- Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, Liming Zhu. IJCAI 2019. [paper] [code]
- Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective. Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin. IJCAI 2019. [paper] [code]
- Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering. Ming Jin, Heng Chang, Wenwu Zhu, Somayeh Sojoudi. arxiv 2019. [paper]
- Latent Adversarial Training of Graph Convolution Networks. Hongwei Jin, Xinhua Zhang. ICML 2019 workshop. [paper]
- Batch Virtual Adversarial Training for Graph Convolutional Networks. Zhijie Deng, Yinpeng Dong, Jun Zhu. ICML 2019 Workshop. [paper]
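Several of the preprocessing defenses above build on the observation that adversarially inserted edges tend to connect nodes with dissimilar features; for example, Wu et al. (IJCAI 2019, listed above) prune edges whose endpoints have low Jaccard similarity. The sketch below is a simplified illustration of that idea, not the paper's exact procedure, and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def jaccard(u, v):
    """Jaccard similarity between two binary feature vectors."""
    inter = np.logical_and(u, v).sum()
    union = np.logical_or(u, v).sum()
    return inter / union if union else 0.0

def prune_dissimilar_edges(adj, feats, threshold=0.01):
    """Return a copy of the adjacency matrix with every edge removed
    whose endpoints share (almost) no features."""
    cleaned = adj.copy()
    rows, cols = np.nonzero(np.triu(adj, k=1))  # each undirected edge once
    for i, j in zip(rows, cols):
        if jaccard(feats[i], feats[j]) < threshold:
            cleaned[i, j] = cleaned[j, i] = 0
    return cleaned
```

Preprocessing defenses like this are cheap and model-agnostic, which is why they are often used as baselines; the learned-structure defenses above (e.g., Pro-GNN-style graph structure learning) instead optimize the graph jointly with the model.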
4. Certified Robustness Papers
- Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks. NeurIPS 2020. [paper] [code]
- Adversarial Immunization for Improving Certifiable Robustness on Graphs. arXiv 2020. [paper]
- Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation. arXiv 2020. [paper]
- Efficient Robustness Certificates for Graph Neural Networks via Sparsity-Aware Randomized Smoothing. ICML 2020. [paper] [code]
- Certifiable Robustness of Graph Convolutional Networks under Structure Perturbations. KDD 2020. [paper] [code]
- Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing. Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Neil Zhenqiang Gong. WWW 2020. [paper]
- Certifiable Robustness to Graph Perturbations. Aleksandar Bojchevski, Stephan Günnemann. NeurIPS 2019. [paper] [code]
- Certifiable Robustness and Robust Training for Graph Convolutional Networks. Daniel Zügner, Stephan Günnemann. KDD 2019. [paper] [code]
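Several certificates in this section build on randomized smoothing: classify many randomly perturbed copies of the graph, take a majority vote, and then bound how much an adversary could change that vote. The sketch below shows only the voting step, with generic Bernoulli edge-flip noise; actual certificates (e.g., the sparsity-aware smoothing paper above) use carefully calibrated noise distributions and confidence bounds that are omitted here, and `base_classifier` is a hypothetical callable.

```python
import numpy as np

def smoothed_predict(base_classifier, adj, node, flip_prob=0.05,
                     n_samples=200, seed=0):
    """Majority-vote prediction of a smoothed classifier: each possible
    undirected edge is independently flipped with probability flip_prob,
    and the base classifier is evaluated on every noisy sample."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    votes = {}
    for _ in range(n_samples):
        mask = rng.random((n, n)) < flip_prob
        mask = np.triu(mask, k=1)
        mask = mask | mask.T                  # keep the graph undirected
        noisy = np.where(mask, 1 - adj, adj)  # flip the selected entries
        label = base_classifier(noisy, node)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

A certificate then asks: given the vote counts, how many edge flips would an adversary need before a different label could win the vote? Answering that rigorously is the technical contribution of the papers above.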