• Stars: 4
• Rank: 3,286,831 (Top 66%)
• Language: Python
• License: MIT License
• Created: about 2 years ago
• Updated: over 1 year ago


Repository Details

A general model inversion attack against large pre-trained models.
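
For context on the description above: a model inversion attack generally treats the pre-trained model as a fixed function and optimizes a candidate input until the model maps it to a targeted output or embedding, thereby approximately reconstructing private data. The following is a minimal, generic sketch of that idea, assuming a toy PyTorch encoder and a known target embedding; it is illustrative only and is not the implementation in this repository.

    # Generic model-inversion sketch (assumption: NOT this repository's code).
    # Given a frozen encoder and a target embedding, optimize an input so the
    # encoder maps it close to the target, approximately recovering the input.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for a large frozen pre-trained encoder (illustrative assumption).
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)

    # Embedding of a private input that is unknown to the attacker.
    private_x = torch.rand(1, 3, 32, 32)
    target_embedding = encoder(private_x)

    # Start from noise and optimize the input to match the target embedding.
    x = torch.rand(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=0.05)

    for step in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(encoder(x), target_embedding)
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range

    print(f"final embedding-matching loss: {loss.item():.4f}")

In practice, work in this area typically adds priors (e.g., a generative model) and black-box query strategies on top of this basic loop; the sketch only shows the core inversion objective.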

More Repositories

1. DecodingTrust (Python, 240 stars): A Comprehensive Assessment of Trustworthiness in GPT Models
2. DBA (Python, 173 stars): DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)
3. Certified-Robustness-SoK-Oldver (98 stars): This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular datasets and paper categorization.
4. VeriGauge (C, 85 stars): A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023]
5. InfoBERT (Python, 83 stars): [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
6. CRFL (Python, 69 stars): CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021)
7. multi-task-learning (Python, 64 stars): Code for the ICML 2021 paper "Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation" by Haoxiang Wang, Han Zhao, Bo Li.
8. Meta-Nerual-Trojan-Detection (Python, 57 stars)
9. SemanticAdv (Python, 54 stars)
10. FLBenchmark-toolkit (Python, 47 stars): Federated Learning Framework Benchmark (UniFed)
11. Robustness-Against-Backdoor-Attacks (Python, 35 stars): RAB: Provable Robustness Against Backdoor Attacks
12. DataLens (Python, 35 stars): [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long*, Luka Rimanic, Ce Zhang, Bo Li
13. QEBA (Python, 30 stars): Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack"
14. Big-but-Invisible-Adversarial-Attack (Python, 30 stars): Code for the CVPR submission "Big but Invisible Adversarial Attack"
15. Shapley-Study (Python, 29 stars): [CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification?
16. G-PATE (Python, 27 stars): [NeurIPS 2021] "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators" by Yunhui Long*, Boxin Wang*, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl A. Gunter, Bo Li
17. T3 (Python, 26 stars): [EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
18. KNN-PVLDB (Jupyter Notebook, 25 stars): Official repo for "Efficient task-specific data valuation for nearest neighbor algorithms"
19. aug-pe (Python, 23 stars): [ICML 2024] Differentially Private Synthetic Data via Foundation Model APIs 2: Text
20. Transferability-Reduced-Smooth-Ensemble (Python, 22 stars)
21. GMI-Attack (Python, 21 stars)
22. semantic-randomized-smoothing (Roff, 20 stars): [CCS 2021] TSS: Transformation-specific smoothing for robustness certification
23. LinkTeller (Python, 19 stars): [IEEE S&P 2022] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, Bo Li
24. SemAttack (Python, 18 stars): [NAACL 2022] "SemAttack: Natural Textual Attacks via Different Semantic Spaces" by Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li
25. Uncovering-the-Connections-BetweenAdversarial-Transferability-and-Knowledge-Transferability (Python, 16 stars): Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability.
26. Characterizing-Audio-Adversarial-Examples-using-Temporal-Dependency (Python, 11 stars): ICLR 2019 paper "Characterizing Audio Adversarial Examples using Temporal Dependency".
27. Does-Adversairal-Transferability-Indicate-Knowledge-Transferability (Python, 11 stars): Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability.
28. Knowledge-Enhanced-Machine-Learning-Pipeline (Python, 10 stars): Repository for the Knowledge Enhanced Machine Learning Pipeline (KEMLP)
29. stAdv-Spatially-Transformed-Adversarial-Examples- (Python, 10 stars): Official implementation of the paper https://arxiv.org/abs/1801.02612
30. CoPur (Python, 9 stars): CoPur: Certifiably Robust Collaborative Inference via Feature Purification (NeurIPS 2022)
31. NonLinear-BA (Python, 9 stars): Code accompanying the paper "Nonlinear Gradient Estimation for Query Efficient Blackbox Attack"
32. adversarial-glue (Python, 9 stars): [NeurIPS 2021] "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" by Boxin Wang*, Chejian Xu*, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li.
33. CROP (Python, 8 stars): [ICLR 2022] CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
34. COPA (8 stars): [ICLR 2022] COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
35. PSBA (Python, 6 stars): [ICML 2021] "Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation" by Jiawei Zhang*, Linyi Li*, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
36. DPFL-Robustness (Python, 6 stars): [CCS 2023] Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
37. Layerwise-Orthogonal-Training (Python, 5 stars)
38. Certified-Fairness (Python, 4 stars): Code for "Certifying Some Distributional Fairness with Subpopulation Decomposition" [NeurIPS 2022]
39. Meta-Neural-Kernel (Python, 4 stars): Official implementation of Meta Neural Kernel (MNK), a kernel method for meta-learning
40. TextGuard (Python, 4 stars): TextGuard: Provable Defense against Backdoor Attacks on Text Classification
41. KNN-shapley (Python, 3 stars)
42. MMDT (Jupyter Notebook, 3 stars): Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models
43. FedGame (2 stars): Official implementation for the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023).
44. COPA_Atari (Python, 1 star)