DecodingTrust
A Comprehensive Assessment of Trustworthiness in GPT Models
DBA
DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)
Certified-Robustness-SoK-Oldver
This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular datasets and paper categorization.
VeriGauge
A united toolbox for running major robustness verification approaches for DNNs. [S&P 2023]
InfoBERT
[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
CRFL
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021)
multi-task-learning
Code for the ICML 2021 paper "Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation", Haoxiang Wang, Han Zhao, Bo Li.
Meta-Nerual-Trojan-Detection
SemanticAdv
FLBenchmark-toolkit
Federated Learning Framework Benchmark (UniFed)
Robustness-Against-Backdoor-Attacks
RAB: Provable Robustness Against Backdoor Attacks
DataLens
[CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long*, Luka Rimanic, Ce Zhang, Bo Li
QEBA
Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack"
Big-but-Invisible-Adversarial-Attack
This repo contains the code for the CVPR submission "Big but Invisible Adversarial Attack"
Shapley-Study
[CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification?
G-PATE
[NeurIPS 2021] "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators" by Yunhui Long*, Boxin Wang*, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl A. Gunter, Bo Li
aug-pe
[ICML 2024] Differentially Private Synthetic Data via Foundation Model APIs 2: Text
T3
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
KNN-PVLDB
Official Repo for "Efficient task-specific data valuation for nearest neighbor algorithms"
Transferability-Reduced-Smooth-Ensemble
GMI-Attack
semantic-randomized-smoothing
[CCS 2021] TSS: Transformation-specific smoothing for robustness certification
LinkTeller
[IEEE S&P 2022] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, Bo Li
SemAttack
[NAACL 2022] "SemAttack: Natural Textual Attacks via Different Semantic Spaces" by Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li
Uncovering-the-Connections-BetweenAdversarial-Transferability-and-Knowledge-Transferability
Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability.
Characterizing-Audio-Adversarial-Examples-using-Temporal-Dependency
ICLR 2019 paper "Characterizing Audio Adversarial Examples using Temporal Dependency".
Does-Adversairal-Transferability-Indicate-Knowledge-Transferability
Code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability.
stAdv-Spatially-Transformed-Adversarial-Examples-
Official implementation of the paper https://arxiv.org/abs/1801.02612
CoPur
CoPur: Certifiably Robust Collaborative Inference via Feature Purification (NeurIPS 2022)
adversarial-glue
[NeurIPS 2021] "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" by Boxin Wang*, Chejian Xu*, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li.
NonLinear-BA
Code accompanying the paper "Nonlinear Gradient Estimation for Query Efficient Blackbox Attack"
CROP
[ICLR 2022] CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
COPA
[ICLR 2022] COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
PSBA
[ICML 2021] "Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation" by Jiawei Zhang*, Linyi Li*, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
DPFL-Robustness
[CCS 2023] Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Layerwise-Orthogonal-Training
SecretGen
A general model inversion attack against large pre-trained models.
Certified-Fairness
Code for Certifying Some Distributional Fairness with Subpopulation Decomposition [NeurIPS 2022]
Meta-Neural-Kernel
The official implementation of Meta Neural Kernel (MNK), a kernel method for meta-learning
TextGuard
TextGuard: Provable Defense against Backdoor Attacks on Text Classification
KNN-shapley
MMDT
Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models
FedGame
Official implementation for the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023).
COPA_Atari