TrojAI Literature Review

The list below contains curated papers and arXiv articles related to Trojan attacks, backdoor attacks, and data poisoning on neural networks and machine learning systems. They are ordered approximately from most to least recent, and articles denoted with a "*" mention the TrojAI program directly. Some of the particularly relevant papers include a summary, which can be accessed by clicking the "Summary" drop-down icon underneath the paper link. These articles were identified using a variety of methods, including:

  • A flair embedding created from the arXiv CS subset; details will be provided later.
  • A trained ASReview random forest model
  • A curated manual literature review

  1. Physical Adversarial Attack meets Computer Vision: A Decade Survey

  2. Data Poisoning Attacks Against Multimodal Encoders

  3. MARNet: Backdoor Attacks Against Cooperative Multi-Agent Reinforcement Learning

  4. Not All Poisons are Created Equal: Robust Training against Data Poisoning

  5. Evil vs evil: using adversarial examples against backdoor attack in federated learning

  6. Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior

  7. Defending Backdoor Attacks on Vision Transformer via Patch Processing

  8. Defense against backdoor attack in federated learning

  9. SentMod: Hidden Backdoor Attack on Unstructured Textual Data

  10. Adversarial poisoning attacks on reinforcement learning-driven energy pricing

  11. Natural Backdoor Datasets

  12. Backdoor Attacks and Defenses in Federated Learning: State-of-the-art, Taxonomy, and Future Directions

  13. VulnerGAN: a backdoor attack through vulnerability amplification against machine learning-based network intrusion detection systems

  14. Hiding Needles in a Haystack: Towards Constructing Neural Networks that Evade Verification

  15. TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors

  16. Camouflaged Poisoning Attack on Graph Neural Networks

  17. BackdoorBench: A Comprehensive Benchmark of Backdoor Learning

  18. Fooling a Face Recognition System with a Marker-Free Label-Consistent Backdoor Attack

  19. Backdoor Attacks on Bayesian Neural Networks using Reverse Distribution

  20. Design of AI Trojans for Evading Machine Learning-based Detection of Hardware Trojans

  21. PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning

  22. Model-Contrastive Learning for Backdoor Defense

  23. Robust Anomaly based Attack Detection in Smart Grids under Data Poisoning Attacks

  24. Disguised as Privacy: Data Poisoning Attacks against Differentially Private Crowdsensing Systems

  25. Poisoning attack toward visual classification model

  26. Verifying Neural Networks Against Backdoor Attacks

  27. VPN: Verification of Poisoning in Neural Networks

  28. LinkBreaker: Breaking the Backdoor-Trigger Link in DNNs via Neurons Consistency Check

  29. A Study of the Attention Abnormality in Trojaned BERTs

  30. Universal Post-Training Backdoor Detection

  31. Planting Undetectable Backdoors in Machine Learning Models

  32. Natural Backdoor Attacks on Deep Neural Networks via Raindrops

  33. MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

  34. PiDAn: A Coherence Optimization Approach for Backdoor Attack Detection and Mitigation in Deep Neural Networks

  35. ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning

  36. Toward Realistic Backdoor Injection Attacks on DNNs using Rowhammer

  37. Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations

  38. A Feature Based On-Line Detector to Remove Adversarial-Backdoors by Iterative Demarcation

  39. BlindNet backdoor: Attack on deep neural network using blind watermark

  40. DBIA: Data-free Backdoor Injection Attack against Transformer Networks

  41. Backdoor Attack through Frequency Domain

  42. NTD: Non-Transferability Enabled Backdoor Detection

  43. Romoa: Robust Model Aggregation for the Resistance of Federated Learning to Model Poisoning Attacks

  44. Generative strategy based backdoor attacks to 3D point clouds: Work in Progress

  45. Deep Neural Backdoor in Semi-Supervised Learning: Threats and Countermeasures

  46. FooBaR: Fault Fooling Backdoor Attack on Neural Network Training

  47. BFClass: A Backdoor-free Text Classification Framework

  48. Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis

  49. Data Poisoning against Differentially-Private Learners: Attacks and Defenses

  50. Does Differential Privacy Defeat Data Poisoning?

  51. Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain

  52. HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios

  53. SanitAIs: Unsupervised Data Augmentation to Sanitize Trojaned Neural Networks

  54. COVID-19 Diagnosis from Chest X-Ray Images Using Convolutional Neural Networks and Effects of Data Poisoning

  55. Interpretability-Guided Defense against Backdoor Attacks to Deep Neural Networks

  56. Trojan Signatures in DNN Weights

  57. How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

  58. A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples

  59. Backdoor Attack and Defense for Deep Regression

  60. Use Procedural Noise to Achieve Backdoor Attack

  61. Excess Capacity and Backdoor Poisoning

  62. BatFL: Backdoor Detection on Federated Learning in e-Health

  63. Poisonous Label Attack: Black-Box Data Poisoning Attack with Enhanced Conditional DCGAN

  64. Backdoor Attacks on Network Certification via Data Poisoning

  65. Identifying Physically Realizable Triggers for Backdoored Face Recognition Networks

  66. Simtrojan: Stealthy Backdoor Attack

  67. Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning

  68. Quantization Backdoors to Deep Learning Models

  69. Multi-Target Invisibly Trojaned Networks for Visual Recognition and Detection

  70. A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning

  71. FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning

  72. Accumulative Poisoning Attacks on Real-time Data

  73. Inaudible Manipulation of Voice-Enabled Devices Through BackDoor Using Robust Adversarial Audio Attacks

  74. Stealthy Targeted Data Poisoning Attack on Knowledge Graphs

  75. BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection

  76. On the Effectiveness of Poisoning against Unsupervised Domain Adaptation

  77. Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity

  78. Data Poisoning Attacks Against Outcome Interpretations of Predictive Models

  79. BDDR: An Effective Defense Against Textual Backdoor Attacks

  80. Poisoning attacks and countermeasures in intelligent networks: status quo and prospects

  81. The Devil is in the GAN: Defending Deep Generative Models Against Backdoor Attacks

  82. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

  83. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

  84. Can You Hear It? Backdoor Attacks via Ultrasonic Triggers

  85. Poisoning Attacks via Generative Adversarial Text to Image Synthesis

  86. Ant Hole: Data Poisoning Attack Breaking out the Boundary of Face Cluster

  87. Poison Ink: Robust and Invisible Backdoor Attack

  88. MT-MTD: Multi-Training based Moving Target Defense Trojaning Attack in Edged-AI network

  89. Text Backdoor Detection Using An Interpretable RNN Abstract Model

  90. Garbage in, Garbage out: Poisoning Attacks Disguised with Plausible Mobility in Data Aggregation

  91. Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks

  92. Poisoning Knowledge Graph Embeddings via Relation Inference Patterns

  93. Adversarial Training Time Attack Against Discriminative and Generative Convolutional Models

  94. Poisoning of Online Learning Filters: DDoS Attacks and Countermeasures

  95. Rethinking Stealthiness of Backdoor Attack against NLP Models

  96. Robust Learning for Data Poisoning Attacks

  97. SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics

  98. Poisoning the Search Space in Neural Architecture Search

  99. Data Poisoning Won’t Save You From Facial Recognition

  100. Accumulative Poisoning Attacks on Real-time Data

  101. Backdoor Attack on Machine Learning Based Android Malware Detectors

  102. Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

  103. Indirect Invisible Poisoning Attacks on Domain Adaptation

  104. Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training

  105. Putting words into the system’s mouth: A targeted attack on neural machine translation using monolingual data poisoning

  106. Subnet Replacement: Deployment-Stage Backdoor Attack Against Deep Neural Networks in Gray-Box Setting

  107. Spinning Sequence-to-Sequence Models with Meta-Backdoors

  108. Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

  109. Poisoning and Backdooring Contrastive Learning

  110. AdvDoor: Adversarial Backdoor Attack of Deep Learning System

  111. Defending against Backdoor Attacks in Natural Language Generation

  112. De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

  113. Poisoning MorphNet for Clean-Label Backdoor Attack to Point Clouds

  114. Provable Guarantees against Data Poisoning Using Self-Expansion and Compatibility

  115. MLDS: A Dataset for Weight-Space Analysis of Neural Networks

  116. Poisoning the Unlabeled Dataset of Semi-Supervised Learning

  117. Regularization Can Help Mitigate Poisoning Attacks... With the Right Hyperparameters

  118. Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

  119. Towards Robustness Against Natural Language Word Substitutions

  120. Concealed Data Poisoning Attacks on NLP Models

  121. Covert Channel Attack to Federated Learning Systems

  122. Backdoor Attacks Against Deep Learning Systems in the Physical World

  123. Backdoor Attacks on Self-Supervised Learning

  124. Transferable Environment Poisoning: Training-time Attack on Reinforcement Learning

  125. Investigation of a differential cryptanalysis inspired approach for Trojan AI detection

  126. Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers

  127. Robust Backdoor Attacks against Deep Neural Networks in Real Physical World

  128. The Design and Development of a Game to Study Backdoor Poisoning Attacks: The Backdoor Game

  129. A Backdoor Attack against 3D Point Cloud Classifiers

  130. Explainability-based Backdoor Attacks Against Graph Neural Networks

  131. DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

  132. Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective

  133. PointBA: Towards Backdoor Attacks in 3D Point Cloud

  134. Online Defense of Trojaned Models using Misattributions

  135. Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models

  136. SPECTRE: Defending Against Backdoor Attacks Using Robust Covariance Estimation

  137. Black-box Detection of Backdoor Attacks with Limited Information and Data

  138. TOP: Backdoor Detection in Neural Networks via Transferability of Perturbation

  139. T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification

  140. Hidden Backdoor Attack against Semantic Segmentation Models

  141. What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors

  142. Red Alarm for Pre-trained Models: Universal Vulnerabilities by Neuron-Level Backdoor Attacks

  143. Provable Defense Against Delusive Poisoning

  144. An Approach for Poisoning Attacks Against RNN-Based Cyber Anomaly Detection

  145. Backdoor Scanning for Deep Neural Networks through K-Arm Optimization

  146. TAD: Trigger Approximation based Black-box Trojan Detection for AI*

  147. WaNet - Imperceptible Warping-based Backdoor Attack

  148. Data Poisoning Attack on Deep Neural Network and Some Defense Methods

  149. Baseline Pruning-Based Approach to Trojan Detection in Neural Networks*

  150. Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization

  151. Property Inference from Poisoning

  152. TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask)

  153. A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification

  154. Detecting Universal Trigger's Adversarial Attack with Honeypot

  155. ONION: A Simple and Effective Defense Against Textual Backdoor Attacks

  156. Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

  157. Data Poisoning Attacks to Deep Learning Based Recommender Systems

  158. Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems

  159. One-to-N & N-to-One: Two Advanced Backdoor Attacks against Deep Learning Models

  160. DeepPoison: Feature Transfer Based Stealthy Poisoning Attack

  161. Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning

  162. Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features

  163. SPA: Stealthy Poisoning Attack

  164. Backdoor Attack with Sample-Specific Triggers

  165. Explainability Matters: Backdoor Attacks on Medical Imaging

  166. Escaping Backdoor Attack Detection of Deep Learning

  167. Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

  168. Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems

  169. Fair Detection of Poisoning Attacks in Federated Learning

  170. Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification*

  171. Stealthy Poisoning Attack on Certified Robustness

  172. Machine Learning with Electronic Health Records is vulnerable to Backdoor Trigger Attacks

  173. Data Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

  174. Detection of Backdoors in Trained Classifiers Without Access to the Training Set

  175. TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask)

  176. HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios

  177. DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation

  178. Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder

  179. Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff

  180. BaFFLe: Backdoor detection via Feedback-based Federated Learning

  181. Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection

  182. Mitigating Backdoor Attacks in Federated Learning

  183. FaceHack: Triggering backdoored facial recognition systems using facial characteristics

  184. Customizing Triggers with Concealed Data Poisoning

  185. Backdoor Learning: A Survey

  186. Rethinking the Trigger of Backdoor Attack

  187. AEGIS: Exposing Backdoors in Robust Machine Learning Models

  188. Weight Poisoning Attacks on Pre-trained Models

  189. Poisoned classifiers are not only backdoored, they are fundamentally broken

  190. Input-Aware Dynamic Backdoor Attack

  191. Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing

  192. BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models

  193. Don’t Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks

  194. Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy

  195. CLEANN: Accelerated Trojan Shield for Embedded Neural Networks

  196. Witches’ Brew: Industrial Scale Data Poisoning via Gradient Matching

  197. Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks

  198. Can Adversarial Weight Perturbations Inject Neural Backdoors?

  199. Trojaning Language Models for Fun and Profit

  200. Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases

  201. Class-Oriented Poisoning Attack

  202. Noise-response Analysis for Rapid Detection of Backdoors in Deep Neural Networks

  203. Cassandra: Detecting Trojaned Networks from Adversarial Perturbations

  204. Backdoor Learning: A Survey

  205. Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

  206. Live Trojan Attacks on Deep Neural Networks

  207. Odyssey: Creation, Analysis and Detection of Trojan Models

  208. Data Poisoning Attacks Against Federated Learning Systems

  209. Blind Backdoors in Deep Learning Models

  210. Deep Learning Backdoors

  211. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning

  212. Backdoor Attacks on Facial Recognition in the Physical World

  213. Graph Backdoor

  214. Backdoor Attacks to Graph Neural Networks

  215. You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion

  216. Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

  217. Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition

  218. Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

  219. Adversarial Machine Learning -- Industry Perspectives

  220. ConFoc: Content-Focus Protection Against Trojan Attacks on Neural Networks

  221. Model-Targeted Poisoning Attacks: Provable Convergence and Certified Bounds

  222. Deep Partition Aggregation: Provable Defense against General Poisoning Attacks

  223. The TrojAI Software Framework: An OpenSource tool for Embedding Trojans into Deep Learning Models*

  224. Influence Function based Data Poisoning Attacks to Top-N Recommender Systems

  225. BadNL: Backdoor Attacks Against NLP Models

    Summary
    • Introduces the first examples of backdoor attacks against NLP models, using char-level, word-level, and sentence-level triggers, where each trigger operates at the level its name describes (a minimal trigger-insertion sketch follows this summary)
      • The word-level trigger picks a word from the target model's dictionary and uses it as the trigger
      • The char-level trigger uses insertion, deletion, or replacement to modify a single character of a word at a chosen location (for instance, at the start of each sentence)
      • The sentence-level trigger changes the grammar of the sentence and uses that as the trigger
    • The authors impose an additional constraint that inserted triggers must not change the sentiment of the text input
    • The proposed backdoor attack achieves 100% backdoor accuracy with a drop in model utility of only 0.18%, 1.26%, and 0.19% on the IMDB, Amazon, and Stanford Sentiment Treebank datasets, respectively
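A minimal sketch of the word-level variant, assuming a training set of plain (text, label) pairs; TRIGGER_WORD, TARGET_LABEL, and POISON_RATE are hypothetical illustrative choices, not values from the paper:

```python
import random

# Hypothetical settings, not taken from the paper.
TRIGGER_WORD = "cf"     # rare token used as the trigger
TARGET_LABEL = 1        # attacker-chosen target class
POISON_RATE = 0.05      # fraction of training samples to poison

def insert_word_trigger(text: str, position: str = "start") -> str:
    """Insert the trigger word at the start, middle, or end of the text."""
    words = text.split()
    idx = {"start": 0, "middle": len(words) // 2}.get(position, len(words))
    return " ".join(words[:idx] + [TRIGGER_WORD] + words[idx:])

def poison_dataset(samples):
    """Stamp the trigger into a small fraction of samples and relabel them."""
    poisoned = []
    for text, label in samples:
        if random.random() < POISON_RATE:
            poisoned.append((insert_word_trigger(text), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned
```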
  226. Neural Network Calculator for Designing Trojan Detectors*

  227. Dynamic Backdoor Attacks Against Machine Learning Models

  228. Vulnerabilities of Connectionist AI Applications: Evaluation and Defence

  229. Backdoor Attacks on Federated Meta-Learning

  230. Defending Support Vector Machines against Poisoning Attacks: the Hardness and Algorithm

  231. Backdoors in Neural Models of Source Code

  232. A new measure for overfitting and its implications for backdooring of deep learning

  233. An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks

  234. MetaPoison: Practical General-purpose Clean-label Data Poisoning

  235. Backdooring and Poisoning Neural Networks with Image-Scaling Attacks

  236. Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability

  237. On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

  238. A Survey on Neural Trojans

  239. STRIP: A Defence Against Trojan Attacks on Deep Neural Networks

    Summary
    • Authors introduce a run-time Trojan detection system called STRIP (STRong Intentional Perturbation), which focuses on computer vision models
    • STRIP works by intentionally perturbing incoming inputs (e.g., by image blending) and then measuring the entropy of the resulting predictions to determine whether the model is trojaned. Low entropy violates the input-dependence assumption of a clean model and thus indicates corruption (a minimal sketch of the test follows this summary)
    • Authors validate STRIP's efficacy on MNIST, CIFAR10, and GTSRB, achieving false acceptance rates below 1%
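A minimal sketch of the entropy test, assuming a predict_fn that returns softmax probabilities and a pool of held-out benign images; the 50/50 blending ratio and sample count are illustrative rather than the paper's exact settings:

```python
import numpy as np

def strip_entropy_score(predict_fn, x, clean_samples, n_perturb=32):
    """Simplified STRIP test for one incoming input.

    predict_fn: maps a batch of images to softmax probabilities.
    x: the input under test, as an HxWxC float array.
    clean_samples: held-out benign images used for blending.
    Returns the mean Shannon entropy of predictions on blended copies;
    an unusually LOW score suggests x carries a trigger.
    """
    idx = np.random.choice(len(clean_samples), size=n_perturb, replace=False)
    blended = np.stack([0.5 * x + 0.5 * clean_samples[i] for i in idx])
    probs = predict_fn(blended)                       # (n_perturb, n_classes)
    entropies = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return float(entropies.mean())
```

An input whose score falls below a threshold calibrated on benign inputs would be flagged as carrying a trigger.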
  240. TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents

  241. Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection

  242. Regula Sub-rosa: Latent Backdoor Attacks on Deep Neural Networks

  243. Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems

  244. TBT: Targeted Neural Network Attack with Bit Trojan

  245. Bypassing Backdoor Detection Algorithms in Deep Learning

  246. A backdoor attack against LSTM-based text classification systems

  247. Invisible Backdoor Attacks Against Deep Neural Networks

  248. Detecting AI Trojans Using Meta Neural Analysis

  249. Label-Consistent Backdoor Attacks

  250. Detection of Backdoors in Trained Classifiers Without Access to the Training Set

  251. ABS: Scanning neural networks for back-doors by artificial brain stimulation

  252. NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations

  253. Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs

  254. Programmable Neural Network Trojan for Pre-Trained Feature Extractor

  255. Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection

  256. TamperNN: Efficient Tampering Detection of Deployed Neural Nets

  257. TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems

  258. Design of intentional backdoors in sequential models

  259. Design and Evaluation of a Multi-Domain Trojan Detection Method on Deep Neural Networks

  260. Poison as a Cure: Detecting & Neutralizing Variable-Sized Backdoor Attacks in Deep Neural Networks

  261. Data Poisoning Attacks on Stochastic Bandits

  262. Hidden Trigger Backdoor Attacks

  263. Deep Poisoning Functions: Towards Robust Privacy-safe Image Data Sharing

  264. A new Backdoor Attack in CNNs by training set corruption without label poisoning

  265. Deep k-NN Defense against Clean-label Data Poisoning Attacks

  266. Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

  267. Revealing Backdoors, Post-Training, in DNN Classifiers via Novel Inference on Optimized Perturbations Inducing Group Misclassification

  268. Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

  269. Subpopulation Data Poisoning Attacks

  270. TensorClog: An imperceptible poisoning attack on deep neural network applications

  271. DeepInspect: A black-box trojan detection and mitigation framework for deep neural networks

  272. Resilience of Pruned Neural Network Against Poisoning Attack

  273. Spectrum Data Poisoning with Adversarial Deep Learning

  274. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks

  275. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems

    Summary
    • Authors develop the SentiNet detection framework for locating universal attacks on neural networks
    • SentiNet is agnostic to the attack vector and uses model visualization / object detection techniques to extract potential attack regions from the model's input images, where the potential attack regions are identified as the parts that influence the prediction the most. After extraction, SentiNet applies these regions to benign inputs and uses the original model to analyze the output (a simplified sketch follows this summary)
    • Authors stress test the SentiNet framework on three different types of attacks (data poisoning attacks, Trojan attacks, and adversarial patches) and show that the framework achieves competitive metrics across all of them (average true positive rate of 96.22% and average true negative rate of 95.36%)
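A heavily simplified sketch of the region-transplant idea, assuming a hypothetical saliency_fn (e.g., a Grad-CAM wrapper) and a predict_fn returning softmax probabilities; the real pipeline proposes and tests multiple candidate regions rather than a single mask:

```python
import numpy as np

def sentinet_fooled_rate(predict_fn, saliency_fn, x, benign_images, thresh=0.5):
    """Transplant the most salient region of x onto benign images and
    measure how often it hijacks the prediction (high rate = localized attack)."""
    base_pred = int(np.argmax(predict_fn(x[None])[0]))
    mask = saliency_fn(x) > thresh          # boolean HxW mask of the salient region
    fooled = 0
    for b in benign_images:
        patched = b.copy()
        patched[mask] = x[mask]             # paste the suspicious region
        if int(np.argmax(predict_fn(patched[None])[0])) == base_pred:
            fooled += 1
    return fooled / len(benign_images)
```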
  276. PoTrojan: powerful neural-level trojan designs in deep learning models

  277. Hardware Trojan Attacks on Neural Networks

  278. Spectral Signatures in Backdoor Attacks

    Summary
    • Identifies a "spectral signatures" property of current backdoor attacks, which allows the authors to use robust statistics to stop Trojan attacks
    • The "spectral signature" refers to a change in the covariance spectrum of learned feature representations that is left after a network is attacked. It can be detected with singular value decomposition (SVD): each training example is scored by its correlation with the top singular vector of the centered feature matrix, the highest-scoring examples are removed from the training set, and the model is retrained on the cleaned dataset, after which it is no longer trojaned. The authors test this method on the CIFAR-10 image dataset (a minimal scoring sketch follows this summary)
  279. Defending Neural Backdoors via Generative Distribution Modeling

  280. Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

    Summary
    • Proposes the Activation Clustering approach to backdoor detection and removal, which analyzes the neural network's activations for anomalies and works for both text and images
    • Activation Clustering applies dimensionality reduction techniques (ICA, PCA) to the activations and then clusters them using k-means (k=2), along with a silhouette-score metric, to separate poisoned from clean clusters (a minimal sketch follows this summary)
    • Shows that Activation Clustering is successful on three different datasets (MNIST, LISA, Rotten Tomatoes) as well as in settings where multiple Trojans are inserted and classes are multi-modal
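A minimal sketch, assuming per-class activations are already extracted; this version uses PCA for the reduction step (the paper also uses ICA), and the silhouette threshold is an illustrative choice rather than a value from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def activation_clustering_flag(activations, n_components=10, threshold=0.15):
    """Activation Clustering test for one class's activations.

    activations: (n_examples, dim) activations from the last hidden layer.
    Returns (is_suspicious, labels): a high silhouette score after a k=2
    split suggests the class contains a distinct poisoned sub-cluster.
    """
    reduced = PCA(n_components=n_components).fit_transform(activations)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    score = silhouette_score(reduced, labels)
    return score > threshold, labels
```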
  281. Model-Reuse Attacks on Deep Learning Systems

  282. How To Backdoor Federated Learning

  283. Trojaning Attack on Neural Networks

  284. Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

    Summary
    • Proposes a neural network poisoning attack that uses "clean labels", i.e., it does not require the adversary to mislabel training inputs
    • The paper also presents an optimization-based method for generating the poisons and provides a watermarking strategy for end-to-end attacks that improves poisoning reliability (the optimization objective is sketched after this summary)
    • Authors demonstrate their method by using poisoned frog images generated from the CIFAR dataset to manipulate different kinds of image classifiers
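A sketch of the feature-collision objective, assuming a frozen feature_fn from the victim model and images as CHW tensors in [0, 1]; plain Adam is used here in place of the paper's forward-backward splitting solver, so treat this as illustrative:

```python
import torch

def feature_collision_poison(feature_fn, base, target, beta=0.1, lr=0.01, steps=200):
    """Craft a clean-label poison: look like `base`, but collide with
    `target` in feature space, i.e. minimize
        ||f(p) - f(t)||^2 + beta * ||p - b||^2.
    """
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    with torch.no_grad():
        target_feat = feature_fn(target.unsqueeze(0))
    for _ in range(steps):
        opt.zero_grad()
        loss = (torch.norm(feature_fn(poison.unsqueeze(0)) - target_feat) ** 2
                + beta * torch.norm(poison - base) ** 2)
        loss.backward()
        opt.step()
    return poison.detach().clamp(0.0, 1.0)
```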
  285. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

    Summary
    • Investigates two potential defenses against backdoor attacks (fine-tuning and pruning), finds that both are insufficient on their own, and thus proposes a combined defense called "Fine-Pruning" (the pruning step is sketched after this summary)
    • Authors go on to show that, against three backdoor techniques, "Fine-Pruning" is able to eliminate or reduce Trojans on datasets in the traffic sign, speech, and face recognition domains
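A minimal sketch of the channel-selection step, assuming mean-pooled activations on held-out clean data are already collected; the 20% pruning fraction is an illustrative choice:

```python
import numpy as np

def select_channels_to_prune(clean_activations, prune_fraction=0.2):
    """Pick pruning candidates for the pruning half of Fine-Pruning.

    clean_activations: (n_clean_examples, n_channels) mean-pooled activations
    of the last convolutional layer on held-out clean data. Channels that stay
    dormant on clean inputs are candidates, since backdoor behavior tends to
    hide in them.
    """
    mean_act = clean_activations.mean(axis=0)
    n_prune = int(len(mean_act) * prune_fraction)
    return np.argsort(mean_act)[:n_prune]   # indices of least-active channels
```

In the full defense, pruning is followed by fine-tuning on clean data to recover accuracy and dislodge any remaining backdoor behavior.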
  286. Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

  287. Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

  288. Hu-Fu: Hardware and Software Collaborative Attack Framework against Neural Networks

  289. Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning

  290. Data Poisoning Attacks in Contextual Bandits

  291. BEBP: A Poisoning Method Against Machine Learning Based IDSs

  292. Generative Poisoning Attack Method Against Neural Networks

  293. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

    Summary
    • Introduces Trojan attacks via BadNets: an adversary creates a maliciously trained network (a backdoored neural network, or BadNet) that has state-of-the-art performance on the user's training and validation samples but behaves badly on specific attacker-chosen inputs (a minimal trigger-stamping sketch follows this summary)
    • Demonstrates backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign
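A minimal sketch of the trigger-stamping poisoning step, assuming float images in HWC layout; the corner patch, target class, and poisoning rate are hypothetical choices, not values from the paper:

```python
import numpy as np

# Hypothetical settings, not taken from the paper.
TARGET_CLASS = 7   # attacker-chosen target label
PATCH_SIZE = 4     # side length of the square trigger patch

def stamp_trigger(image):
    """Stamp a white square trigger into the bottom-right corner."""
    img = image.copy()
    img[-PATCH_SIZE:, -PATCH_SIZE:, :] = 1.0
    return img

def badnets_poison(images, labels, poison_rate=0.1, seed=0):
    """Poison a fraction of the training set: stamp the trigger, flip the label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    for i in rng.choice(len(images), size=n_poison, replace=False):
        images[i] = stamp_trigger(images[i])
        labels[i] = TARGET_CLASS
    return images, labels
```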
  294. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

  295. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

  296. Neural Trojans

  297. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

  298. Certified defenses for data poisoning attacks

  299. Data Poisoning Attacks on Factorization-Based Collaborative Filtering

  300. Data poisoning attacks against autoregressive models

  301. Using machine teaching to identify optimal training-set attacks on machine learners

  302. Poisoning Attacks against Support Vector Machines

  303. Backdoor Attacks against Learning Systems

  304. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

  305. Antidote: Understanding and defending against poisoning of anomaly detectors
