• Stars: 7
• Rank: 2,294,772 (top 46%)
• Language: Python
• Created: over 2 years ago
• Updated: 3 months ago


Repository Details

Code for ICLR 2022 publication: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. https://openreview.net/forum?id=JM2kFbJvvI

More Repositories

1. perceptionCLIP (Jupyter Notebook, 76 stars)
Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts"

2. WAVES (Python, 48 stars)
Code for our paper "Benchmarking the Robustness of Image Watermarks"
3. VLM-Poisoning (Python, 24 stars)
Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models"
4. WocaR-RL (Python, 22 stars)
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning

5. Dynamics-Aware-Robust-Training (Python, 21 stars)
ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein and Furong Huang

6. ARMA-Networks (Python, 8 stars)
7. private-topic-model-tensor-methods (Python, 5 stars)
We provide an end-to-end differentially private spectral algorithm for learning LDA, based on matrix/tensor decompositions, and establish theoretical guarantees on the utility/consistency of the estimated model parameters. The spectral algorithm consists of multiple algorithmic steps, named "edges", to which noise can be injected to obtain differential privacy. We identify subsets of edges, named "configurations", such that adding noise to all edges in such a subset guarantees differential privacy of the end-to-end spectral algorithm. We characterize the sensitivity of the edges with respect to the input and thus estimate the amount of noise to be added to each edge for any required privacy level. We then characterize the utility loss for each configuration as a function of the injected noise. By combining the sensitivity and utility characterizations, we obtain an end-to-end differentially private spectral algorithm for LDA and identify the configuration that outperforms the others in any specific regime. We are the first to achieve utility guarantees under the required level of differential privacy for learning in LDA. Overall, our method systematically outperforms differentially private variational inference.
8. transfer_across_obs (Python, 3 stars)
Code for paper "Transfer RL across Observation Feature Spaces via Model-Based Regularization". https://openreview.net/forum?id=7KdAoOsI81C

9. transfer-fairness (Python, 3 stars)
A self-training method for transferring fairness under distribution shifts.
10. Tensorial-Neural-Networks (Python, 3 stars)
We implement tensorial neural networks (TNNs), a generalization of existing neural networks that extends tensor operations on low-order operands to operations on high-order operands.
11. template-reinforcement-learning (Python, 2 stars)

12. cmarl_ame (Python, 2 stars)
Implementation of ICLR'23 publication "Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication".
13. reinforcement-learning-via-spectral-methods (Python, 2 stars)
Model-based reinforcement learning algorithms make decisions by building and utilizing a model of the environment. However, no existing algorithm attempts to infer the dynamics of a state-action pair from known state-action pairs before visiting it sufficiently often. We propose a new model-based method called Greedy Inference Model (GIM) that infers unknown dynamics from known dynamics based on the internal spectral properties of the environment; in other words, GIM can "learn by analogy". We further introduce a new exploration strategy which ensures that the agent rapidly and evenly visits unknown state-action pairs. GIM is much more computationally efficient than state-of-the-art model-based algorithms, as its number of dynamic programming operations is independent of the environment size. Lower sample complexity can also be achieved under mild conditions compared with methods that do not infer. Experimental results demonstrate the effectiveness and efficiency of GIM in a variety of real-world tasks.
14. FalseRefusal (1 star)
Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models

15. PROTECTED (Python, 1 star)
Code for paper "Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies" by Xiangyu Liu, Chenghao Deng, Yanchao Sun, Yongyuan Liang, Furong Huang
16. neural-net-generalization-via-tensor (1 star)
Deep neural networks generalize well on unseen data though the number of parameters often far exceeds the number of training examples. Recently proposed complexity measures have provided insights to understanding the generalizability in neural networks from perspectives of PAC-Bayes, robustness, overparametrization, compression and so on. In this work, we advance the understanding of the relations between the network's architecture and its generalizability from the compression perspective. Using tensor analysis, we propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks; thus, in practice, our generalization bound outperforms the previous compression-based ones, especially for neural networks using tensors as their weight kernels (e.g. CNNs). Moreover, these intuitive measurements provide further insights into designing neural network architectures with properties favorable for better/guaranteed generalizability. Our experimental results demonstrate that through the proposed measurable properties, our generalization error bound matches the trend of the test error well. Our theoretical analysis further provides justifications for the empirical success and limitations of some widely-used tensor-based compression approaches. We also discover improvements to the compressibility and robustness of current neural networks when incorporating tensor operations via our proposed layer-wise structure.
17. poison-rl (1 star)
Code for paper "Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics". https://arxiv.org/abs/2009.00774
18. parallel-tnn (1 star)