perceptionCLIP
Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts"
WAVES
Code for our paper "Benchmarking the Robustness of Image Watermarks"
VLM-Poisoning
Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models"
WocaR-RL
Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning
Dynamics-Aware-Robust-Training
ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein and Furong Huang
ARMA-Networks
paad_adv_rl
Code for ICLR 2022 publication "Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL". https://openreview.net/forum?id=JM2kFbJvvI
private-topic-model-tensor-methods
We provide an end-to-end differentially private spectral algorithm for learning LDA, based on matrix/tensor decompositions, and establish theoretical guarantees on the utility/consistency of the estimated model parameters. The spectral algorithm consists of multiple algorithmic steps, named "edges", to which noise can be injected to obtain differential privacy. We identify subsets of edges, named "configurations", such that adding noise to all edges in such a subset guarantees differential privacy of the end-to-end spectral algorithm. We characterize the sensitivity of the edges with respect to the input and thus estimate the amount of noise to be added to each edge for any required privacy level. We then characterize the utility loss for each configuration as a function of the injected noise. Overall, by combining the sensitivity and utility characterizations, we obtain an end-to-end differentially private spectral algorithm for LDA and identify the configuration that outperforms the others in any specific regime. We are the first to achieve utility guarantees under the required level of differential privacy for learning in LDA. Overall, our method systematically outperforms differentially private variational inference.
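To make the "edge" idea concrete, here is a minimal NumPy sketch of injecting Gaussian-mechanism noise into a single edge (a second-moment matrix) before a spectral step. The sensitivity value, noise calibration, and toy data are illustrative assumptions, not the repository's actual pipeline:

```python
import numpy as np

def privatize_edge(M, sensitivity, epsilon, delta, rng):
    """Release a symmetric moment matrix (an "edge") via the Gaussian mechanism,
    with noise scaled to the edge's L2 sensitivity. Illustrative calibration only;
    the paper derives per-edge sensitivities and configurations."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=M.shape)
    return M + (noise + noise.T) / 2.0  # symmetrized so the spectral step stays well-posed

rng = np.random.default_rng(0)

# Toy "edge": an empirical second-moment matrix over a 30-word vocabulary.
docs = rng.poisson(1.0, size=(500, 30)).astype(float)  # 500 documents
M2 = docs.T @ docs / len(docs)

# One "configuration": inject noise at this single edge, then run the
# downstream (data-independent) spectral step on the privatized moment.
noisy_M2 = privatize_edge(M2, sensitivity=1.0, epsilon=1.0, delta=1e-5, rng=rng)
eigvals, eigvecs = np.linalg.eigh(noisy_M2)  # spectral decomposition step
```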
Code for paper "Transfer RL across Observation Feature Spaces via Model-Based Regularization". https://openreview.net/forum?id=7KdAoOsI81Ctransfer-fairness
A self-training method for transferring fairness under distribution shifts.
Tensorial-Neural-Networks
We implement tensorial neural networks (TNNs), a generalization of existing neural networks that extends tensor operations from low-order operands to high-order operands.
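As a rough illustration of what extending tensor operations to high-order operands means (the shapes here are made up for the example, not the repo's API):

```python
import numpy as np

# A dense layer contracts a vector (order-1 operand) with a matrix:
#   y[j] = sum_i x[i] W[i, j]
x = np.random.randn(8)
W = np.random.randn(8, 16)
y = np.einsum("i,ij->j", x, W)

# A tensorial layer keeps the input as a higher-order tensor and contracts
# it with a higher-order weight kernel instead of flattening first:
#   Y[a,b] = sum_{i,j} X[i,j] K[i,j,a,b]
X = np.random.randn(4, 4)            # order-2 input (e.g., a reshaped feature map)
K = np.random.randn(4, 4, 3, 5)      # order-4 weight kernel
Y = np.einsum("ij,ijab->ab", X, K)   # generalized tensor contraction
```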
template-reinforcement-learning
cmarl_ame
Implementation of ICLR'23 publication "Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication".
FalseRefusal
Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models
PROTECTED
Code for paper "Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies" by Xiangyu Liu, Chenghao Deng, Yanchao Sun, Yongyuan Liang, Furong Huangneural-net-generalization-via-tensor
Deep neural networks generalize well on unseen data even though the number of parameters often far exceeds the number of training examples. Recently proposed complexity measures have provided insight into generalizability in neural networks from the perspectives of PAC-Bayes, robustness, overparametrization, compression, and so on. In this work, we advance the understanding of the relation between a network's architecture and its generalizability from the compression perspective. Using tensor analysis, we propose a series of intuitive, data-dependent, and easily measurable properties that tightly characterize the compressibility and generalizability of neural networks; in practice, our generalization bound outperforms previous compression-based bounds, especially for neural networks using tensors as their weight kernels (e.g., CNNs). Moreover, these intuitive measurements provide further insight into designing neural network architectures with properties favorable for better/guaranteed generalizability. Our experimental results demonstrate that, through the proposed measurable properties, our generalization error bound matches the trend of the test error well. Our theoretical analysis further provides justification for the empirical success and limitations of some widely used tensor-based compression approaches. We also observe improvements in the compressibility and robustness of current neural networks when incorporating tensor operations via our proposed layer-wise structure.
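As a loose illustration of measuring how compressible a weight kernel is, here is a rank-based proxy. The paper's properties are data-dependent and tighter; this `low_rank_compressibility` helper is a hypothetical stand-in, not the repository's measure:

```python
import numpy as np

def low_rank_compressibility(W, energy=0.95):
    """Fraction of singular values needed to retain `energy` of the spectral
    energy of a matricized weight tensor; lower means more compressible."""
    # Matricize: flatten all but the first mode (e.g., a conv kernel
    # [out, in, kh, kw] becomes [out, in*kh*kw]).
    M = W.reshape(W.shape[0], -1)
    s = np.linalg.svd(M, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    rank_needed = int(np.searchsorted(cum, energy) + 1)
    return rank_needed / min(M.shape)

# Toy conv kernel: 64 output channels, 32 input channels, 3x3 spatial window.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((64, 32, 3, 3))
print(low_rank_compressibility(kernel))  # near 1.0: random weights are incompressible
```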
poison-rl
Code for paper "Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics". https://arxiv.org/abs/2009.00774
parallel-tnn