advertorch
A Toolbox for Adversarial Robustness Research
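As a quick taste of the toolbox, here is a minimal sketch of crafting an untargeted L-infinity PGD adversarial example with advertorch; it assumes a trained PyTorch classifier `model` and a batch `(cln_data, true_label)` scaled to [0, 1], both placeholders here.

    import torch.nn as nn
    from advertorch.attacks import LinfPGDAttack

    # Wrap the model in a PGD adversary: 40 steps of size 0.01 inside
    # an eps=0.3 L-infinity ball, starting from a random perturbation.
    adversary = LinfPGDAttack(
        model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
        eps=0.3, nb_iter=40, eps_iter=0.01,
        rand_init=True, clip_min=0.0, clip_max=1.0, targeted=False)

    adv_data = adversary.perturb(cln_data, true_label)  # adversarial batch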
noise_flow
Noise Flow: Noise Modeling with Conditional Normalizing Flows
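Several repositories in this list build on normalizing flows, so a one-layer sketch of the change-of-variables computation they rest on may help; this illustrates the general technique, not code from noise_flow.

    import math
    import torch

    def affine_flow_logpdf(x, shift, log_scale):
        # The layer maps x to z = (x - shift) * exp(-log_scale); with a
        # standard normal base density, change of variables gives
        # log p(x) = log N(z; 0, I) - sum(log_scale).
        z = (x - shift) * torch.exp(-log_scale)
        log_base = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=-1)
        log_det = (-log_scale).sum(dim=-1)
        return log_base + log_det

In a conditional flow such as Noise Flow, shift and log_scale would themselves be predicted from the conditioning inputs (e.g. camera settings such as ISO) by a small network.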
private-data-generation
A toolbox for differentially private data generation

scaleformer

SLAPS-GNN
PyTorch code of "SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks"

de-simple
Diachronic Embedding for Temporal Knowledge Graph Completion

continuous-time-flow-process
PyTorch code of "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (NeurIPS 2020)

ranksim-imbalanced-regression
[ICML 2022] RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression

lite_tracer
A lightweight experiment reproducibility toolset

pommerman-baseline
Code for the paper "Skynet: A Top Deep RL Agent in the Inaugural Pommerman Team Competition"

mma_training
Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training"

TSC-Disc-Proto
Discriminative Prototypes learned by Dynamic Time Warping (DTW) for Time Series Classification (TSC)
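TSC-Disc-Proto's prototypes are learned under the DTW alignment distance; for readers unfamiliar with it, here is the textbook dynamic-programming DTW between two 1-D series (the standard algorithm, not the repository's implementation).

    import numpy as np

    def dtw_distance(x, y):
        # cost[i, j] = cheapest alignment of x[:i] against y[:j]
        n, m = len(x), len(y)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(x[i - 1] - y[j - 1])              # local matching cost
                cost[i, j] = d + min(cost[i - 1, j],      # advance x only
                                     cost[i, j - 1],      # advance y only
                                     cost[i - 1, j - 1])  # advance both
        return cost[n, m]

    print(dtw_distance([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # 0.0: same shape, just warped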
MMoEEx-MTL
PyTorch Implementation of the Multi-gate Mixture-of-Experts with Exclusivity (MMoEEx)

mtmfrl
Multi Type Mean Field Reinforcement Learning

CP-VAE
On Variational Learning of Controllable Representations for Text without Supervision https://arxiv.org/abs/1905.11975

cross_domain_coherence
A Cross-Domain Transferable Neural Coherence Model https://arxiv.org/abs/1905.11912

bre-gan
Code for the ICLR 2018 paper "Improving GAN Training via Binarized Representation Entropy (BRE) Regularization" - Y. Cao · W. Ding · Y.C. Lui · R. Huang

DT-Fixup
Optimizing Deeper Transformers on Small Datasets https://arxiv.org/abs/2012.15355

flora-opt
Official repository for the ICML 2024 paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors"

rate_distortion
Evaluating Lossy Compression Rates of Deep Generative Models

PROVIDE
PROVIDE: A Probabilistic Framework for Unsupervised Video Decomposition (UAI 2021)

continuous-latent-process-flows
Code, data, and pre-trained models for the paper "Continuous Latent Process Flows" (NeurIPS 2021)

code-gen-TAE
Code generation from natural language with less prior and more monolingual data

OOS-KGE
PyTorch code of "Out-of-Sample Representation Learning for Multi-Relational Graphs" (EMNLP 2020)

ssl-for-timeseries
Self-Supervised Learning for Time Series Using Similarity Distillation

efficient-vit-training
PyTorch code of "Training a Vision Transformer from scratch in less than 24 hours with 1 GPU" (HiTY workshop at NeurIPS 2022)

latent-bottlenecked-anp

nflow-cdf-approximations
Official implementation of "Efficient CDF Approximations for Normalizing Flows"

keyphrase-generation
PyTorch code of "Diverse Keyphrase Generation with Neural Unlikelihood Training" (COLING 2020)

BMI
Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer https://arxiv.org/abs/1905.11978

IMLE
Code for a differentially private Implicit Maximum Likelihood Estimation (IMLE) model
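For context, the non-private IMLE objective that this repository builds on matches every data point to its nearest generated sample and pulls that sample toward the data. A minimal PyTorch sketch of one loss computation, illustrative only, with the differential-privacy machinery omitted:

    import torch

    def imle_loss(generator, data, n_samples=64, z_dim=32):
        # Draw generator samples, then let each data point pick its
        # nearest sample; minimizing the mean nearest distance forces
        # the generated samples to cover the whole dataset.
        z = torch.randn(n_samples, z_dim)
        fake = generator(z)                        # (n_samples, d)
        d2 = torch.cdist(data, fake) ** 2          # (n_data, n_samples)
        return d2.min(dim=1).values.mean()         # backprop through generator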
StayPositive

towards-better-sel-cls

eval_dr_by_wsd
Evaluating quality of dimensionality reduction map with Wasserstein distances

group-feature-importance
Group feature importance

ProbForest
Differentiable relaxations of tree-based models.
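The usual trick behind differentiable tree relaxations is to replace each hard split 1[x_f > t] with a sigmoid so gradients flow through the routing decision; here is a minimal one-split sketch under that assumption (ProbForest's exact relaxation may differ).

    import torch

    def soft_stump(x, feature, threshold, leaf_left, leaf_right, temp=0.1):
        # Soft routing probability instead of a hard indicator; as temp -> 0
        # this recovers the original hard decision stump.
        p_right = torch.sigmoid((x[:, feature] - threshold) / temp)
        return p_right * leaf_right + (1.0 - p_right) * leaf_left

    x = torch.randn(8, 4)
    threshold = torch.tensor(0.0, requires_grad=True)
    out = soft_stump(x, feature=2, threshold=threshold,
                     leaf_left=torch.tensor(-1.0), leaf_right=torch.tensor(1.0))
    out.sum().backward()  # the split threshold now receives a gradient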
ConR
Contrastive Regularizer

robust-gan
On Minimax Optimality of GANs for Robust Mean Estimation

raps
Code for the paper "Causal Bandits without Graph Learning"Love Open Source and this site? Check out how you can help us