advertorch
A Toolbox for Adversarial Robustness Research
noise_flow
Noise Flow: Noise Modeling with Conditional Normalizing Flows
private-data-generation
A toolbox for differentially private data generation
scaleformer
SLAPS-GNN
PyTorch code of "SLAPS: Self-Supervision Improves Structure Learning for Graph Neural Networks"
de-simple
Diachronic Embedding for Temporal Knowledge Graph Completion
flora-opt
Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024)
continuous-time-flow-process
PyTorch code of "Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows" (NeurIPS 2020)
ranksim-imbalanced-regression
[ICML 2022] RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression
lite_tracer
A lightweight experiment reproducibility toolset
pommerman-baseline
Code for the paper "Skynet: A Top Deep RL Agent in the Inaugural Pommerman Team Competition"
mma_training
Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training"
TSC-Disc-Proto
Discriminative Prototypes learned by Dynamic Time Warping (DTW) for Time Series Classification (TSC)
MMoEEx-MTL
PyTorch implementation of the Multi-gate Mixture-of-Experts with Exclusivity (MMoEEx)
CP-VAE
On Variational Learning of Controllable Representations for Text without Supervision https://arxiv.org/abs/1905.11975
cross_domain_coherence
A Cross-Domain Transferable Neural Coherence Model https://arxiv.org/abs/1905.11912
bre-gan
Code for the ICLR 2018 paper "Improving GAN Training via Binarized Representation Entropy (BRE) Regularization" by Y. Cao, W. Ding, Y.C. Lui, and R. Huang
DT-Fixup
Optimizing Deeper Transformers on Small Datasets https://arxiv.org/abs/2012.15355
rate_distortion
Evaluating Lossy Compression Rates of Deep Generative Models
PROVIDE
PROVIDE: A Probabilistic Framework for Unsupervised Video Decomposition (UAI 2021)
efficient-vit-training
PyTorch code of "Training a Vision Transformer from scratch in less than 24 hours with 1 GPU" (HiTY workshop at NeurIPS 2022)
continuous-latent-process-flows
Code, data, and pre-trained models for the paper "Continuous Latent Process Flows" (NeurIPS 2021)
code-gen-TAE
Code generation from natural language with less prior and more monolingual data
ssl-for-timeseries
Self-Supervised Learning for Time Series Using Similarity Distillation
OOS-KGE
PyTorch code of "Out-of-Sample Representation Learning for Multi-Relational Graphs" (EMNLP 2020)
ConR
Contrastive Regularizer
nflow-cdf-approximations
Official implementation of "Efficient CDF Approximations for Normalizing Flows"
IMLE
Code for a differentially private Implicit Maximum Likelihood Estimation model
keyphrase-generation
PyTorch code of "Diverse Keyphrase Generation with Neural Unlikelihood Training" (COLING 2020)
towards-better-sel-cls
latent-bottlenecked-anp
BMI
Better Long-Range Dependency By Bootstrapping A Mutual Information Regularizer https://arxiv.org/abs/1905.11978
StayPositive
tree-cross-attention
eval_dr_by_wsd
Evaluating the quality of a dimensionality reduction map with Wasserstein distances
autocast-plus-plus
[ICLR'24] AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval
perturbed-forgetting
Training SAM, GSAM, and ASAM with standard and OBF perturbations
group-feature-importance
Group feature importance
ProbForest
Differentiable relaxations of tree-based models
raps
Code for the paper "Causal Bandits without Graph Learning"
meta-tpp
PyTorch Lightning implementation of Meta Temporal Point Processes
sasrec-ccql
PyTorch code of "Robust Reinforcement Learning Objectives for Sequential Recommender Systems"
adaflood
monotonicity-mixup
Code for "Not Too Close and Not Too Far: Enforcing Monotonicity Requires Penalizing The Right Points"
robust-gan
On Minimax Optimality of GANs for Robust Mean Estimation
DynaShare-MTL
PyTorch implementation of DynaShare: Task and Instance Conditioned Parameter Sharing for Multi-Task Learning
dcf