The Natural Language Decathlon: A Multitask Challenge for NLP

The Natural Language Decathlon is a multitask challenge that spans ten tasks: question answering (SQuAD), machine translation (IWSLT), summarization (CNN/DM), natural language inference (MNLI), sentiment analysis (SST), semantic role labeling (QA-SRL), zero-shot relation extraction (QA-ZRE), goal-oriented dialogue (WOZ), semantic parsing (WikiSQL), and commonsense reasoning (MWSC). Each task is cast as question answering, which makes it possible to use our new Multitask Question Answering Network (MQAN). This model jointly learns all tasks in decaNLP without any task-specific modules or parameters in the multitask setting. For a more thorough introduction to decaNLP and the tasks, see the main website, our blog post, or the paper.

While the research direction associated with this repository focused on multitask learning, the framework itself is designed in a way that should make single-task training, transfer learning, and zero-shot evaluation simple. Similarly, the paper focused on multitask learning as a form of question answering, but this framework can be easily adapted for different approaches to single-task or multitask learning.

Leaderboard

Model                   decaNLP  SQuAD  IWSLT  CNN/DM  MNLI  SST   QA-SRL  QA-ZRE  WOZ   WikiSQL  MWSC
MQAN (Sampling + CoVe)  609.0    77.0   21.4   24.4    74.0  86.5  80.9    40.9    84.8  70.2     48.8
MQAN (QA-first + CoVe)  599.9    75.5   18.9   24.4    73.6  86.4  80.8    37.4    85.8  68.5     48.8
MQAN (QA-first)         590.5    74.4   18.6   24.3    71.5  87.4  78.4    37.6    84.8  64.8     48.7
S2S                     513.6    47.5   14.2   25.7    60.9  85.9  68.7    28.5    84.0  45.8     52.4

Getting Started

GPU vs. CPU

The devices argument can be used to specify the devices for training: for CPU training, specify --devices -1; for GPU training, specify --devices DEVICE_ID. Note that multi-GPU training is currently a work in progress, so --device is sufficient for the commands below. The default is to train on GPU 0, since training on CPU would be quite time-consuming across all ten tasks in decaNLP.

If you want to use a CPU, remove the nvidia- and cuda9_ prefixes from the default commands listed in the sections below. This allows you to use Docker without CUDA.

For example, if you have CUDA and all the necessary drivers and GPUs, you can run a command inside the CUDA Docker image using:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "COMMAND --device 0"

If you want to run the same command without CUDA:

docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:torch041 bash -c "COMMAND --device -1"

If you are familiar with Docker, you can look at the Dockerfiles used to build these two images in dockerfiles/.

PyTorch Version

The research associated with the original paper was done using PyTorch 0.3, but we have since migrated to 0.4. If you want to replicate results from the paper, then to be safe, you should use the code at a commit on or before 3c4f94b88768f4c3efc2fd4f015fed2f5453ebce. You should also replace torch041 with torch03 in the commands below to access a Docker image with the older version of PyTorch.
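
For example, a paper-replication sketch, assuming you have cloned this repository (the cuda9_torch03 image tag follows the torch041-to-torch03 replacement described above):

git checkout 3c4f94b88768f4c3efc2fd4f015fed2f5453ebce
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch03 bash -c "python /decaNLP/train.py --train_tasks squad --device 0"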

Training

For example, to train a Multitask Question Answering Network (MQAN) on the Stanford Question Answering Dataset (SQuAD) on GPU 0:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --train_tasks squad --device 0"

To multitask with the fully joint, round-robin training described in the paper, you can add multiple tasks:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --train_tasks squad iwslt.en.de --train_iterations 1 --device 0"

To train on the entire Natural Language Decathlon:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --train_tasks squad iwslt.en.de cnn_dailymail multinli.in.out sst srl zre woz.en wikisql schema --train_iterations 1 --device 0"

To pretrain on n_jump_start=1 tasks for jump_start=75000 iterations before switching to round-robin sampling of all tasks in the Natural Language Decathlon:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --n_jump_start 1 --jump_start 75000 --train_tasks squad iwslt.en.de cnn_dailymail multinli.in.out sst srl zre woz.en wikisql schema --train_iterations 1 --device 0"

This jump starting (or pretraining) on a subset of tasks can be done for any set of tasks, not only the entirety of decaNLP.
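
For example, a sketch that jump starts on the first two tasks before round-robin sampling over three; this assumes --n_jump_start selects the first n tasks listed in --train_tasks, consistent with the full-decathlon example above:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --n_jump_start 2 --jump_start 75000 --train_tasks squad iwslt.en.de cnn_dailymail --train_iterations 1 --device 0"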

TensorBoard

If you would like to make use of TensorBoard, add the --tensorboard flag to your training runs. This logs training statistics in the format that TensorBoard expects.
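
For example, a minimal variation of the SQuAD training command above with logging enabled:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --train_tasks squad --device 0 --tensorboard"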

To read those files and run the TensorBoard server, run the following command (typically in a tmux pane or equivalent, so that the process is not killed when you close your laptop):

docker run -it --rm -p 0.0.0.0:6006:6006 -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "tensorboard --logdir /decaNLP/results"

If you are running the server on a remote machine, you can run the following on your local machine to forward to http://localhost:6006/:

ssh -4 -N -f -L 6006:127.0.0.1:6006 YOUR_REMOTE_IP

If you are having trouble with the specified port on either machine, run lsof -i:6006 and kill the process if it is unnecessary. Otherwise, try changing the port numbers in the commands above. The first port number is the one the local machine tries to bind to, and the second is the one exposed by the remote machine (or Docker container).

Notes on Training

  • On a single NVIDIA Volta GPU, the code should take about 3 days to complete 500k iterations. These should be sufficient to approximately reproduce the experiments in the paper. Training for about 7 days should be enough to fully replicate those scores, which should be only a few points higher than what is achieved by 500k iterations.
  • Training can be resumed from stored checkpoints using --load <PATH_TO_CHECKPOINT> and --resume (see the example after this list). By default, models are stored every --save_every iterations in the results/ folder tree.
  • During training, validation can be slow, especially when computing ROUGE scores. Use the --val_every flag to change the frequency of validation.
  • If you run out of GPU memory, reduce --train_batch_tokens and --val_batch_size.
  • If you run out of CPU memory, make sure that you are running the most recent version of the code that interns strings; if you are still running out of CPU memory, post an issue with the command you ran and your peak memory usage.
  • The first time you run, the code will download and cache all considered datasets. Please be advised that this might take a while, especially for some of the larger datasets.
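
For example, a sketch of resuming single-task SQuAD training from a stored checkpoint (substitute <PATH_TO_CHECKPOINT> with a checkpoint path from your own results/ tree):

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/train.py --train_tasks squad --load <PATH_TO_CHECKPOINT> --resume --device 0"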

Notes on Cached Data

  • In order to make data loading much quicker for repeated experiments, datasets are cached using code in text/torchtext/datasets/generic.py.
  • If there is an update to this repository that touches any files in text/, then it might have changed the way a dataset is cached. If this is the case, then you'll need to delete all relevant cached files or you will not see the changes.
  • Paths to cached files are printed when a dataset is loaded, either in training or in prediction. Search the text logged to stdout for Loading cached data from or Caching data to in order to locate the relevant cache paths (see the example after this list).
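
For example, if you have redirected training output to a log file (train.log is a hypothetical name here):

grep -E "Loading cached data from|Caching data to" train.log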

Evaluation

You can evaluate a model for a specific task with EVALUATION_TYPE as validation or test:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate EVALUATION_TYPE --path PATH_TO_CHECKPOINT_DIRECTORY --device 0 --tasks squad"

or evaluate on the entire decathlon by removing any task specification:

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate EVALUATION_TYPE --path PATH_TO_CHECKPOINT_DIRECTORY --device 0"

For test performance, please use the original SQuAD, MultiNLI, and WikiSQL evaluation systems. For WikiSQL, there is a detailed walk-through of how to get test numbers in the section of this document concerning pretrained models.

Pretrained Models

This model is the best MQAN trained on decaNLP so far. It was trained first on SQuAD and then on all of decaNLP. It uses CoVe as well. You can obtain this model and run it on the validation sets with the following.

wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/mqan_decanlp_better_sampling_cove_cpu.tgz
tar -xvzf mqan_decanlp_better_sampling_cove_cpu.tgz
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate validation --path /decaNLP/mqan_decanlp_better_sampling_cove_cpu/ --checkpoint_name iteration_560000.pth --device 0 --silent"

This model is the best MQAN trained on WikiSQL alone; it established a new state of the art on that task by several points: 73.2 / 75.4 / 81.4 (ordered test logical form accuracy, unordered test logical form accuracy, test execution accuracy).

wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/mqan_wikisql_cpu.tar.gz
tar -xvzf mqan_wikisql_cpu.tar.gz
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate validation --path /decaNLP/mqan_wikisql_cpu --checkpoint_name iteration_57000.pth --device 0 --tasks wikisql"
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate test --path /decaNLP/mqan_wikisql_cpu --checkpoint_name iteration_57000.pth --device 0 --tasks wikisql"
docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/convert_to_logical_forms.py /decaNLP/.data/ /decaNLP/mqan_wikisql_cpu/iteration_57000/validation/wikisql.txt /decaNLP/mqan_wikisql_cpu/iteration_57000/validation/wikisql.ids.txt /decaNLP/mqan_wikisql_cpu/iteration_57000/validation/wikisql_logical_forms.jsonl valid"
docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/convert_to_logical_forms.py /decaNLP/.data/ /decaNLP/mqan_wikisql_cpu/iteration_57000/test/wikisql.txt /decaNLP/mqan_wikisql_cpu/iteration_57000/test/wikisql.ids.txt /decaNLP/mqan_wikisql_cpu/iteration_57000/test/wikisql_logical_forms.jsonl test"
git clone https://github.com/salesforce/WikiSQL.git # git@github.com:salesforce/WikiSQL.git for ssh
docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/WikiSQL/evaluate.py /decaNLP/.data/wikisql/data/dev.jsonl /decaNLP/.data/wikisql/data/dev.db /decaNLP/mqan_wikisql_cpu/iteration_57000/validation/wikisql_logical_forms.jsonl" # assumes that you have data stored in .data
docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/WikiSQL/evaluate.py /decaNLP/.data/wikisql/data/test.jsonl /decaNLP/.data/wikisql/data/test.db /decaNLP/mqan_wikisql_cpu/iteration_57000/test/wikisql_logical_forms.jsonl" # assumes that you have data stored in .data

You can similarly follow the instructions above for downloading, decompressing, and loading pretrained models for other individual tasks (single-task models):

wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/squad_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/cnn_dailymail_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/iwslt.en.de_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/sst_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/multinli.in.out_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/woz.en_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/srl_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/zre_mqan_cove_cpu.tgz
wget https://s3.amazonaws.com/research.metamind.io/decaNLP/pretrained/schema_mqan_cove_cpu.tgz
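
For example, to evaluate the single-task SQuAD model after downloading it (a sketch: the extracted directory name is assumed to match the archive, and <CHECKPOINT_NAME> is a placeholder for the .pth file inside it):

tar -xvzf squad_mqan_cove_cpu.tgz
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate validation --path /decaNLP/squad_mqan_cove_cpu --checkpoint_name <CHECKPOINT_NAME> --device 0 --tasks squad"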

Inference on a Custom Dataset

Using a pretrained model or a model you have trained yourself, you can easily run on new, custom datasets by following the instructions below. In this example, we use the checkpoint for the best MQAN trained on the entirety of decaNLP (see the Pretrained Models section for how to obtain this checkpoint) to run on my_custom_dataset.

mkdir -p .data/my_custom_dataset/
touch .data/my_custom_dataset/val.jsonl
echo '{"context": "The answer is answer.", "question": "What is the answer?", "answer": "answer"}' >> .data/my_custom_dataset/val.jsonl 
# TODO add your own examples line by line to val.jsonl in the form of a JSON dictionary, as demonstrated above.
# Make sure to delete the first line if you don't want the demonstrated example.
nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate valid --path /decaNLP/mqan_decanlp_qa_first_cpu --checkpoint_name iteration_1140000.pth --tasks my_custom_dataset"

You should get output that ends with something like this:

**  /decaNLP/mqan_decanlp_qa_first_cpu/iteration_1140000/valid/my_custom_dataset.txt  already exists -- this is where predictions are stored **
**  /decaNLP/mqan_decanlp_qa_first_cpu/iteration_1140000/valid/my_custom_dataset.gold.txt  already exists -- this is where ground truth answers are stored **
**  /decaNLP/mqan_decanlp_qa_first_cpu/iteration_1140000/valid/my_custom_dataset.results.txt  already exists -- this is where metrics are stored **
{"em":0.0,"nf1":100.0,"nem":100.0}

{'em': 0.0, 'nf1': 100.0, 'nem': 100.0}
Prediction: the answer
Answer: answer

From this output, you can see where predictions are stored along with ground truth outputs and metrics. If you want to rerun using this model checkpoint on this particular dataset, you'll need to pass the --overwrite_predictions argument to predict.py. If you do not want predictions and answers printed to stdout, then pass the --silent argument to predict.py.

The metrics dictionary should have printed something like {'em': 0.0, 'nf1': 100.0, 'nem': 100.0}. Here em stands for exact match: the percentage of predictions in which every token matches the ground truth answer exactly. The normalized version, nem, lowercases and strips punctuation -- all of our models are trained on lowercased data, so nem is a more accurate representation of performance than em for our models. For tasks that are typically treated as classification problems, these exact match scores should correspond to accuracy. nf1 is a normalized (lowercased, punctuation stripped) F1 score over the predicted and ground truth sequences. If you would like to add additional metrics that are already implemented, you can try adding --bleu (the typical metric for machine translation) and --rouge (the typical metric for summarization). Other metrics can be implemented following the patterns in metrics.py.
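
For example, combining the flags described above, a rerun of the custom dataset evaluation that also computes BLEU and ROUGE might look like this (a sketch assembled from the earlier command):

nvidia-docker run -it --rm -v `pwd`:/decaNLP/ -u $(id -u):$(id -g) bmccann/decanlp:cuda9_torch041 bash -c "python /decaNLP/predict.py --evaluate valid --path /decaNLP/mqan_decanlp_qa_first_cpu --checkpoint_name iteration_1140000.pth --tasks my_custom_dataset --overwrite_predictions --bleu --rouge"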

Citation

If you use this in your work, please cite The Natural Language Decathlon: Multitask Learning as Question Answering.

@article{McCann2018decaNLP,
  title={The Natural Language Decathlon: Multitask Learning as Question Answering},
  author={Bryan McCann and Nitish Shirish Keskar and Caiming Xiong and Richard Socher},
  journal={arXiv preprint arXiv:1806.08730},
  year={2018}
}

Contact

Contact: [email protected] and [email protected]
