
GeDi: Generative Discriminator Guided Sequence Generation


Official implementation of GeDi: Generative Discriminator Guided Sequence Generation

Blog post here

Colab Notebook on controlling topic using GeDi here


Updates

Sept 29, 2020: Adding support for GeDi-guided GPT-3 generation (API key needed)

Introduction

GeDi is a method of using class-conditional language models (which we refer to as generative discriminators, or GeDis) to guide generation from other (potentially much larger) language models. This has several advantages over finetuning large language models directly, including:

  • significantly less training computation.
  • maintaining the diversity of the original language model (if we finetune a large pretrained language model on a dataset for a specific attribute, we will likely reduce its broad generation capabilities).
  • teaching the language model what not to generate. This is especially useful for applications like detoxification.

GeDi is a form of discriminator-guided generation. A discriminator that can classify an attribute could be used to guide language model generation towards that attribute by classifying the sequences that result from candidate next tokens. However, using a normal discriminator (such as BERT) for this would be very computationally expensive during generation, since every candidate next token would have to be fed to the discriminator one by one to be classified. Generative discriminators, by contrast, can classify all candidate next tokens very efficiently during generation using Bayes' rule (see Section 3.1 of the paper). As an added bonus, generative discriminators can be used as zero-shot classifiers, and can therefore guide generation towards unseen topics.
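
Concretely, at each decoding step the GeDi (a class-conditional LM run once with the desired control code and once with the undesired one) gives next-token distributions for both classes, and Bayes' rule turns these into a per-candidate-token class posterior that reweights the base LM. Below is a minimal PyTorch sketch of one such step; the function name and the omega default are illustrative, and it omits the length normalization and the filter_p/target_p filtering used in the actual repo:

    import torch

    def gedi_step(base_logits, pos_logits, neg_logits,
                  log_prior_pos, log_prior_neg, omega=30.0):
        # base_logits: next-token logits from the base LM (e.g. GPT2-XL), shape [V]
        # pos_logits / neg_logits: next-token logits from the GeDi CC-LM, conditioned
        #   on the desired / undesired control code, shape [V]
        # log_prior_pos / log_prior_neg: running log P(x_<t | c) per class (scalars)
        log_seq_pos = log_prior_pos + torch.log_softmax(pos_logits, dim=-1)  # log P(x_<=t | c)
        log_seq_neg = log_prior_neg + torch.log_softmax(neg_logits, dim=-1)  # log P(x_<=t | c_bar)
        # Bayes' rule over the two classes, vectorized over all V candidate next tokens:
        log_posterior = log_seq_pos - torch.logsumexp(
            torch.stack([log_seq_pos, log_seq_neg]), dim=0)
        # Weight the base LM by the class posterior (omega plays the role of
        # \omega in Equation 6) and renormalize:
        guided = torch.log_softmax(base_logits, dim=-1) + omega * log_posterior
        return torch.log_softmax(guided, dim=-1)  # guided next-token log-probs

Because each class-conditional distribution comes from a single forward pass over the prefix, all V candidate tokens are scored at once, rather than one by one as a BERT-style discriminator would require.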

Dependencies

  • Python 3.7 and PyTorch 1.4 (we recommend creating a container from the official pytorch/pytorch:1.4-cuda10.1-cudnn7-devel PyTorch Docker image).

  • Run scripts/setup.sh:

    cd scripts
    bash setup.sh
    

    This will install the following:

    • Transformers v2.8.0 and its example-specific requirements mentioned here
    • (optional) Apex (details here) for fp16 training and generation (installing Apex takes a while; comment out the corresponding lines in setup.sh if you want to skip it)
    • wget and unzip (to download and unzip data and model checkpoints)

Generating from models used in the paper

  • First download the models:

    cd scripts
    bash get_models.sh

    This downloads and saves the topic, sentiment, and detoxifier models in the folder ../pretrained_models.

  • To generate, use bash run_generation.sh, which calls ../generate_GeDi.py with the appropriate arguments (set for topic generation by default).

Important arguments include:

  • --mode can be set to topic, sentiment, or detoxify
  • --gen_type can be set to gedi for GeDi guided generation, cclm for class conditional generation, or gpt2 to generate from raw GPT-2
  • --gen_length max length of generation
  • --gedi_model_name_or_path path to the GeDi model; if not given, the script assumes you ran bash get_models.sh and infers the model directory from the --mode argument
  • --filter_p equal to 1 - \rho in Equation 7 of the paper
  • --target_p equal to \tau from the paper
  • --disc_weight exponent for posterior weighting (\omega in Equation 6 of the paper)
  • --fp16 converts GPT2-XL weights to fp16 for faster generation and less GPU memory usage
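
For example, a GeDi-guided detoxification run with fp16 might look like the following (the flag values here are illustrative, not necessarily the script defaults):

python ../generate_GeDi.py --mode detoxify --gen_type gedi --gen_length 200 --disc_weight 30 --fp16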

Running will allow you to enter control codes and prompts for generation in a continuous loop until you exit.

Topic generation (Section 6.3 & 6.4 of the paper)

  • Set --mode topic in scripts/run_generation.sh
  • You will be prompted to give a topic code. The model was trained on world, sports, business, and science, but can often generate other topics zero-shot, for instance space, fire, climate, education
  • If the topic code you give is more than one BPE token, the model often struggles because the 4 training topics were all 1 BPE token. You will be warned that this might not work, but can proceed by hitting enter again (or can type a new topic code).
  • After the topic code, you will be asked to give a prompt to the model to condition on for generation.

Sentiment control (Section 6.1 of the paper)

  • Set --mode sentiment in scripts/run_generation.sh
  • The model can controllably generate positive or negative text. When generalizing to other domains such as stories, this often translates to positive/negative mood or tone of the story (since sentiment implies an opinion).
  • The model is set to positive sentiment by default. You will be given the opportunity to switch to negative sentiment by typing n. Note that the negative model can be very negative, and this sometimes results in toxic or offensive samples.
  • You will then be asked to give a prompt to the model to condition on for generation.

Detoxification (Section 6.2 of the paper)

  • Set --mode detoxify in scripts/run_generation.sh
  • This mode can be used to avoid generating toxic or offensive text.
  • You will then be asked to give a prompt to the model to condition on for generation.
  • GeDi can often find a way to navigate especially aggressive prompts, but it occasionally still generates toxic text when given certain prompts. We observed this can be a problem for longer generations.

Class-conditional LM and GPT-2 generation

  • Two of the baselines we consider are generating from GPT-2 (which gives the same result regardless of control codes), and generating from the GeDi model directly as a class-conditional language model (instead of using it to guide generation from GPT-2).
  • Set --gen_type gpt2 to generate from GPT-2, and --gen_type cclm to generate directly from the GeDi as a class-conditional language model. --gen_type cclm corresponds to all experiments in Section 5 of the paper, and the CC-LM baselines in Section 6.1.

GPT-3 generation (added after paper, API access needed)

  • If you have your own GPT-3 API secret key, you can use GeDi to guide decoding from GPT-3.
  • This is somewhat limited, since the GPT-3 API only allows access to the top 100 next-token log probabilities.
  • It reuses the settings tuned for controlling GPT-2 (which uses all next-token log probs); retuning for GPT-3 could give better results.
  • It is also slow (up to 1 second per token) because modifying GPT-3 decoding requires calling the API one token at a time.

To control sentiment from GPT-3 using your API key (it should have the prefix "sk-"):

pip install openai

python ../generate_GeDi.py --penalize_cond --gen_length 100 --mode sentiment --gpt3_api_key sk-xxxxxxxx

You can also try changing the --mode or other arguments. To generate directly from GPT-3 without GeDi using our same greedy decoding scheme:

python ../generate_GeDi.py --penalize_cond --gen_length 100 --mode sentiment --gen_type gpt2 --gpt3_api_key sk-xxxxxxx
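
Under the hood, GeDi-guided GPT-3 decoding has to rebuild the next-token distribution from the API one token at a time. A minimal sketch of that per-token query, using the legacy (pre-1.0) openai Completion endpoint that was current when this feature was added (the engine name and response handling here are our assumptions, not necessarily what ../generate_GeDi.py does):

    import openai  # legacy (pre-1.0) openai package

    openai.api_key = "sk-xxxxxxxx"  # your GPT-3 secret key

    def gpt3_top_logprobs(prompt):
        # One API call returns log probs for (at most) the top 100 next tokens,
        # which is why guided decoding proceeds one token per call.
        resp = openai.Completion.create(
            engine="davinci",  # assumed engine name
            prompt=prompt,
            max_tokens=1,
            logprobs=100,
        )
        return resp["choices"][0]["logprobs"]["top_logprobs"][0]  # {token: logprob}

GeDi then rescores just these top tokens with the class posterior, appends the chosen token to the prompt, and repeats, which accounts for the roughly one-second-per-token speed.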

Train your own GeDi

  • This repository includes code to train a topic GeDi using GeDi training.
  • This training script differs slightly from the one used to train the pretrained model: the pretrained model used only half of AG News, and there were some slight differences in preprocessing.
  • This runs in about 5 hours on a 16GB V100 GPU on GCP.
  • First, download and process the topic data:

    cd scripts
    bash get_data.sh

  • Then run training with:

    bash run_training.sh

    This calls ../train_GeDi.py with the appropriate arguments.

  • The directory where the model is saved is specified by the --output_dir argument.
  • When generating from your trained GeDi, you will need to call ../generate_GeDi.py (called from bash run_generation.sh) with --gedi_model_name_or_path set to the directory of your trained model.
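
For example, if run_training.sh saved your model to ../gedi_topic_retrained (a hypothetical path), generation would be invoked as:

python ../generate_GeDi.py --mode topic --gedi_model_name_or_path ../gedi_topic_retrained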

Citation

@article{KrauseGeDi2020,
  title={{GeDi: Generative Discriminator Guided Sequence Generation}},
  author={Krause, Ben and Gotmare, Akhilesh Deepak and McCann, Bryan and Keskar, Nitish Shirish and Joty, Shafiq and Socher, Richard and Rajani, Nazneen Fatema},
  journal={arXiv preprint arXiv:2009.06367},
  year={2020}
}

License

The code is released under the BSD-3 License (see LICENSE.txt for details), but we also ask that users respect the following:

This software should not be used to promote or profit from violence, hate, and division, environmental destruction, abuse of human rights, or the destruction of people's physical and mental health.
