Prompt Tuning

This is the PyTorch implementation of "The Power of Scale for Parameter-Efficient Prompt Tuning" (Lester et al., 2021).
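
At its core, prompt tuning freezes the pretrained model and learns only a small matrix of "soft prompt" embeddings that is prepended to the input embeddings. The following is a minimal sketch of that idea in PyTorch; the names used here (`SoftPrompt`, `wte`, `embed_dim`) are illustrative, not this repository's actual API (see model.py for that).

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable embeddings prepended to the input embeddings (sketch only)."""

    def __init__(self, n_tokens, embed_dim, wte,
                 initialize_from_vocab=True, random_range=0.5):
        super().__init__()
        if initialize_from_vocab:
            # Copy the first n_tokens rows of the vocabulary embedding table.
            init = wte.weight[:n_tokens].clone().detach()
        else:
            # Uniform random initialization in [-random_range, random_range].
            init = torch.empty(n_tokens, embed_dim).uniform_(-random_range, random_range)
        self.learned_embedding = nn.Parameter(init)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim)
        batch_size = input_embeds.size(0)
        prompt = self.learned_embedding.unsqueeze(0).expand(batch_size, -1, -1)
        # Output: (batch, n_tokens + seq_len, embed_dim)
        return torch.cat([prompt, input_embeds], dim=1)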

Currently, we support the following Hugging Face models:

  • GPT2LMHeadModel

Usage

See example.ipynb for more details.

from model import GPT2PromptTuningLM

# Number of soft prompt tokens to prepend to every input.
n_prompt_tokens = 20
# If True, the soft prompt is initialized from the model's vocabulary embeddings;
# otherwise, set `random_range` to initialize it uniformly at random.
init_from_vocab = True
# random_range = 0.5

# Initialize GPT-2 with a soft prompt.
model = GPT2PromptTuningLM.from_pretrained(
    "gpt2",
    n_tokens=n_prompt_tokens,
    initialize_from_vocab=init_from_vocab,
)
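
Because only the soft prompt is trained, the usual next step is to freeze the GPT-2 backbone and hand just the prompt parameters to the optimizer. The sketch below assumes the wrapper exposes the prompt as a `soft_prompt` parameter and keeps the standard GPT2LMHeadModel forward signature; both are guesses, so check model.py and example.ipynb for the actual names.

from torch.optim import AdamW
from transformers import GPT2Tokenizer

# Freeze every parameter of the pretrained backbone...
for param in model.parameters():
    param.requires_grad = False
# ...then unfreeze only the learned prompt embeddings.
# NOTE: `soft_prompt` is an assumed attribute name.
model.soft_prompt.requires_grad = True

# Prompt tuning typically uses a much larger learning rate than
# full fine-tuning (the paper reports 0.3).
optimizer = AdamW([model.soft_prompt], lr=0.3)

# One illustrative training step, assuming the wrapper accepts
# `input_ids` and `labels` and returns a loss like GPT2LMHeadModel.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
batch = tokenizer("an example sentence", return_tensors="pt")
outputs = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()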

Reference

Lester, B., Al-Rfou, R., & Constant, N. (2021). The Power of Scale for Parameter-Efficient Prompt Tuning. EMNLP 2021. arXiv:2104.08691.