
HumanPrompt


HumanPrompt is a framework for easier human-in-the-loop design, management, sharing, and usage of prompts and prompt methods. It is specially designed for researchers. It is still in progress 👶, and we highly welcome new contributions of methods and modules. Check out our proposal here.


To start

First, clone this repo, then run:

pip install -e .

This will install the humanprompt package and create a soft link hub pointing to ./humanprompt/artifacts/hub.

Then you need to set some environment variables, such as your OpenAI API key:

export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
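
If you prefer to set the key from Python (for example, in a notebook), a minimal sketch using only the standard library:

import os

# Set the key for the current process only; the placeholder value
# "YOUR_OPENAI_API_KEY" is yours to replace.
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"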

How you use this repo then depends on your goal. For now, its mission is to help researchers verify their ideas, so we have made it very flexible to extend and use.

A minimal example of running a method is shown below. Usage is quite simple and should feel familiar if you have used Hugging Face transformers before.

For example, to run Chain-of-Thought on CommonsenseQA:

from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

# Get one built-in method
method = AutoMethod.from_config(method_name="cot")

# Get one dataset, select one example for demo
data = DatasetLoader.load_dataset(dataset_name="commonsense_qa", dataset_split="test")
data_item = data[0]

# Adapt the raw data to the method's input format (we will improve this part later)
data_item["context"] = "Answer choices: {}".format(
        " ".join(
            [
                "({}) {}".format(label.lower(), text.lower())
                for label, text in zip(
                data_item["choices"]["label"], data_item["choices"]["text"]
            )
            ]
        )
    )

# Run the method
result = method.run(data_item)
print(result)
print(data_item)
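
As a follow-up, a hedged sketch (not part of the repo) of scoring a handful of examples the same way. It assumes method.run returns the predicted answer label as a string, and it uses the answerKey field from the Hugging Face commonsense_qa schema, taking the validation split since the test answer keys are hidden:

# Assumptions: `method.run` returns the predicted label as a string,
# and `answerKey` holds the gold label (HF commonsense_qa schema).
data = DatasetLoader.load_dataset(
    dataset_name="commonsense_qa", dataset_split="validation"
)
total, correct = 5, 0
for item in list(data)[:total]:
    item["context"] = "Answer choices: {}".format(
        " ".join(
            "({}) {}".format(label.lower(), text.lower())
            for label, text in zip(item["choices"]["label"], item["choices"]["text"])
        )
    )
    prediction = str(method.run(item)).strip().lower()
    correct += int(prediction == item["answerKey"].lower())
print("accuracy on {} samples: {:.2f}".format(total, correct / total))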

Zero-shot text2SQL:

import os
from humanprompt.methods.auto.method_auto import AutoMethod
from humanprompt.tasks.dataset_loader import DatasetLoader

method = AutoMethod.from_config("db_text2sql")
data = DatasetLoader.load_dataset(dataset_name="spider", dataset_split="validation")
data_item = data[0]

data_item["db"] = os.path.join(
data_item["db_path"], data_item["db_id"], data_item["db_id"] + ".sqlite"
)

result = method.run(data_item)
print(result)
print(data_item)
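
To sanity-check the predicted query, a minimal sketch using the standard-library sqlite3 module, assuming result is a plain SQL string (the method's actual return type may differ):

import sqlite3

# Run the predicted SQL against the example's database.
# Assumes `result` is a plain SQL string.
conn = sqlite3.connect(data_item["db"])
try:
    rows = conn.execute(result).fetchall()
    print(rows[:5])  # peek at the first few result rows
finally:
    conn.close()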

To accelerate your research

Config

We adopt a "one config, one experiment" paradigm to facilitate research, especially when benchmarking different prompting methods. In each experiment's config file (.yaml) under examples/configs/, you can configure the dataset, the prompting method, and the metrics.

The following is an example config file for the Chain-of-Thought method on GSM8K:

---
  dataset:
    dataset_name: "gsm8k"                # dataset name, aligned with huggingface dataset if loaded from it
    dataset_split: "test"                # dataset split
    dataset_subset_name: "main"          # dataset subset name, null if not used
    dataset_key_map:                     # mapping original dataset keys to humanprompt task keys to unify the interface
      question: "question"
      answer: "answer"
  method:
    method_name: "cot"                   # method name to initialize the prompting method class
    method_config_file_path: null        # method config file path, null if not used (will be overridden by method_args)
    method_args:
      client_name: "openai"              # LLM API client name, adopted from github.com/HazyResearch/manifest
      transform: "cot.gsm8k.transform_cot_gsm8k.CoTGSM8KTransform"  # user-defined transform class to build the prompts
      extract: "cot.gsm8k.extract_cot_gsm8k.CoTGSM8KExtract"        # user-defined extract class to extract the answers from output
      extraction_regex: ".*The answer is (.*).\n?"                  # user-defined regex to extract the answer from output
      prompt_file_path: "cot/gsm8k/prompt.txt"                      # prompt file path
      max_tokens: 512                    # max generated tokens
      temperature: 0                     # temperature for generated tokens
      engine: code-davinci-002           # LLM engine
      stop_sequence: "\n\n"              # stop sequence for generation
  metrics:
    - "exact_match"                      # metrics to evaluate the results

Users can create their own transform and extract classes to customize prompt construction and answer extraction, as sketched below. The prompt file can also be replaced or specified according to the user's needs.
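
For illustration, a hedged sketch of such a pair. The class names, method names, and the exact interface humanprompt expects are assumptions inferred from the config keys above, not a confirmed API:

import re

class MyTaskTransform:
    """Hypothetical transform: turn a raw data item into the final prompt."""

    def transform(self, data_item, prompt):
        # Append the current question to the few-shot prompt loaded
        # from prompt_file_path.
        return "{}\n\nQ: {}\nA:".format(prompt, data_item["question"])

class MyTaskExtract:
    """Hypothetical extract: pull the final answer out of the raw completion."""

    # Mirrors the extraction_regex idea from the config above.
    pattern = re.compile(r".*The answer is (.*?)\.?\s*$", re.DOTALL)

    def extract(self, raw_output):
        match = self.pattern.match(raw_output)
        return match.group(1).strip() if match else raw_output.strip()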

Run experiment

To run experiments, specify the experiment name and other meta configs on the command line, from the examples/ directory.

For example, the following command runs Chain-of-Thought on GSM8K:

python run_experiment.py \
  --exp_name cot-gsm8k \
  --num_test_samples 300

For a new combination of method and task, you can simply add a new config file under examples/configs/ and run the same command with its name, as in the example below.
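
For instance, assuming you added a (hypothetical) config examples/configs/zero_shot_cot-gsm8k.yaml pairing the built-in zero_shot_cot method with GSM8K, the invocation would be:

python run_experiment.py \
  --exp_name zero_shot_cot-gsm8k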

Architecture

.
├── examples
│   ├── configs                    # config files for experiments
│   ├── main.py                    # one sample demo script
│   └── run_experiment.py          # experiment script
├── hub                            # static files for methods and tasks
│   ├── cot                        # method Chain-of-Thought
│   │   ├── gsm8k                  # task GSM8K: prompt file, transform/extract classes, etc.
│   │   └── ...
│   ├── ama_prompting              # method Ask Me Anything
│   ├── binder                     # method Binder
│   ├── db_text2sql                # method text2sql
│   ├── react                      # method ReAct
│   ├── standard                   # method standard prompting
│   └── zero_shot_cot              # method zero-shot Chain-of-Thought
├── humanprompt                    # humanprompt package: building blocks for the complete prompting pipeline
│   ├── artifacts
│   │   ├── artifact.py
│   │   └── hub
│   ├── components                 # key components of the prompting pipeline
│   │   ├── aggregate              # aggregate classes to combine answers
│   │   ├── extract                # extract classes to pull answers from model output
│   │   ├── post_hoc.py            # post-hoc processing
│   │   ├── prompt.py              # prompt classes to build prompts
│   │   ├── retrieve               # retrieve classes for in-context examples
│   │   └── transform              # transform classes to map raw data to a method's input format
│   ├── evaluators                 # evaluators
│   │   └── evaluator.py           # evaluator class to score dataset results
│   ├── methods                    # prompting methods, usually one per paper
│   │   ├── ama_prompting          # Ask Me Anything (https://arxiv.org/pdf/2210.02441.pdf)
│   │   ├── binder                 # Binder (https://arxiv.org/pdf/2210.02875.pdf)
│   │   └── ...
│   ├── tasks                      # dataset loading and preprocessing
│   │   ├── add_sub.py             # AddSub dataset
│   │   ├── wikitq.py              # WikiTableQuestions dataset
│   │   └── ...
│   ├── third_party                # third-party packages
│   └── utils                      # utils
│       ├── config_utils.py
│       └── integrations.py
└── tests                          # test scripts
    ├── conftest.py
    ├── test_datasetloader.py
    └── test_method.py

Contributing

This repository is designed to give researchers quick usage and easy manipulation of different prompt methods. We spent a lot of effort making it easy to extend and use, so we hope you will contribute to it.

If you are interested in contributing your method to this framework, you can:

  1. Open an issue about the method you need, and we will add it to our TODO list and implement it as soon as possible.
  2. Add your method to the humanprompt/methods folder yourself. To do that, follow these steps:
    1. Clone the repo.
    2. Create a branch from the main branch, named after your method.
    3. Commit your code to your branch. You need to:
      1. add your method's code under ./humanprompt/methods/your_method_name,
      2. create a hub for your method in ./hub/your_method_name,
      3. make sure ./hub/your_method_name contains an examples folder that configures the basic usage of the method,
      4. add a minimal demo in ./examples for running and testing your method.
    4. Create a usage demo in the ./examples folder.
    5. Open a PR to merge your branch into the main branch.
    6. We will handle the last few steps to make sure your method is well integrated into the framework.

Pre-commit

We use pre-commit to control code quality. Before you commit, run the commands below to check your code and fix any issues.

pip install pre-commit
pre-commit install # install all hooks
pre-commit run --all-files # trigger all hooks

You can use git commit --no-verify to skip the hooks and let us handle any remaining issues later on.

Citation

If you find this repo useful, please cite our project and Manifest:

@software{humanprompt,
  author = {Tianbao Xie and
            Zhoujun Cheng and
            Yiheng Xu and
            Peng Shi and
            Tao Yu},
  title = {A framework for human-readable prompt-based method with large language models},
  howpublished = {\url{https://github.com/hkunlp/humanprompt}},
  year = 2022,
  month = {October}
}
@misc{orr2022manifest,
  author = {Orr, Laurel},
  title = {Manifest},
  year = {2022},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/HazyResearch/manifest}},
}
