Can LLMs Follow Simple Rules?

As of March 7, 2024, we have updated the repo with a revised v2.0 benchmark containing new test cases. Please see our updated paper for more details.

[demo] [website] [paper]

This repo contains the code for RuLES: Rule-following Language Evaluation Scenarios, a benchmark for evaluating rule-following in language models.

Setup

  1. Install as an editable package:
pip install -e .
  2. Create OpenAI/Anthropic/Google API keys and write them to a .env file (see the sketch after this list for one way these keys can be loaded):
OPENAI_API_KEY=<key>
ANTHROPIC_API_KEY=<key>
GOOGLE_API_KEY=<key>
  3. Download Llama-2 or other HuggingFace models to a local path using snapshot_download:
>>> from huggingface_hub import snapshot_download
>>> snapshot_download(repo_id="meta-llama/Llama-2-7b-chat-hf", local_dir="/my_models/Llama-2-7b-chat-hf", local_dir_use_symlinks=False)
  4. (Optional) Download and extract evaluation logs to logs/.
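For reference, here is a minimal sketch of reading those keys from Python. It assumes the python-dotenv package; the repo's own loading code may differ.

import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment
openai_key = os.environ["OPENAI_API_KEY"]
anthropic_key = os.environ["ANTHROPIC_API_KEY"]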

Manual red teaming

Launch an interactive session with:

python scripts/manual_redteam.py --provider openai --model gpt-3.5-turbo-0613 --scenario Authentication --stream

Explore test cases

Visualize test cases with:

python scripts/show_testcases.py --test_dir data/redteam

Evaluation

Our main evaluation script is scripts/evaluate.py, but since it supports many evaluation options, the code may be hard to follow. Please see scripts/evaluate_simple.py for a simplified version of the evaluation script.

We wrap API calls with unlimited retries for ease of evaluation. You may want to change the retry functionality to suit your needs.
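If unlimited retries are not appropriate for your setup, a bounded retry loop with exponential backoff is one alternative. This is a sketch only; call_with_retries and call_api are hypothetical names, not part of this repo:

import time

def call_with_retries(call_api, max_retries=5, base_delay=1.0):
    # Retry a flaky API call a bounded number of times instead of forever.
    for attempt in range(max_retries):
        try:
            return call_api()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after max_retries attempts
            time.sleep(base_delay * 2 ** attempt)  # back off: 1s, 2s, 4s, ...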

Evaluate on redteam test suite

python scripts/evaluate.py --provider openai --model gpt-3.5-turbo-0613 --test_dir data/redteam --output_dir logs/redteam

Evaluate a local model using vLLM (GPU required)

When evaluating models using vLLM, evaluate.py launches an API server in-process. Concurrency should be set much higher for vLLM models. Run evaluation with:

python scripts/evaluate.py --provider vllm --model /path/to/model --conv_template llama-2 --concurrency 100
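Higher concurrency helps because vLLM batches whatever requests are in flight on the GPU. The usual pattern is to cap the number of concurrent requests with a semaphore, as in this conceptual sketch (query_model is a hypothetical coroutine, not the repo's actual implementation):

import asyncio

async def run_all(test_cases, query_model, concurrency=100):
    # Keep up to `concurrency` requests in flight; vLLM batches them on the GPU.
    sem = asyncio.Semaphore(concurrency)

    async def run_one(case):
        async with sem:
            return await query_model(case)

    return await asyncio.gather(*(run_one(c) for c in test_cases))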

Visualize evaluation results

View detailed results on a single test suite with:

python scripts/read_results.py --single_dir logs/redteam/gpt-3.5-turbo-0613

After evaluating on all three test suites (Benign, Basic, and Redteam), compute the aggregate RuLES score with:

python scripts/read_scores.py --model_name gpt-3.5-turbo-0613

Finally, you can view responses to individual test cases with:

python scripts/show_responses.py --output_dir logs/redteam/gpt-3.5-turbo-0613 --failed_only

GCG attack (GPU required)

Run the GCG attack with randomized scenario parameters in each iteration:

cd gcg_attack
python main_gcg.py --model /path/to/model --conv_template <template_name> --scenario Authentication --behavior withholdsecret

Output logs will be stored in logs/gcg_attack.

To then evaluate models on the direct_request test cases with the resulting GCG suffixes:

python scripts/evaluate.py --provider vllm --model /path/to/model --suffix_dir logs/gcg_attack/<model_name> --test_dir data/direct_request --output_dir logs/direct_request_gcg

Fine-tuning

To reproduce our fine-tuning experiments with Llama-2 7B Chat on the basic_like test cases:

cd finetune
./finetune_llama.sh

We used 4x A100-80G GPUs for fine-tuning Llama-2 7B Chat and Mistral 7B Instruct; you may be able to adjust the DeepSpeed settings to run on smaller or fewer GPUs.
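For instance, enabling ZeRO stage 3 with CPU offload and raising gradient accumulation generally trades speed for memory. The values below are illustrative only and are not taken from this repo's actual config:

import json

ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,  # keep the effective batch size constant
    "zero_optimization": {
        "stage": 3,  # partition optimizer state, gradients, and parameters
        "offload_optimizer": {"device": "cpu"},
        "offload_param": {"device": "cpu"},
    },
}
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)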

Conversation Templates

When evaluating community models, we mostly rely on FastChat conversation templates (documented in model_templates.yaml), with the exception of a few custom templates added to llm_rules/templates.py.
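For reference, this sketch shows how a FastChat conversation template turns messages into a single prompt string; the template name and messages here are just examples:

from fastchat.conversation import get_conv_template

conv = get_conv_template("llama-2")  # fetch the llama-2 chat template
conv.set_system_message("You are a helpful assistant.")
conv.append_message(conv.roles[0], "Can LLMs follow simple rules?")
conv.append_message(conv.roles[1], None)  # empty slot for the model's reply
print(conv.get_prompt())  # fully formatted prompt, including [INST] tags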

Citation

@article{mu2023rules,
    title={Can LLMs Follow Simple Rules?},
    author={Norman Mu and Sarah Chen and
            Zifan Wang and Sizhe Chen and David Karamardian and
            Lulwa Aljeraisy and Basel Alomair and
            Dan Hendrycks and David Wagner},
    journal={arXiv},
    year={2023}
}