# 🛡️ Guardrail ML

Guardrail ML is a toolkit for developers to safely bring AI from prototype to production. Our SDK helps you build production-grade LLM applications quickly and reliably.
## Benefits

- 🚀 build production-grade LLM applications quickly and reliably
- 📝 customize to your unique use case and automate workflows
- 💸 improve performance, reduce cost, and deploy with confidence
## Features

- 🛠️ evaluate and track prompts and LLM outputs with automated text and NLP metrics
- 🤖 benchmark domain-specific tasks with automated agent-simulated conversations
- 🛡️ safeguard LLMs with our customizable firewall and enforce policies with guardrails
## Quickstart

### Installation 💻

- Get API Key

To install guardrail-ml, use the Python Package Index (PyPI) as follows:

```bash
pip install guardrail-ml
```
## Usage 🛡️🔗

```python
import sqlite3

import pandas as pd

from guardrail.client import run_metrics
from guardrail.client import run_simple_metrics
from guardrail.client import create_dataset

# Output/Prompt Metrics
run_metrics(output="Guardrail is an open-source toolkit for building domain-specific language models with confidence. From domain-specific dataset creation and custom evaluations to safeguarding and redteaming aligned with policies, our tools accelerate your LLM workflows to systematically derisk deployment.",
            prompt="What is guardrail-ml?",
            model_uri="llama-v2-guanaco")

# View Logs
con = sqlite3.connect("logs.db")
df = pd.read_sql_query("SELECT * from logs", con)
df.tail(20)

# Generate Dataset from PDF
create_dataset(model="OpenAssistant/falcon-7b-sft-mix-2000",
               tokenizer="OpenAssistant/falcon-7b-sft-mix-2000",
               file_path="example-docs/Medicare Appeals Paper FINAL.pdf",
               output_path="./output.json",
               load_in_4bit=True)
```
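The log-viewing step above reads a local SQLite database with pandas. If pandas is unavailable, the same inspection works with the standard-library `sqlite3` module alone. The schema below is a hypothetical stand-in, not guardrail-ml's actual `logs` table, so treat this as a sketch of the pattern rather than the library's API:

```python
import sqlite3

# Hypothetical schema for illustration only -- the columns guardrail-ml
# actually writes to logs.db may differ.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE logs (id INTEGER PRIMARY KEY, prompt TEXT, output TEXT)"
)
rows = [
    ("What is guardrail-ml?", "An LLM safety toolkit."),
    ("Summarize the PDF.", "A paper on Medicare appeals."),
]
con.executemany("INSERT INTO logs (prompt, output) VALUES (?, ?)", rows)
con.commit()

# Fetch the most recent entries, newest first (analogous to df.tail(20))
recent = con.execute(
    "SELECT prompt, output FROM logs ORDER BY id DESC LIMIT 20"
).fetchall()
for prompt, output in recent:
    print(prompt, "->", output)
```

Against a real `logs.db`, you would connect to the file path instead of `":memory:"` and skip the table creation.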
## More Colab Notebooks

- 4-bit QLoRA of llama-v2-7b with dolly-15k (07/21/23)
- Fine-Tuning Dolly 2.0 with LoRA