• Stars: 3,619
• Rank: 12,240 (Top 0.3%)
• Language: Python
• License: Apache License 2.0
• Created: about 2 years ago
• Updated: 6 months ago

Repository Details

A language for constraint-guided and efficient LLM programming.
LMQL

A programming language for large language models.
Documentation »

Explore Examples · Playground IDE · Report Bug

LMQL is a programming language for large language models (LLMs) based on a superset of Python. LMQL offers a novel way of interweaving traditional programming with the ability to call LLMs in your code. It goes beyond traditional templating languages by integrating LLM interaction natively at the level of your program code.

Explore LMQL

An LMQL program reads like standard Python, but top-level strings are interpreted as query strings: They are passed to an LLM, where template variables like [GREETINGS] are automatically completed by the model:

"Greet LMQL:[GREETINGS]\n" where stops_at(GREETINGS, ".") and not "\n" in GREETINGS

if "Hi there" in GREETINGS:
    "Can you reformulate your greeting in the speech of \
     victorian-era English: [VIC_GREETINGS]\n" where stops_at(VIC_GREETINGS, ".")

"Analyse what part of this response makes it typically victorian:\n"

for i in range(4):
    "-[THOUGHT]\n" where stops_at(THOUGHT, ".")

"To summarize:[SUMMARY]"

Program Output:


LMQL allows you to express programs that contain both traditional algorithmic logic and LLM calls. At any point during execution, you can prompt an LLM on program variables in combination with standard natural language prompting, to leverage model reasoning capabilities in the context of your program.

To better control LLM behavior, you can use the where keyword to specify constraints and data types of the generated text. This enables guidance of the model's reasoning process, and constraining of intermediate outputs using an expressive constraint language.
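As an illustration (a sketch in the style of the example above; the question and variable name are made up), a where clause can also impose a data type on a variable, e.g. forcing it to decode as an integer:

```
"Q: How many continents are there?\n"
"A: [COUNT]" where INT(COUNT)
```

After decoding, COUNT is available to the surrounding program as a Python int, so it can be used directly in arithmetic or control flow.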

Beyond this linear form of scripting, LMQL also supports a number of decoding algorithms to execute your program, such as argmax, sample or even advanced branching decoders like beam search and best_k.
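For instance (a sketch using LMQL's standalone query syntax; the model name is only an example), the decoding algorithm is named at the top of the query, before the prompt clause:

```
beam(n=4)
    "Name a typically Victorian turn of phrase: [PHRASE]"
from
    "openai/text-davinci-003"
where
    stops_at(PHRASE, ".")
```

With beam(n=4), the runtime explores four beams in parallel and returns the highest-scoring completions, whereas argmax would greedily decode a single one.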

Learn more about LMQL by exploring the Example Showcase, by running your own programs in our browser-based Playground IDE, or by reading the documentation.

Feature Overview

LMQL is designed to make working with language models like OpenAI and 🤗 Transformers more efficient and powerful through its advanced functionality, including multi-variable templates, conditional distributions, constraints, datatypes and control flow.

Getting Started

To install the latest version of LMQL, run the following command with Python 3.10 installed.

pip install lmql

Local GPU Support: If you want to run models on a local GPU, make sure to install LMQL in an environment with a GPU-enabled installation of PyTorch >= 1.11 (cf. https://pytorch.org/get-started/locally/) and install via pip install lmql[hf].

Running LMQL Programs

After installation, you can launch the LMQL playground IDE with the following command:

lmql playground

Using the LMQL playground requires an installation of Node.js. If you are in a conda-managed environment, you can install Node.js via conda install nodejs=14.20 -c conda-forge. Otherwise, please see the official Node.js website https://nodejs.org/en/download/ for instructions on how to install it on your system.

This launches a browser-based playground IDE, including a showcase of many exemplary LMQL programs. If the IDE does not launch automatically, go to http://localhost:3000.

Alternatively, lmql run can be used to execute local .lmql files. Note that when using local HuggingFace Transformers models in the Playground IDE or via lmql run, you have to first launch an instance of the LMQL Inference API for the corresponding model via the command lmql serve-model.
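Assuming a local Transformers model such as gpt2 (both the model and the file name below are purely illustrative), the two steps could look like this:

```shell
# terminal 1: serve the model via the LMQL Inference API
lmql serve-model gpt2

# terminal 2: run a local query file against the served model
lmql run my-query.lmql
```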

Configuring OpenAI API Credentials

If you want to use OpenAI models, you have to configure your API credentials. To do so, create a file api.env in the active working directory, with the following contents.

openai-org: <org identifier>
openai-secret: <api secret>

For system-wide configuration, you can also create an api.env file at $HOME/.lmql/api.env or at the project root of your LMQL distribution (e.g. src/ in a development copy).
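To make the expected format concrete, here is a hypothetical helper (not part of LMQL's API) showing how "key: value" lines like those above can be read in plain Python:

```python
# Hypothetical helper (not part of LMQL) illustrating the api.env format:
# one "key: value" pair per line.
def parse_api_env(text: str) -> dict:
    """Parse 'key: value' lines, ignoring blanks and '#' comments."""
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        creds[key.strip()] = value.strip()
    return creds

sample = "openai-org: org-123\nopenai-secret: sk-abc"
assert parse_api_env(sample)["openai-org"] == "org-123"
```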

Installing the Latest Development Version

To install the latest (bleeding-edge) version of LMQL, you can also run the following command:

pip install git+https://github.com/eth-sri/lmql

This will install the lmql package directly from the main branch of this repository. We do not continuously test the main version, so it may be less stable than the latest PyPI release.

Contributing

LMQL is a community-centric project. If you are interested in contributing to LMQL, please see the contributing guidelines for more information, and reach out to us via Discord. We are looking forward to your contributions!

Setting Up a Development Environment

To set up a conda environment for local LMQL development with GPU support, run the following commands:

# prepare conda environment
conda env create -f scripts/conda/requirements.yml -n lmql
conda activate lmql

# registers the `lmql` command in the current shell
source scripts/activate-dev.sh

Operating System: The GPU-enabled version of LMQL was tested to work on Ubuntu 22.04 with CUDA 12.0, and on Windows 10 via WSL2 with CUDA 11.7. The no-GPU version (see below) was tested to work on Ubuntu 22.04, macOS 13.2 Ventura, and Windows 10 via WSL2.

Development without GPU

This section outlines how to set up an LMQL development environment without local GPU support. Note that LMQL without local GPU support only supports the use of API-integrated models like openai/text-davinci-003. Please see the OpenAI API documentation (https://platform.openai.com/docs/models/gpt-3-5) to learn more about the set of available models.

To set up a conda environment for LMQL with no GPU support, run the following commands:

# prepare conda environment
conda env create -f scripts/conda/requirements-no-gpu.yml -n lmql-no-gpu
conda activate lmql-no-gpu

# registers the `lmql` command in the current shell
source scripts/activate-dev.sh

More Repositories

1. silq (Q#, 608 stars)
2. securify2 - Securify v2.0 (Solidity, 587 stars)
3. debin - Machine Learning to Deobfuscate Binaries (Python, 412 stars)
4. eran - ETH Robustness Analyzer for Deep Neural Networks (Python, 313 stars)
5. diffai - A certifiable defense against adversarial examples by training neural networks to be provably robust (Python, 217 stars)
6. securify - [DEPRECATED] Security Scanner for Ethereum Smart Contracts (Java, 215 stars)
7. Nice2Predict - Learning framework for program property prediction (C++, 201 stars)
8. language-model-arithmetic - Controlled Text Generation via Language Model Arithmetic (Python, 201 stars)
9. ilf - AI-based fuzzer based on imitation learning (Python, 149 stars)
10. ELINA - ELINA: ETH LIbrary for Numerical Analysis (C++, 129 stars)
11. psi - Exact Inference Engine for Probabilistic Programs (JetBrains MPS, 123 stars)
12. sven (Python, 95 stars)
13. dl2 - A framework for training neural networks with logical constraints over numerical values in the network (e.g. inputs, outputs, weights) and for querying networks for inputs fulfilling a logical formula (Python, 82 stars)
14. zkay - A programming language and compiler which enable automatic compilation of intuitive data privacy specifications to NIZK-enabled private smart contracts (Python, 81 stars)
15. astarix - AStarix: Fast and Optimal Sequence-to-Graph Aligner (C++, 72 stars)
16. TFix (JavaScript, 66 stars)
17. fastsmt - Learning to Solve SMT Formulas Fast (SMT, 63 stars)
18. learch (C++, 38 stars)
19. llmprivacy (Python, 36 stars)
20. soltix - SOLTIX: Scalable automated framework for testing Solidity compilers (Java, 33 stars)
21. ChatProtect - Code for the paper "Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation" (Python, 33 stars)
22. probabilistic-forecasts-attacks (Python, 30 stars)
23. colt - Convex Layerwise Adversarial Training (COLT) (Python, 29 stars)
24. SafeCoder (Python, 27 stars)
25. lcifr - Learning Certified Individually Fair Representations (Python, 24 stars)
26. adaptive-auto-attack (Python, 23 stars)
27. dp-sniper - A machine-learning-based tool for discovering differential privacy violations in black-box algorithms (Python, 23 stars)
28. verx-benchmarks (20 stars)
29. lamp - LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) (Python, 20 stars)
30. dp-finder - Differential Privacy Testing System (Python, 19 stars)
31. bayonet - Probabilistic Computer Network Analysis (D, 18 stars)
32. phoenix - Private and Reliable Neural Network Inference (CCS '22) (C++, 18 stars)
33. fnf (Python, 16 stars)
34. EventRacer - A race detection tool for event-driven applications (C++, 16 stars)
35. learning-real-bug-detector (Python, 16 stars)
36. lassi - Latent Space Smoothing for Individually Fair Representations (ECCV 2022) (Python, 15 stars)
37. deepg - Certifying Geometric Robustness of Neural Networks (Python, 15 stars)
38. vscode-silq (TypeScript, 15 stars)
39. zapper (Rust, 15 stars)
40. robust-code - Adversarial Robustness for Code (Python, 13 stars)
41. watermark-stealing - Watermark Stealing in Large Language Models (ICML '24) (Python, 13 stars)
42. guiding-synthesizers - Guiding Program Synthesis by Learning to Generate Examples (Python, 12 stars)
43. learning-to-configure-networks - [NeurIPS '22] Learning to Configure Computer Networks with Neural Algorithmic Reasoning (12 stars)
44. SABR (Python, 11 stars)
45. bayes-framework-leakage (Python, 11 stars)
46. smoothing-ensembles - [ICLR 2022] Boosting Randomized Smoothing with Variance Reduced Classifiers (Python, 11 stars)
47. UniversalCertificationTheory - Universal Approximation with Certified Networks (Python, 10 stars)
48. llm-quantization-attack (Python, 10 stars)
49. eth-sri.github.io - SRI Group Website (HTML, 9 stars)
50. ModelsPHOG - Synthesized models for PHOG to make the results reproducible by the research community (C++, 9 stars)
51. segmentation-smoothing - Provable robustness for segmentation tasks (9 stars)
52. 3dcertify - 3DCertify is the first verifier to certify robustness of point cloud models against semantic transformations and point perturbations (Python, 8 stars)
53. prover - Verifier for Deep Neural Network Audio Processing (Python, 7 stars)
54. proof-sharing - CAV '22 paper on speeding up Neural Network Verification (Python, 7 stars)
55. mn-bab - [ICLR 2022] Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound (Python, 7 stars)
56. ACE (Python, 7 stars)
57. DFENCE - Dynamic Analysis and Synthesis System for Relaxed Memory Models (C++, 6 stars)
58. Delta-Siege (Python, 6 stars)
59. automated-error-analysis - Automated Classification of Model Errors on ImageNet (NeurIPS 2023) (Jupyter Notebook, 6 stars)
60. R4 (C++, 5 stars)
61. drs - [NeurIPS 2022] (De-)Randomized Smoothing for Decision Stump Ensembles (Terra, 4 stars)
62. paradox - On the Paradox of Certified Training (TMLR 10/2022) (Python, 4 stars)
63. fare - FARE: Provably Fair Representation Learning with Practical Certificates (ICML '23) (Shell, 4 stars)
64. Unqomp - Automated Uncomputation for Quantum Programs (Python, 4 stars)
65. fairness-feedback-nlp - Human-Guided Fair Classification for NLP (ICLR 2023, Spotlight) (Python, 4 stars)
66. Spire (C#, 3 stars)
67. TAPS (Python, 3 stars)
68. inferui - InferUI: Robust Relational Layouts Synthesis from Examples for Android (C++, 3 stars)
69. abstraqt (OpenQASM, 3 stars)
70. transformation-smoothing - Randomized Smoothing for Parametric (Image) Transformations (Python, 3 stars)
71. cuts (Python, 3 stars)
72. ACES - [SRML@ICLR 2022] Robust and Accurate: Compositional Architectures for Randomized Smoothing (Python, 2 stars)
73. synthetiq (OpenQASM, 2 stars)
74. DeepT (Python, 2 stars)
75. ncm - Trace Based Supervision for Neural Architectures (2 stars)
76. malicious-contamination (Python, 2 stars)
77. CRAFT (Python, 1 star)
78. fedavg_leakage (Python, 1 star)
79. Reqomp (Python, 1 star)
80. ibp-propagation-tightness (Python, 1 star)
81. tableak - TabLeak: Tabular Data Leakage in Federated Learning (1 star)
82. domino (1 star)