Local LLM function calling

A tool for generating function arguments and choosing what function to call with local LLMs

Overview

The local-llm-function-calling project constrains the generation of Hugging Face text generation models by enforcing a JSON schema, and facilitates the formulation of prompts for function calls, similar to OpenAI's function calling feature. Unlike OpenAI's, however, it actually enforces the schema.

The project provides a Generator class that generates text while ensuring compliance with the provided prompt and JSON schema, giving users convenient control over the output of text generation models. It uses my own json-schema-enforcer project as the schema enforcer.

Features

  • Constrains the generation of Hugging Face text generation models to follow a JSON schema.
  • Provides a mechanism for formulating prompts for function calls, enabling precise data extraction and formatting.
  • Simplifies the text generation process through a user-friendly Generator class.

Installation

To install the local-llm-function-calling library, use the following command:

pip install local-llm-function-calling

Usage

Here's a simple example demonstrating how to use local-llm-function-calling:

from local_llm_function_calling import Generator

# Define the functions the model may call
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                    "maxLength": 20,
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

# Initialize the generator with the Hugging Face model and our functions
generator = Generator.hf(functions, "gpt2")

# Generate text using a prompt
function_call = generator.generate("What is the weather like today in Brooklyn?")
print(function_call)

Custom constraints

You don't have to use my prompting methods; you can craft your own prompts and your own constraints, and still benefit from the constrained generation:

from local_llm_function_calling import Constrainer
from local_llm_function_calling.model.huggingface import HuggingfaceModel

# Define your own constraint
# (you can also use local_llm_function_calling.JsonSchemaConstraint)
def lowercase_sentence_constraint(text: str) -> tuple[bool, bool]:
    # Has to return (is_valid, is_complete)
    return (text.islower(), text.endswith("."))

# Create the constrainer
constrainer = Constrainer(HuggingfaceModel("gpt2"))

# Generate your text
generated = constrainer.generate("Prefix.\n", lowercase_sentence_constraint, max_len=10)
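As the comment above notes, the built-in JsonSchemaConstraint can be used in place of a hand-written constraint. A minimal sketch, assuming JsonSchemaConstraint is constructed directly from a JSON schema dict (the schema itself is illustrative):

from local_llm_function_calling import Constrainer, JsonSchemaConstraint
from local_llm_function_calling.model.huggingface import HuggingfaceModel

# Illustrative schema; assumes JsonSchemaConstraint takes a schema dict
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string", "maxLength": 20},
    },
    "required": ["city"],
}

constrainer = Constrainer(HuggingfaceModel("gpt2"))
generated = constrainer.generate(
    "Reply with JSON.\n", JsonSchemaConstraint(schema), max_len=50
)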

Extending and Customizing

To extend or customize the prompt structure, you can subclass the TextPrompter class. This allows you to modify the prompt generation process according to your specific requirements.
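A minimal sketch of such a subclass; the method name and signature below are assumptions for illustration, so check the prompter module in the library source for the actual interface to override:

from local_llm_function_calling.prompter import TextPrompter

class MarkdownPrompter(TextPrompter):
    # NOTE: the method name and signature here are assumptions for
    # illustration; override whatever method TextPrompter actually
    # defines for building the prompt text.
    def prompt(self, user_prompt, functions, function_to_call=None):
        # List each function as a bullet the model can read
        listing = "\n".join(
            f"- {f['name']}: {f.get('description', '')}" for f in functions
        )
        return (
            "You can call the following functions:\n"
            f"{listing}\n\n"
            f"User request: {user_prompt}\n"
            "Function call (JSON):\n"
        )

How a custom prompter is wired into the Generator is likewise something to confirm against the documentation.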