
Lord of Large Language Models (LoLLMs)


Lord of Large Language Models (LoLLMs) Server is a text generation server based on large language models. It provides a Flask-based API for generating text using various pre-trained language models. This server is designed to be easy to install and use, allowing developers to integrate powerful text generation capabilities into their applications.

Features

  • Fully integrated library with access to bindings, personalities and helper tools.
  • Generate text using large language models.
  • Supports multiple personalities for generating text with different styles and tones.
  • Real-time text generation with WebSocket-based communication.
  • RESTful API for listing personalities and adding new personalities.
  • Easy integration with various applications and frameworks.
  • Ability to send files to personalities.
  • Ability to run on multiple nodes and provide a generation service to many clients at once.
  • Data stays local even in the remote version. Only the generation traffic is exchanged with the host node; the logs, data, and discussion history are kept in your local discussion folder.

Installation

You can install LoLLMs using pip, the Python package manager. Open your terminal or command prompt and run the following command:

pip install --upgrade lollms

Or, if you want the latest version, install directly from the Git repository:

pip install --upgrade git+https://github.com/ParisNeo/lollms.git

GPU support

If you want CUDA acceleration, either install the CUDA toolkit directly or use conda to set everything up:

conda create --name lollms python=3.10

Activate the environment:

conda activate lollms

Install cudatoolkit:

conda install -c anaconda cudatoolkit

Install lollms:

pip install --upgrade lollms

Now you are ready.

To simply configure your environment run the settings app:

lollms-settings

The tool is intuitive and will guide you through the configuration process.

The first time you run it, you will be prompted to select a binding.

Once the binding is selected, you have to install at least one model. You have two options:

1. Install from the internet: just give the link to a model on Hugging Face. For example, if you select the default llamacpp python bindings (7), you can install this model:

https://huggingface.co/TheBloke/airoboros-7b-gpt4-GGML/resolve/main/airoboros-7b-gpt4.ggmlv3.q4_1.bin

2. Install from a local drive: just give the path to a model on your PC. The model is not copied; only a reference to it is created. This is useful if you use multiple clients, as it lets you share your models with other tools.
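The reference mechanism can be pictured like this (a hypothetical sketch, not LoLLMs' actual implementation): instead of copying the weights, a small reference file records the path of the original model.

```python
from pathlib import Path

def reference_model(model_path: str, models_dir: str) -> Path:
    """Record a pointer to an existing model file instead of copying it.

    Hypothetical sketch: the real LoLLMs reference format may differ.
    """
    model = Path(model_path)
    ref = Path(models_dir) / (model.name + ".reference")
    ref.parent.mkdir(parents=True, exist_ok=True)
    # Store the absolute path so any client reading the reference
    # can locate the original weights without a copy.
    ref.write_text(str(model.resolve()))
    return ref
```

The point of the design is that several tools can resolve the same reference file to one set of weights on disk, so large models are stored only once.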

Now you are ready to use the server.

Library example

Here is the smallest possible example, giving you the full power of the tool with almost no code:

from lollms.console import Conversation 

cv = Conversation(None)
cv.start_conversation()

Now you can override the start_conversation method to do whatever you want:

from lollms.console import Conversation 

class MyConversation(Conversation):
  def __init__(self, cfg=None):
    super().__init__(cfg, show_welcome_message=False)

  def start_conversation(self):
    prompt = "Once upon a time"
    def callback(text, type=None):
        print(text, end="", flush=True)
        return True
    print(prompt, end="", flush=True)
    output = self.safe_generate(prompt, callback=callback)

if __name__ == '__main__':
  cv = MyConversation()
  cv.start_conversation()

Or, if you prefer, here is a complete conversation tool written in just a few lines:

from lollms.console import Conversation 

class MyConversation(Conversation):
  def __init__(self, cfg=None):
    super().__init__(cfg, show_welcome_message=False)

  def start_conversation(self):
    full_discussion=""
    while True:
      prompt = input("You: ")
      if prompt=="exit":
        return
      if prompt=="menu":
        self.menu.main_menu()
        continue  # show the menu, then go back to the prompt instead of sending "menu" to the model
      full_discussion += self.personality.user_message_prefix+prompt+self.personality.link_text
      full_discussion += self.personality.ai_message_prefix
      def callback(text, type=None):
          print(text, end="", flush=True)
          return True
      print(self.personality.name+": ",end="",flush=True)
      output = self.safe_generate(full_discussion, callback=callback)
      full_discussion += output.strip()+self.personality.link_text
      print()

if __name__ == '__main__':
  cv = MyConversation()
  cv.start_conversation()

Here we use the safe_generate method, which does all the context cropping for you, so you can chat forever and never run out of context.
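The idea behind that cropping can be sketched as follows (a hypothetical illustration; the function name and token accounting are not LoLLMs' actual internals): keep only the most recent part of the discussion so that the prompt plus the answer always fit the model's context window.

```python
def crop_to_context(tokens, max_context, reserved_for_answer=256):
    """Keep only the most recent tokens so prompt + answer fit in the window.

    Hypothetical sketch of what a safe_generate-style helper does internally.
    """
    budget = max_context - reserved_for_answer
    if budget <= 0:
        raise ValueError("reserved_for_answer exceeds the context size")
    # Drop the oldest tokens first: the tail of the discussion is
    # the most relevant part for continuing the conversation.
    return tokens[-budget:] if len(tokens) > budget else tokens
```

For example, with a 4096-token context and 256 tokens reserved for the answer, a 5000-token discussion is cropped to its last 3840 tokens.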

Socket IO Server Usage

Once installed, you can start the LoLLMs Server using the lollms-server command followed by the desired parameters.

lollms-server --host <host> --port <port> --config <config_file> --bindings_path <bindings_path> --personalities_path <personalities_path> --models_path <models_path> --binding_name <binding_name> --model_name <model_name> --personality_full_name <personality_full_name>

Parameters

  • --host: The hostname or IP address to bind the server (default: localhost).
  • --port: The port number to run the server (default: 9600).
  • --config: Path to the configuration file (default: None).
  • --bindings_path: The path to the Bindings folder (default: "./bindings_zoo").
  • --personalities_path: The path to the personalities folder (default: "./personalities_zoo").
  • --models_path: The path to the models folder (default: "./models").
  • --binding_name: The default binding to be used (default: "llama_cpp_official").
  • --model_name: The default model name (default: "Manticore-13B.ggmlv3.q4_0.bin").
  • --personality_full_name: The full name of the default personality (default: "personality").

Examples

Start the server with default settings:

lollms-server

Start the server on a specific host and port:

lollms-server --host 0.0.0.0 --port 5000

API Endpoints

WebSocket Events

  • connect: Triggered when a client connects to the server.
  • disconnect: Triggered when a client disconnects from the server.
  • list_personalities: List all available personalities.
  • add_personality: Add a new personality to the server.
  • generate_text: Generate text based on the provided prompt and selected personality.

RESTful API

  • GET /personalities: List all available personalities.
  • POST /personalities: Add a new personality to the server.

Here are examples of how to communicate with the LoLLMs Server using JavaScript and Python.

JavaScript Example

// Establish a WebSocket connection with the server
const socket = io.connect('http://localhost:9600');

// Event: When connected to the server
socket.on('connect', () => {
  console.log('Connected to the server');

  // Request the list of available personalities
  socket.emit('list_personalities');
});

// Event: Receive the list of personalities from the server
socket.on('personalities_list', (data) => {
  const personalities = data.personalities;
  console.log('Available Personalities:', personalities);

  // Select a personality and send a text generation request
  const selectedPersonality = personalities[0];
  const prompt = 'Once upon a time...';
  socket.emit('generate_text', { personality: selectedPersonality, prompt: prompt });
});

// Event: Receive the generated text from the server
socket.on('text_generated', (data) => {
  const generatedText = data.text;
  console.log('Generated Text:', generatedText);

  // Do something with the generated text
});

// Event: When disconnected from the server
socket.on('disconnect', () => {
  console.log('Disconnected from the server');
});

Python Example

import socketio

# Create a SocketIO client
sio = socketio.Client()

# Event: When connected to the server
@sio.on('connect')
def on_connect():
    print('Connected to the server')

    # Request the list of available personalities
    sio.emit('list_personalities')

# Event: Receive the list of personalities from the server
@sio.on('personalities_list')
def on_personalities_list(data):
    personalities = data['personalities']
    print('Available Personalities:', personalities)

    # Select a personality and send a text generation request
    selected_personality = personalities[0]
    prompt = 'Once upon a time...'
    sio.emit('generate_text', {'personality': selected_personality, 'prompt': prompt})

# Event: Receive the generated text from the server
@sio.on('text_generated')
def on_text_generated(data):
    generated_text = data['text']
    print('Generated Text:', generated_text)

    # Do something with the generated text

# Event: When disconnected from the server
@sio.on('disconnect')
def on_disconnect():
    print('Disconnected from the server')

# Connect to the server
sio.connect('http://localhost:9600')

# Keep the client running
sio.wait()

Make sure the necessary dependencies are installed: the JavaScript example needs the socket.io-client package, and the Python example needs the python-socketio package.

Contributing

Contributions to the LoLLMs Server project are welcome and appreciated. If you would like to contribute, please follow the guidelines outlined in the CONTRIBUTING.md file.

License

LoLLMs Server is licensed under the Apache 2.0 License. See the LICENSE file for more information.

Repository

The source code for LoLLMs Server can be found on GitHub.
