• Stars: 23,519
• Rank: 963 (Top 0.02%)
• Language: Python
• License: MIT License
• Created: over 1 year ago
• Updated: 7 months ago


Repository Details

JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf

JARVIS

This project is under construction and we will have all the code ready soon.


Updates

  • [2023.04.16] Jarvis now supports the OpenAI service on the Azure platform and the GPT-4 model.
  • [2023.04.06] We added the Gradio demo and built the web API for /tasks and /results in server mode.
    • The Gradio demo is now hosted on Hugging Face Space. (Built with inference_mode=hybrid and local_deployment=standard)
    • The web API endpoints /tasks and /results provide access to intermediate results for Stage #1 (task planning) and Stages #1-3 (model selection with execution results). See here.
  • [2023.04.03] We added the CLI mode and provided parameters for configuring the scale of local endpoints.
    • You can enjoy a lightweight experience with Jarvis without deploying the models locally. See here.
    • Just run python awesome_chat.py --config configs/config.lite.yaml to experience it.
  • [2023.04.01] We released an updated version of the code.

Overview

Language serves as an interface for LLMs to connect numerous AI models for solving complicated AI tasks!

See our paper: HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace, by Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu and Yueting Zhuang (the first two authors contributed equally).

We introduce a collaborative system that consists of an LLM as the controller and numerous expert models (from the Hugging Face Hub) as collaborative executors. The workflow of our system consists of four stages:

  • Task Planning: ChatGPT analyzes the user's request to understand their intention and disassembles it into possible solvable tasks.
  • Model Selection: To solve the planned tasks, ChatGPT selects expert models hosted on Hugging Face based on their descriptions.
  • Task Execution: Each selected model is invoked and executed, and its results are returned to ChatGPT.
  • Response Generation: Finally, ChatGPT integrates the predictions of all models and generates a response.
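
The sketch below outlines how these four stages fit together in code. It is a structural sketch only: the helper names (plan_tasks, select_model, execute_task, generate_response) are illustrative assumptions, not the actual functions in awesome_chat.py.

# A minimal structural sketch of the four-stage HuggingGPT workflow.
# All helper names below are illustrative assumptions, not the real API of awesome_chat.py.
from typing import Any, Dict, List

def plan_tasks(llm: Any, user_request: str) -> List[Dict]:
    """Stage #1: ask the LLM to decompose the request into structured tasks."""
    # e.g. [{"task": "image-to-text", "id": 0, "dep": [-1], "args": {...}}, ...]
    ...

def select_model(llm: Any, task: Dict) -> str:
    """Stage #2: pick an expert model hosted on Hugging Face from its description."""
    ...

def execute_task(model_id: str, task: Dict, results: Dict) -> Any:
    """Stage #3: invoke the chosen model (locally or via an Inference Endpoint)."""
    ...

def generate_response(llm: Any, user_request: str, results: Dict) -> str:
    """Stage #4: let the LLM integrate all predictions into a final answer."""
    ...

def hugginggpt(llm: Any, user_request: str) -> str:
    results: Dict[int, Any] = {}
    for task in plan_tasks(llm, user_request):                        # Stage #1
        model_id = select_model(llm, task)                            # Stage #2
        results[task["id"]] = execute_task(model_id, task, results)   # Stage #3
    return generate_response(llm, user_request, results)              # Stage #4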

System Requirements

Default (Recommended)

For configs/config.default.yaml:

  • Ubuntu 16.04 LTS
  • VRAM >= 24GB
  • RAM > 12GB (minimal), 16GB (standard), 80GB (full)
  • Disk > 284GB
    • 42GB for damo-vilab/text-to-video-ms-1.7b
    • 126GB for ControlNet
    • 66GB for stable-diffusion-v1-5
    • 50GB for others

Minimum (Lite)

For configs/config.lite.yaml:

  • Ubuntu 16.04 LTS
  • Nothing else

The configuration configs/config.lite.yaml does not require any expert models to be downloaded and deployed locally. However, it means that Jarvis is restricted to models that run stably on the Hugging Face Inference Endpoints.

Quick Start

First, replace openai.key and huggingface.token in server/configs/config.default.yaml with your personal OpenAI key and Hugging Face token, or put them in the environment variables OPENAI_API_KEY and HUGGINGFACE_ACCESS_TOKEN respectively. Then run the following commands:

For Server:

# setup env
cd server
conda create -n jarvis python=3.8
conda activate jarvis
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt

# download models. Make sure that `git-lfs` is installed.
cd models
bash download.sh # required when `inference_mode` is `local` or `hybrid`. 

# run server
cd ..
python models_server.py --config configs/config.default.yaml # required when `inference_mode` is `local` or `hybrid`
python awesome_chat.py --config configs/config.default.yaml --mode server # for text-davinci-003

Now you can access Jarvis's services via the Web API.

  • /hugginggpt --method POST, access the full service.
  • /tasks --method POST, access intermediate results for Stage #1.
  • /results --method POST, access intermediate results for Stage #1-3.

For example:

# request
curl --location 'http://localhost:8004/tasks' \
--header 'Content-Type: application/json' \
--data '{
    "messages": [
        {
            "role": "user",
            "content": "based on pose of /examples/d.jpg and content of /examples/e.jpg, please show me a new image"
        }
    ]
}'

# response
[{"args":{"image":"/examples/d.jpg"},"dep":[-1],"id":0,"task":"openpose-control"},{"args":{"image":"/examples/e.jpg"},"dep":[-1],"id":1,"task":"image-to-text"},{"args":{"image":"<GENERATED>-0","text":"<GENERATED>-1"},"dep":[1,0],"id":2,"task":"openpose-text-to-image"}]

For Web:

We provide a user-friendly web page. After starting awesome_chat.py in server mode, you can run the following commands to communicate with Jarvis in your browser:

  • You need to install Node.js and npm first.
  • [ IMPORTANT ] If you are running the web client on another machine, set HUGGINGGPT_BASE_URL in web/src/config/index.ts to http://{LAN_IP_of_the_server}:{port}/.
  • If you want to use the video generation feature, you need to compile ffmpeg manually with H.264 support.
  • You can switch to ChatGPT by double-clicking the settings icon.
cd web
npm install
npm run dev
# Optional: Install ffmpeg
# This command needs to run without errors.
LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/ffmpeg -i input.mp4 -vcodec libx264 output.mp4

For Gradio:

The Gradio demo is now hosted on Hugging Face Space. You can also run the following commands to start the demo locally:

python models_server.py --config configs/config.gradio.yaml
python run_gradio_demo.py --config configs/config.gradio.yaml

# or run the HF Space as a Docker image (built with `inference_mode=hybrid` and `local_deployment=standard`)
docker run -it -p 7860:7860 --platform=linux/amd64 registry.hf.space/microsoft-hugginggpt:latest python app.py

For CLI:

You can also run Jarvis more easily in CLI mode:

cd server
python awesome_chat.py --config configs/config.default.yaml --mode cli

Examples of CLI mode:

Welcome to Jarvis! A collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors. Jarvis can plan tasks, schedule Hugging Face models, generate friendly responses based on your requests, and help you with many things. Please enter your request (`exit` to exit).
[ User ]: Given a collection of image A: /examples/a.jpg, B: /examples/b.jpg, C: /examples/c.jpg, please tell me how many zebras in these picture?
[ Jarvis ]: Based on your request, I have used several models to process the images you provided. For the image A: /examples/a.jpg, I used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text "a cat sitting on a window sill looking out". Then I used the object-detection model facebook/detr-resnet-50 to generate the image with predicted box '/images/a3dc.jpg', which contains two objects with labels 'potted plant' and 'cat'. For the image B: /examples/b.jpg, I used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text "two zebras standing in a field of dry grass". Then I used the object-detection model facebook/detr-resnet-50 to generate the image with predicted box '/images/f5df.jpg', which contains three objects with labels 'zebra'. For the image C: /examples/c.jpg, I used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text "a zebra standing in a field of tall grass". Then I used the object-detection model facebook/detr-resnet-50 to generate the image with predicted box '/images/69c7.jpg', which contains one object with label 'zebra'. Finally, I used the canny-control model canny-control to generate the image '/images/7e18.png' based on the image /examples/f.jpg. Therefore, there are four zebras in these pictures. Is there anything else I can help you with?
[ User ]: Please answer all the named entities in the sentence: Iron Man is a superhero appearing in American comic books published by Marvel Comics. The character was co-created by writer and editor Stan Lee, developed by scripter Larry Lieber, and designed by artists Don Heck and Jack Kirby.
[ Jarvis ]: Sure, I understand your request. Based on the inference results of the models, I will answer your request step by step. Firstly, I used the token-classification model dslim/bert-base-NER to recognize the named entities in the sentence. The inference result is that the named entities are Iron Man, Stan Lee, Larry Lieber, Don Heck and Jack Kirby. Then, I used the question-answering model deepset/roberta-base-squad2 to answer your request. The inference result is that there is no output since the context cannot be empty. Therefore, I cannot make it. I hope this answer your request. Is there anything else I can help you with?
[ User ]: Please generate a canny image based on /examples/f.jpg
[ Jarvis ]: Sure. I have generated a canny image based on /examples/f.jpg. To do this, I first used the image-to-text model nlpconnect/vit-gpt2-image-captioning to generate the text description of the image, which is "a herd of giraffes and zebras grazing in a field". Then I used the canny-control model to generate the canny image of the original image, which is stored in the path "/images/1ea2.png". Finally, I used the canny-text-to-image model lllyasviel/sd-controlnet-canny to generate the canny image based on the text description and the canny image, which is stored in the path "/images/ba66.png". Do you need anything else?

Configuration

The server-side configuration file is server/configs/config.default.yaml, and some parameters are presented as follows:

  • model: LLM, currently supports text-davinci-003. We are working on integrating more open-source LLMs.
  • inference_mode: mode of inference endpoints
    • local: only use local inference endpoints
    • huggingface: only use the Hugging Face Inference Endpoints (no local inference endpoints required)
    • hybrid: use both local and Hugging Face inference endpoints
  • local_deployment: scale of locally deployed models, which takes effect under the local or hybrid inference mode:
    • minimal (RAM > 12GB, ControlNet only)
    • standard (RAM > 16GB, ControlNet + Standard Pipelines)
    • full (RAM > 42GB, all registered models)

On a personal laptop, we recommend the configuration of inference_mode: hybrid and local_deployment: minimal. But the available models under this setting may be limited due to the instability of remote Hugging Face Inference Endpoints.
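
As a quick sanity check, the snippet below shows one way to load and inspect these settings. It is a hedged sketch: it assumes PyYAML is installed and that the keys in configs/config.default.yaml are named exactly as listed above.

# A small sketch for inspecting the server configuration.
# Assumes PyYAML is installed and the key names match those documented above.
import yaml

with open("server/configs/config.default.yaml") as f:
    config = yaml.safe_load(f)

print("model:", config.get("model"))                        # e.g. text-davinci-003
print("inference_mode:", config.get("inference_mode"))      # local / huggingface / hybrid
print("local_deployment:", config.get("local_deployment"))  # minimal / standard / full

# On a personal laptop, the recommended setting is:
#   inference_mode: hybrid
#   local_deployment: minimal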

NVIDIA Jetson Embedded Device Support

A Dockerfile is included that provides experimental support for NVIDIA Jetson embedded devices. The image provides accelerated ffmpeg, PyTorch, torchaudio, and torchvision dependencies. To build the Docker image, ensure that the default Docker runtime is set to 'nvidia'. A pre-built image is provided at https://hub.docker.com/r/toolboc/nv-jarvis.

# Build the docker image
docker build --pull --rm -f "Dockerfile.jetson" -t toolboc/nv-jarvis:r35.2.1 .

Due to memory requirements, JARVIS is required to run on Jetson AGX Orin family devices (a 64GB on-board RAM device is preferred), with config options set to:

  • inference_mode: local
  • local_deployment: standard

Models and configs are recommended to be provided through a volume mount from the host to the container as shown in the docker run step below. It is possible to uncomment the # Download local models section of the Dockerfile to build a container with models included.

Start the model server, awesome_chat.py, and the web app on Jetson AGX Orin:

# run the container which will automatically start the model server
docker run --name jarvis --net=host --gpus all -v ~/jarvis/configs:/app/server/configs -v ~/src/JARVIS/server/models:/app/server/models toolboc/nv-jarvis:r35.2.1

# (wait for model server to complete initialization)

# start awesome_chat.py 
docker exec jarvis python3 awesome_chat.py --config configs/config.default.yaml --mode server

# start the web application (the application will be accessible at http://localhost:9999)
docker exec jarvis npm run dev --prefix=/app/web

Screenshots

Citation

If you find this work useful in your method, you can cite the paper as follows:

@article{shen2023hugginggpt,
    title   = {HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace},
    author  = {Shen, Yongliang and Song, Kaitao and Tan, Xu and Li, Dongsheng and Lu, Weiming and Zhuang, Yueting},
    journal = {arXiv preprint arXiv:2303.17580},
    year    = {2023}
}

Acknowledgement
