  • Stars: 1,299
  • Rank: 36,217 (Top 0.8%)
  • Language: Python
  • License: Other
  • Created: over 1 year ago
  • Updated: about 2 months ago

Repository Details

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.

Roboflow Inference banner

🎬 pip install inference

Roboflow Inference is the easiest way to use and deploy computer vision models. Inference supports running object detection, classification, instance segmentation, and even foundation models (like CLIP and SAM). You can train and deploy your own custom model or use one of the 50,000+ fine-tuned models shared by the community.

There are three primary inference interfaces:

  • A Python-native package (pip install inference, shown below)
  • A self-hosted inference server you can run with Docker (see "Use the Inference Server")
  • The fully-managed hosted API at https://detect.roboflow.com

🏃 Getting Started

Get up and running with inference on your local machine in 3 minutes.

pip install inference # or inference-gpu if you have CUDA

Set up your Roboflow private API key by exporting a ROBOFLOW_API_KEY environment variable or adding it to a .env file.

export ROBOFLOW_API_KEY=your_key_here
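
If you prefer the .env route, the file only needs that one variable (a minimal sketch; place it where your script or server is run from):

# .env
ROBOFLOW_API_KEY=your_key_here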

Run an open-source Rock, Paper, Scissors model on your webcam stream:

import inference

inference.Stream(
    source="webcam", # or rtsp stream or camera id
    model="rock-paper-scissors-sxsw/11", # from Universe

    on_prediction=lambda predictions, image: (
        print(predictions) # now hold up your hand: 🪨 📄 ✂️
    )
)

Note

Currently, the stream interface only supports object detection.

Now let's extend the example to use Supervision to visualize the predictions and display them on screen with OpenCV:

import cv2
import inference
import supervision as sv

annotator = sv.BoxAnnotator()

inference.Stream(
    source="webcam", # or rtsp stream or camera id
    model="rock-paper-scissors-sxsw/11", # from Universe

    output_channel_order="BGR",
    use_main_thread=True, # for opencv display
    
    on_prediction=lambda predictions, image: (
        print(predictions), # now hold up your hand: 🪨 📄 ✂️
        
        cv2.imshow(
            "Prediction", 
            annotator.annotate(
                scene=image, 
                detections=sv.Detections.from_roboflow(predictions)
            )
        ),
        cv2.waitKey(1)
    )
)

👩‍🏫 More Examples

The /examples directory contains code samples for working with and extending inference, including using foundation models like CLIP, HTTP and UDP clients, and an insights dashboard, along with community examples (PRs welcome)!

🎥 Inference in action

Check out Inference running on a video of a football game:

inference.mp4

💻 Why Inference?

Inference provides a scalable way to run and manage model inference for your vision projects.

Inference is composed of:

  • Thousands of pre-trained community models that you can use as a starting point.

  • Foundation models like CLIP, SAM, and OCR.

  • A tight integration with Supervision.

  • An HTTP server, so you don't have to reimplement things like image processing and prediction visualization in every project; you can scale your GPU infrastructure independently of your application code and access your models from whatever language your app is written in.

  • Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code (see the sketch after this list).

  • A model registry, so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights.

  • Active Learning integrations, so you can collect more images of edge cases to improve your dataset & model the more it sees in the wild.

  • Seamless interoperability with Roboflow for creating datasets, training & deploying custom models.

And more!
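
To illustrate the standardized-API point above: the same loading and inference calls work regardless of which architecture backs the weights; only the model id changes. A minimal sketch using the get_roboflow_model helper shown later in this README (assumes ROBOFLOW_API_KEY is set; the model ids are the sample models used elsewhere in this document):

from inference.models.utils import get_roboflow_model

for model_id in ["soccer-players-5fuqs/1", "rock-paper-scissors-sxsw/11"]:
    # same call pattern regardless of the underlying architecture
    model = get_roboflow_model(model_id=model_id)
    results = model.infer(
        image="https://media.roboflow.com/inference/soccer.jpg",
        confidence=0.5
    )
    print(model_id, results)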

📌 Use the Inference Server

You can learn more about how to build, pull, and run the Roboflow Inference Docker images in our documentation.

  • Run on x86 CPU:
docker run -it --net=host roboflow/roboflow-inference-server-cpu:latest
  • Run on NVIDIA GPU:
docker run -it --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
👉 more docker run options
  • Run on arm64 CPU:
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
  • Run on NVIDIA Jetson with JetPack 4.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
  • Run on NVIDIA Jetson with JetPack 5.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest

Extras:

Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference.

extra   description
clip    Ability to use the core CLIP model (by OpenAI)
gaze    Ability to use the core Gaze model
http    Ability to run the HTTP interface
sam     Ability to run the core Segment Anything model (by Meta AI)

Note: Both CLIP and Segment Anything require PyTorch to run. PyTorch is included in their respective extras; however, PyTorch installs can be highly environment dependent. See the official PyTorch install page for instructions specific to your environment.

Example install with CLIP dependencies:

pip install "inference[clip]"

Inference Client

To consume predictions from the inference server in Python, you can use the inference-sdk package.

pip install inference-sdk

from inference_sdk import InferenceHTTPClient

image_url = "https://media.roboflow.com/inference/soccer.jpg"

# Replace ROBOFLOW_API_KEY with your Roboflow API Key
client = InferenceHTTPClient(
    api_url="http://localhost:9001", # or https://detect.roboflow.com for Hosted API
    api_key="ROBOFLOW_API_KEY"
)
with client.use_model("soccer-players-5fuqs/1"):
    predictions = client.infer(image_url)

print(predictions)
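
The use_model context manager configures a default model for the calls inside it; if you prefer, the model id can also be passed directly on the call (a short sketch based on inference-sdk usage; verify the keyword against your installed version):

predictions = client.infer(image_url, model_id="soccer-players-5fuqs/1")
print(predictions)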

Visit our documentation to discover the capabilities of the inference-sdk library.

Single Image Inference

After installing inference via pip, you can run a simple inference on a single image (vs the video stream example above) by instantiating a model and using the infer method (don't forget to set up your ROBOFLOW_API_KEY environment variable or .env file):

from inference.models.utils import get_roboflow_model

model = get_roboflow_model(
    model_id="soccer-players-5fuqs/1"
)

# you can also infer on local images by passing a file path,
# a PIL image, or a numpy array
results = model.infer(
  image="https://media.roboflow.com/inference/soccer.jpg",
  confidence=0.5,
  iou_threshold=0.5
)

print(results)
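
As the comment in the example notes, infer also accepts local inputs such as a file path, PIL image, or numpy array. A minimal sketch with an OpenCV-loaded frame (the file name image.jpg is illustrative; use any local image):

import cv2
from inference.models.utils import get_roboflow_model

model = get_roboflow_model(model_id="soccer-players-5fuqs/1")

# cv2.imread returns a numpy array (or None if the file is missing)
frame = cv2.imread("image.jpg")

results = model.infer(
    image=frame,
    confidence=0.5,
    iou_threshold=0.5
)

print(results)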

Getting CLIP Embeddings

You can run inference with OpenAI's CLIP model using:

from inference.models import Clip

image_url = "https://media.roboflow.com/inference/soccer.jpg"

model = Clip()
embeddings = model.embed_image(image_url)

print(embeddings)
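
A common next step is comparing image and text embeddings in CLIP's shared embedding space. A minimal sketch, assuming the Clip wrapper also exposes an embed_text method (check the CLIP model's reference docs for the exact API and return types):

import numpy as np
from inference.models import Clip

model = Clip()

image_embedding = np.asarray(model.embed_image("https://media.roboflow.com/inference/soccer.jpg")).flatten()
text_embedding = np.asarray(model.embed_text("a soccer match")).flatten()  # embed_text assumed; see docs

# cosine similarity between the image and the text prompt
similarity = float(
    np.dot(image_embedding, text_embedding)
    / (np.linalg.norm(image_embedding) * np.linalg.norm(text_embedding))
)
print(similarity)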

Using SAM

You can run inference with Meta's Segment Anything model using:

from inference.models import SegmentAnything

image_url = "https://media.roboflow.com/inference/soccer.jpg"

model = SegmentAnything()
embeddings = model.embed_image(image_url)

print(embeddings)

🏗️ inference Process

To standardize the inference process throughout all our models, Roboflow Inference has a structure for processing inference requests. The specifics can be found on each model's respective page, but overall it works like this for most models:

inference structure

✅ Supported Models

Load from Roboflow

You can use models hosted on Roboflow with the following architectures through Inference:

  • YOLOv5 Object Detection
  • YOLOv5 Instance Segmentation
  • YOLOv8 Object Detection
  • YOLOv8 Classification
  • YOLOv8 Segmentation
  • YOLACT Segmentation
  • ViT Classification

Core Models

Core Models are foundation models and models that have not been fine-tuned on a specific dataset.

The following core models are supported:

  1. CLIP
  2. L2CS (Gaze Detection)
  3. Segment Anything (SAM)

📝 License

The Roboflow Inference code is distributed under an Apache 2.0 license. The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.

model license
inference/models/clip MIT
inference/models/gaze MIT, Apache 2.0
inference/models/sam Apache 2.0
inference/models/vit Apache 2.0
inference/models/yolact MIT
inference/models/yolov5 AGPL-3.0
inference/models/yolov7 GPL-3.0
inference/models/yolov8 AGPL-3.0

Inference CLI

We've created a CLI tool with useful commands to make using inference easier. Check out the docs.
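
For example, the CLI can start a local inference server for you (commands as documented for inference-cli at the time of writing; run inference --help to see what your installed version supports):

pip install inference-cli
inference --help          # list available commands
inference server start    # start a local inference server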

🚀 Enterprise

With a Roboflow Inference Enterprise License, you can access additional Inference features, including:

  • Server cluster deployment
  • Device management
  • Active learning
  • YOLOv5 and YOLOv8 commercial license

To learn more, contact the Roboflow team.

📚 documentation

Visit our documentation for usage examples and reference for Roboflow Inference.

🏆 contribution

We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏

💻 explore more Roboflow open source projects

  • supervision: General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation.
  • Autodistill: Automatically label images for use in training computer vision models.
  • Inference (this project): An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
  • Notebooks: Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone.
  • Collect: Automated, intelligent data collection powered by CLIP.

More Repositories

  1. supervision (Python, 22,657 stars): We write your reusable computer vision tools. 💜
  2. notebooks (Jupyter Notebook, 5,261 stars): Examples and tutorials on using SOTA computer vision models and techniques. Learn everything from old-school ResNet, through YOLO and object-detection transformers like DETR, to the latest models like Grounding DINO and SAM.
  3. sports (Python, 2,310 stars): Computer vision and sports.
  4. awesome-openai-vision-api-experiments (Python, 1,633 stars): Must-have resource for anyone who wants to experiment with and build on the OpenAI vision API 🔥
  5. maestro (Python, 1,328 stars): Streamline the fine-tuning process for multimodal models: PaliGemma, Florence-2, and Qwen2-VL.
  6. roboflow-python (Python, 289 stars): The official Roboflow Python package. Manage your datasets, models, and deployments. Roboflow has everything you need to build a computer vision application.
  7. webcamGPT (Python, 253 stars): Chat with a video stream 💬 + 📸
  8. roboflow-100-benchmark (Jupyter Notebook, 244 stars): Code for replicating Roboflow 100 benchmark results and programmatically downloading benchmark datasets.
  9. dji-aerial-georeferencing (JavaScript, 198 stars): Detect objects in drone videos and plot them on a map.
  10. neuralhash-collisions (JavaScript, 151 stars): A catalog of naturally occurring images whose Apple NeuralHash is identical.
  11. template-python (Python, 90 stars): A template repo holding our common setup for a Python project.
  12. video-inference (Shell, 48 stars): Example showing how to do inference on a video file with Roboflow Infer.
  13. polygonzone (JavaScript, 45 stars): A web utility to draw polygons and retrieve their coordinates for computer vision applications.
  14. model-leaderboard (JavaScript, 40 stars): Which model is the best at object detection? Which is best for small or large objects? We compare the results in a handy leaderboard.
  15. auto-annotate (Python, 40 stars): A simple tool for automatic image annotation using the Roboflow API.
  16. homepage-demo (JavaScript, 36 stars): Build an in-browser model experience like the one on the Roboflow homepage.
  17. blackjack-basic-strategy (JavaScript, 36 stars): A computer vision powered Blackjack basic strategy app powered by Roboflow.
  18. roboflow-computer-vision-utilities (Python, 36 stars): Interface with the Roboflow API and Python package for running inference (receiving predictions) and customizing result images from your Roboflow Train computer vision models.
  19. cvevals (Python, 34 stars): Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, models hosted on Roboflow).
  20. gpt-checkup (HTML, 31 stars): Monitor the performance of OpenAI's GPT-4V model over time.
  21. roboflow-collect (Python, 27 stars): Passively collect images for computer vision datasets on the edge.
  22. deploy-models-with-grpc-pytorch-asyncio (Python, 24 stars): Article about deploying machine learning models using gRPC, PyTorch, and asyncio.
  23. RoboflowExpoExample (Java, 23 stars)
  24. quickstart-python (Jupyter Notebook, 23 stars): Start using computer vision in two minutes with our interactive Python notebook experience.
  25. clip_video_app (Python, 21 stars): Flask-based web application designed to compare text and image embeddings using the CLIP model.
  26. supashim (JavaScript, 20 stars): Use Supabase as a drop-in replacement for Firebase.
  27. roboflow-api-snippets (Python, 18 stars): Repo for versioning snippets that show how to use Roboflow APIs.
  28. rabbit-deterrence (Python, 17 stars): Uses computer vision to deter rabbits from eating your vegetables.
  29. cookbooks (Python, 16 stars): Templates for computer vision projects, referenced in Roboflow blog posts.
  30. roboflow-ios-starter (Swift, 14 stars): Official starter project for building iOS apps with Roboflow.
  31. cog-vlm-client (Python, 14 stars): Simple CogVLM client script.
  32. rickblocker (JavaScript, 14 stars): Audio-visual mitigation of Rickrolls using computer vision.
  33. inference-client (Python, 13 stars)
  34. inference-server-old (JavaScript, 13 stars): Object detection inference with Roboflow Train models on NVIDIA Jetson devices.
  35. magic-scissors (Python, 11 stars): Synthetic data for object detection and segmentation.
  36. streamlit-web-app (Python, 9 stars): A web-based application for testing models trained with Roboflow. Powered by Streamlit.
  37. OBS-Controller (TypeScript, 9 stars): A public repo for the Roboflow OBS Gesture Controller. The gesture controller currently responds to four gestures, "Up", "Down", "Stop", and "Grab". Performing these gestures lets you transition scenes and grab source objects inside of OBS.
  38. roboflow-react-app (JavaScript, 8 stars): React starter app for Roboflow inference.
  39. roboflow-nest (JavaScript, 8 stars): Using Roboflow with the Nest camera API.
  40. yolov5-custom-training-tutorial (Jupyter Notebook, 8 stars)
  41. inference-dashboard-example (Python, 8 stars): Uses Roboflow's inference server to analyze video streams. This project extracts insights from video frames at defined intervals and generates informative visualizations and CSV outputs.
  42. roboflow-100-3d-website (JavaScript, 6 stars)
  43. yolov8-OpenVINO (Jupyter Notebook, 6 stars): Deploy a YOLOv8 model (ONNX format) to an Amazon SageMaker endpoint for serving inference requests using ONNXRuntime.
  44. roboflow-swift (Swift, 5 stars)
  45. roboflow-node (JavaScript, 5 stars): Roboflow CLI and API module for Node.
  46. roboflow-cli (JavaScript, 5 stars): Command Line Interface for Roboflow.
  47. roboflow-jetson-license-plate (Python, 5 stars): Mashup of Roboflow Object Detection with OCR to read license plates.
  48. stable-diffusion-demo (Jupyter Notebook, 5 stars): Generating 1k images using Stable Diffusion and uploading them into your Roboflow project.
  49. scavenger-hunt (JavaScript, 5 stars): Roboflow SXSW Scavenger Hunt game.
  50. supervision-annotators-hf-space (Python, 5 stars): Demo of Annotators through Gradio.
  51. foundation-vision-benchmark (4 stars): A qualitative set of tests for use in evaluating the capabilities of foundation vision models.
  52. streamlit-bccd (Python, 4 stars): Streamlit app for the Blood Cell Count Dataset.
  53. cheatsheet-supervision (Svelte, 4 stars): Supervision cheatsheet website, coded up in Svelte.
  54. trt-demos (Python, 3 stars): A repo for Roboflow TRT Python examples.
  55. roboflow-object-counting (Jupyter Notebook, 3 stars): Interface with the Roboflow API and Python package for object counting in your computer vision models.
  56. roboflow-swift-examples (Swift, 3 stars)
  57. model-library (3 stars)
  58. roboflow-red (JavaScript, 3 stars): A visual way to interact with computer vision using Node-RED.
  59. synthetic-fruit-dataset (JavaScript, 3 stars): Code for Roboflow's How to Create a Synthetic Dataset tutorial.
  60. visual-prompting (TypeScript, 2 stars)
  61. fast-ai-resnet32 (Jupyter Notebook, 2 stars)
  62. c3-sapphire-rapids (Jupyter Notebook, 2 stars)
  63. inferencejs-react-example (JavaScript, 2 stars)
  64. roboflow-object-tracking (Python, 1 star)
  65. smooth-frame (Python, 1 star)
  66. tao-toolkit-with-roboflow (Jupyter Notebook, 1 star)
  67. clip-benchmark (Python, 1 star)
  68. ODinW-RF100-challenge-issues (1 star): ODinW RF100 📸 challenge issues/discussions repository.
  69. yolov8-website (CSS, 1 star): Source code for the yolov8.com website.
  70. external-bugtracker (1 star)
  71. stacked-boxes-email-notification (Python, 1 star): A small project demonstrating how Roboflow's Inference APIs can be used to trigger email notifications.
  72. server-benchmark (JavaScript, 1 star): A script you can use to benchmark the Roboflow Deploy targets with your custom trained model on your hardware.
  73. lenny (HTML, 1 star): Lenny uses 500+ blog posts, 100+ docs pages, and Roboflow developer documentation to answer questions about computer vision and Roboflow.