  • Stars: 12,150
  • Rank: 2,540 (Top 0.06 %)
  • Language: Python
  • License: Other
  • Created: over 5 years ago
  • Updated: 4 months ago


Repository Details

๐Ÿ„ Scalable embedding, reasoning, ranking for images and sentences with CLIP

CLIP-as-service logo



CLIP-as-service is a low-latency high-scalability service for embedding images and text. It can be easily integrated as a microservice into neural search solutions.

⚡ Fast: Serve CLIP models with TensorRT, ONNX Runtime and PyTorch (without JIT) at 800 QPS[*]. Non-blocking duplex streaming on requests and responses, designed for large data and long-running tasks.

๐Ÿซ Elastic: Horizontally scale up and down multiple CLIP models on single GPU, with automatic load balancing.

๐Ÿฅ Easy-to-use: No learning curve, minimalist design on client and server. Intuitive and consistent API for image and sentence embedding.

👒 Modern: Async client support. Easily switch between gRPC, HTTP and WebSocket protocols, with TLS and compression (see the sketch after this list).

๐Ÿฑ Integration: Smooth integration with neural search ecosystem including Jina and DocArray. Build cross-modal and multi-modal solutions in no time.

[*] with default config (single replica, PyTorch no JIT) on GeForce RTX 3090.
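
As a minimal sketch of the protocol switching mentioned above: the URI scheme selects the protocol, and the secure variants (grpcs, https, wss) enable TLS. This assumes the server was started with the matching protocol; the local hosts and ports below are placeholders.

from clip_client import Client

# The scheme prefix picks the protocol; an extra "s" enables TLS.
c_grpc = Client('grpc://0.0.0.0:51000')  # plain gRPC
c_http = Client('http://0.0.0.0:51000')  # HTTP
c_ws = Client('ws://0.0.0.0:51000')  # WebSocket
c_tls = Client(
    'grpcs://api.clip.jina.ai:2096', credential={'Authorization': '<your access token>'}
)  # gRPC over TLS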

Try it!

An always-online server api.clip.jina.ai loaded with ViT-L-14-336::openai is there for you to play & test. Before you start, make sure you have obtained a personal access token from the Jina AI Cloud, or via CLI as described in this guide:

jina auth token create <name of PAT> -e <expiration days>

Then, configure the access token via the credential parameter of the Python client, or set it in the HTTP request header Authorization as <your access token>.

โš ๏ธ Our demo server demo-cas.jina.ai is sunset and no longer available after 15th of Sept 2022.

Text & image embedding

via HTTPS 🔐

curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"text": "First do it"}, 
    {"text": "then do it right"}, 
    {"text": "then do it better"}, 
    {"uri": "https://picsum.photos/200"}], 
    "execEndpoint":"/"}'
via gRPC 🔐⚡⚡

# pip install clip-client
from clip_client import Client

c = Client(
    'grpcs://api.clip.jina.ai:2096', credential={'Authorization': '<your access token>'}
)

r = c.encode(
    [
        'First do it',
        'then do it right',
        'then do it better',
        'https://picsum.photos/200',
    ]
)
print(r)

Visual reasoning

There are four basic visual reasoning skills: object recognition, object counting, color recognition, and spatial relation understanding. Let's try some:

You need to install jq (a JSON processor) to prettify the results.

Image via HTTPS 🔐
curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/1/300/300",
"matches": [{"text": "there is a woman in the photo"},
            {"text": "there is a man in the photo"}]}],
            "execEndpoint":"/rank"}' \
| jq ".data[].matches[] | (.text, .scores.clip_score.value)"

gives:

"there is a woman in the photo"
0.626907229423523
"there is a man in the photo"
0.37309277057647705

curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/133/300/300",
"matches": [
{"text": "the blue car is on the left, the red car is on the right"},
{"text": "the blue car is on the right, the red car is on the left"},
{"text": "the blue car is on top of the red car"},
{"text": "the blue car is below the red car"}]}],
"execEndpoint":"/rank"}' \
| jq ".data[].matches[] | (.text, .scores.clip_score.value)"

gives:

"the blue car is on the left, the red car is on the right"
0.5232442617416382
"the blue car is on the right, the red car is on the left"
0.32878655195236206
"the blue car is below the red car"
0.11064132302999496
"the blue car is on top of the red car"
0.03732786327600479

curl \
-X POST https://api.clip.jina.ai:8443/post \
-H 'Content-Type: application/json' \
-H 'Authorization: <your access token>' \
-d '{"data":[{"uri": "https://picsum.photos/id/102/300/300",
"matches": [{"text": "this is a photo of one berry"},
            {"text": "this is a photo of two berries"},
            {"text": "this is a photo of three berries"},
            {"text": "this is a photo of four berries"},
            {"text": "this is a photo of five berries"},
            {"text": "this is a photo of six berries"}]}],
            "execEndpoint":"/rank"}' \
| jq ".data[].matches[] | (.text, .scores.clip_score.value)"

gives:

"this is a photo of three berries"
0.48507222533226013
"this is a photo of four berries"
0.2377079576253891
"this is a photo of one berry"
0.11304923892021179
"this is a photo of five berries"
0.0731358453631401
"this is a photo of two berries"
0.05045759305357933
"this is a photo of six berries"
0.04057715833187103

Documentation

Install

CLIP-as-service consists of two Python packages clip-server and clip-client that can be installed independently. Both require Python 3.7+.

Install server

PyTorch Runtime ⚡
pip install clip-server

ONNX Runtime ⚡⚡
pip install "clip-server[onnx]"

TensorRT Runtime ⚡⚡⚡
pip install nvidia-pyindex
pip install "clip-server[tensorrt]"

You can also host the server on Google Colab, leveraging its free GPU/TPU.

Install client

pip install clip-client

Quick check

You can run a simple connectivity check after install.

Server:

python -m clip_server

Client:

from clip_client import Client

c = Client('grpc://0.0.0.0:23456')
c.profile()

You can change 0.0.0.0 to an intranet or public IP address to test connectivity over a private or public network.
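
For instance, a minimal sketch (the address below is a placeholder for your machine's intranet or public IP):

from clip_client import Client

# Point the client at the server's intranet or public address instead of localhost.
c = Client('grpc://192.168.1.100:23456')
c.profile()  # the same connectivity check, now over the network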

Get Started

Basic usage

  1. Start the server: python -m clip_server. Remember its address and port.
  2. Create a client:
     from clip_client import Client
    
     c = Client('grpc://0.0.0.0:51000')
  3. To get sentence embeddings:
    r = c.encode(['First do it', 'then do it right', 'then do it better'])
    
    print(r.shape)  # [3, 512] 
  4. To get image embeddings:
    r = c.encode(['apple.png',  # local image 
                  'https://clip-as-service.jina.ai/_static/favicon.png',  # remote image
                  'data:image/gif;base64,R0lGODlhEAAQAMQAAORHHOVSKudfOulrSOp3WOyDZu6QdvCchPGolfO0o/XBs/fNwfjZ0frl3/zy7////wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAkAABAALAAAAAAQABAAAAVVICSOZGlCQAosJ6mu7fiyZeKqNKToQGDsM8hBADgUXoGAiqhSvp5QAnQKGIgUhwFUYLCVDFCrKUE1lBavAViFIDlTImbKC5Gm2hB0SlBCBMQiB0UjIQA7'])  # in image URI
    
    print(r.shape)  # [3, 512]

More comprehensive server and client user guides can be found in the docs.

Text-to-image cross-modal search in 10 lines

Let's build a text-to-image search using CLIP-as-service. Namely, a user can input a sentence and the program returns matching images. We'll use the Totally Looks Like dataset and DocArray package. Note that DocArray is included within clip-client as an upstream dependency, so you don't need to install it separately.

Load images

First we load images. You can simply pull them from Jina Cloud:

from docarray import DocumentArray

da = DocumentArray.pull('ttl-original', show_progress=True, local_cache=True)

Alternatively, you can download the dataset from the Totally Looks Like official website, unzip it, and load the images manually:

from docarray import DocumentArray

da = DocumentArray.from_files(['left/*.jpg', 'right/*.jpg'])

The dataset contains 12,032 images, so it may take a while to pull. Once done, you can visualize it and get a first taste of the images:

da.plot_image_sprites()

Visualization of the image sprite of Totally looks like dataset

Encode images

Start the server with python -m clip_server. Let's say it's at 0.0.0.0:51000 with the gRPC protocol (the server prints this information on startup).

Create a Python client script:

from clip_client import Client

c = Client(server='grpc://0.0.0.0:51000')

da = c.encode(da, show_progress=True)

Depending on your GPU and client-server network, it may take a while to embed 12K images. In my case, it took about two minutes.
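
If you want finer control over request size, the client's encode call also accepts a batch_size argument (assumed here; availability may vary by clip-client version). A minimal sketch:

from clip_client import Client
from docarray import DocumentArray

da = DocumentArray.pull('ttl-original', show_progress=True, local_cache=True)
c = Client(server='grpc://0.0.0.0:51000')

# Smaller batches mean more round-trips but lower peak memory per request.
da = c.encode(da, batch_size=16, show_progress=True)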

Download the pre-encoded dataset

If you're impatient or don't have a GPU, waiting can be Hell. In this case, you can simply pull our pre-encoded image dataset:

from docarray import DocumentArray

da = DocumentArray.pull('ttl-embedding', show_progress=True, local_cache=True)

Search via sentence

Let's build a simple prompt to allow a user to type a sentence:

while True:
    vec = c.encode([input('sentence> ')])
    r = da.find(query=vec, limit=9)
    r[0].plot_image_sprites()

Showcase

Now you can input arbitrary English sentences and view the top-9 matching images. Search is fast and instinctive. Let's have some fun:

"a happy potato" "a super evil AI" "a guy enjoying his burger"

(Top-9 matching images for each query)

"professor cat is very serious" "an ego engineer lives with parent" "there will be no tomorrow so lets eat unhealthy"

(Top-9 matching images for each query)

Let's save the embedding result for our next example:

da.save_binary('ttl-image')

Image-to-text cross-modal search in 10 lines

We can also switch the input and output of the last program to achieve image-to-text search. Precisely, given a query image, find the sentence that best describes it.

Let's use all sentences from the book "Pride and Prejudice".

from docarray import Document, DocumentArray

d = Document(uri='https://www.gutenberg.org/files/1342/1342-0.txt').load_uri_to_text()
da = DocumentArray(
    Document(text=s.strip()) for s in d.text.replace('\r\n', '').split('.') if s.strip()
)

Let's look at what we got:

da.summary()
            Documents Summary            
                                         
  Length                 6403            
  Homogenous Documents   True            
  Common Attributes      ('id', 'text')  
                                         
                     Attributes Summary                     
                                                            
  Attribute   Data type   #Unique values   Has empty value  
 ──────────────────────────────────────────────────────────
  id          ('str',)    6403             False            
  text        ('str',)    6030             False            

Encode sentences

Now encode these 6,403 sentences; it may take 10 seconds or less depending on your GPU and network:

from clip_client import Client

c = Client('grpc://0.0.0.0:51000')

r = c.encode(da, show_progress=True)

Download the pre-encoded dataset

Again, for people who are impatient or don't have a GPU, we have prepared a pre-encoded text dataset:

from docarray import DocumentArray

da = DocumentArray.pull('ttl-textual', show_progress=True, local_cache=True)

Search via image

Let's load our previously stored image embeddings, randomly sample 10 image Documents, then find the top-1 nearest neighbour of each.

from docarray import DocumentArray

img_da = DocumentArray.load_binary('ttl-image')

for d in img_da.sample(10):
    print(da.find(d.embedding, limit=1)[0].text)

Showcase

Fun time! Note, unlike the previous example, here the input is an image and the sentence is the output. All sentences come from the book "Pride and Prejudice".

(Five sampled query images)

Besides, there was truth in his looks
Gardiner smiled
what's his name
By tea time, however, the dose had been enough, and Mr
You do not look well

(Five more sampled query images)

“A gamester!” she cried
If you mention my name at the Bell, you will be attended to
Never mind Miss Lizzy's hair
Elizabeth will soon be the wife of Mr
I saw them the night before last

Rank image-text matches via CLIP model

Since 0.3.0, CLIP-as-service adds a new /rank endpoint that re-ranks cross-modal matches according to their joint likelihood under the CLIP model. For example, given an image Document with some predefined sentence matches as below:

from clip_client import Client
from docarray import Document

c = Client(server='grpc://0.0.0.0:51000')
r = c.rank(
    [
        Document(
            uri='.github/README-img/rerank.png',
            matches=[
                Document(text=f'a photo of a {p}')
                for p in (
                    'control room',
                    'lecture room',
                    'conference room',
                    'podium indoor',
                    'television studio',
                )
            ],
        )
    ]
)

print(r['@m', ['text', 'scores__clip_score__value']])
[['a photo of a television studio', 'a photo of a conference room', 'a photo of a lecture room', 'a photo of a control room', 'a photo of a podium indoor'], 
[0.9920725226402283, 0.006038925610482693, 0.0009973491542041302, 0.00078492151806131, 0.00010626466246321797]]

One can see that a photo of a television studio is now ranked at the top with a clip_score of 0.992. In practice, one can use this endpoint to re-rank the matching results from another search system and improve the cross-modal search quality.
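
A minimal sketch of that pattern, assuming a local server at grpc://0.0.0.0:51000; the candidate URIs below are placeholders for results produced by your first-stage retriever:

from clip_client import Client
from docarray import Document, DocumentArray

c = Client(server='grpc://0.0.0.0:51000')

# Candidates coming from any first-stage search (placeholder URIs).
candidates = DocumentArray(
    [Document(uri='left/00001.jpg'), Document(uri='left/00002.jpg')]
)

# Attach them as matches of the query and let /rank re-order them by CLIP score.
query = Document(text='a happy potato', matches=candidates)
r = c.rank([query])

print(r['@m', ['uri', 'scores__clip_score__value']])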

(Rerank endpoint input image and re-ranked output)

Rank text-image matches via CLIP model

In the DALL·E Flow project, CLIP is called to rank the generated results from DALL·E. It wraps clip-client in an Executor that calls .arank(), the async version of .rank():

from clip_client import Client
from jina import Executor, requests, DocumentArray


class ReRank(Executor):
    def __init__(self, clip_server: str, **kwargs):
        super().__init__(**kwargs)
        self._client = Client(server=clip_server)

    @requests(on='/')
    async def rerank(self, docs: DocumentArray, **kwargs):
        return await self._client.arank(docs)

CLIP-as-service used in DALL·E Flow

Intrigued? That's only scratching the surface of what CLIP-as-service is capable of. Read our docs to learn more.

Support

Join Us

CLIP-as-service is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.

More Repositories

1

jina

โ˜๏ธ Build multimodal AI applications with cloud-native stack
Python
19,690
star
2

reader

Convert any URL to an LLM-friendly input with a simple prefix https://r.jina.ai/
TypeScript
3,126
star
3

dalle-flow

🌊 A Human-in-the-Loop workflow for creating HD images from text
Python
2,826
star
4

dev-gpt

Your Virtual Development Team
Python
1,658
star
5

langchain-serve

⚡ Langchain apps in production using Jina & FastAPI
Python
1,573
star
6

finetuner

🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
Python
1,402
star
7

thinkgpt

Agent techniques to augment your LLM and push it beyond its limits
Python
1,402
star
8

auto-gpt-web

Set Your Goals, AI Achieves Them.
TypeScript
743
star
9

agentchain

Chain together LLMs for reasoning & orchestrate multiple large models for accomplishing complex tasks
Python
557
star
10

docarray

The data structure for unstructured data
Python
522
star
11

vectordb

A Python vector database you just need - no more, no less.
Python
463
star
12

jcloud

Simplify deploying and managing Jina projects on Jina Cloud
Python
294
star
13

jina-video-chat

Python
266
star
14

jinabox.js

A lightweight, customizable omnibox in JavaScript, for use with a Jina backend.
JavaScript
219
star
15

annlite

⚡ A fast embedded library for approximate nearest neighbor search
Python
212
star
16

fastapi-serve

FastAPI to the Cloud, Batteries Included! ☁️🔋🚀
Python
139
star
17

rungpt

An open-source, cloud-native serving framework for large multi-modal models (LMMs).
Python
134
star
18

jina-hub

An open-registry for hosting Jina executors via container images
Python
103
star
19

dashboard

Interactive UI for analyzing Jina logs, designing Flows and viewing Hub images
TypeScript
100
star
20

GoldRetriever

Create and host retrieval plugins for ChatGPT in one click
Python
61
star
21

jinaai-py

Python
44
star
22

example-multimodal-fashion-search

Input text or image, get back matching image fashion results, using Jina, DocArray, and CLIP
Python
43
star
23

docs

Jina V1 Official Documentation. For the latest one, please check out https://docs.jina.ai
HTML
34
star
24

streamlit-jina

Streamlit component for Jina neural search
Python
34
star
25

executors

internal-only
Python
28
star
26

jerboa

LLM finetuning
Python
27
star
27

jinaai-js

TypeScript
27
star
28

jina-ai.github.io

Homepage of Jina AI Limited
HTML
26
star
29

example-meme-search

Meme search engine built with Jina neural search framework. Search with captions or image files to find matching memes.
Python
23
star
30

example-app-store

App store search example, using Jina as backend and Streamlit as frontend
Python
21
star
31

docsQA-ui

Web UI for docsQA. Main branch: https://jina-docqa-ui.netlify.app/
TypeScript
20
star
32

example-speech-to-image

An example of building a speech to image generation pipeline with Jina, Whisper and StableDiffusion
Python
20
star
33

jina-hubble-sdk

Python API for authentication, resource management with Hubble
Python
19
star
34

product-recommendation-redis-docarray

Python
18
star
35

career

Find out job opportunities at Jina AI
17
star
36

executor-3d-encoder

An executor that wraps 3D mesh models and encodes 3D content documents to a d-dimensional vector.
Python
16
star
37

client-go

Golang Client for Jina (https://github.com/jina-ai/jina)
Go
16
star
38

workshops

Jupyter Notebook
14
star
39

benchmark

Benchmark environment and results of different versions of Jina.
Python
14
star
40

action-hub-builder

Simple interface for building & validating Jina Hub executors.
Python
12
star
41

inference-client

Python
12
star
42

executor-hnsw-postgres

A production-ready, scalable Indexer for the Jina neural search framework, based on HNSW and PSQL
Python
12
star
43

now

Python
11
star
44

cookiecutter-jina

Cookiecutter template for a Jina project
Python
10
star
45

simple-jina-examples

Python
9
star
46

executor-simpleindexer

Simple Indexer
Python
9
star
47

cloud-ops

Python
8
star
48

good-first-issues

Issues that don't fit under Jina's other repos!
8
star
49

executor-clip-encoder

Encoder that embeds documents using either the CLIP vision encoder or the CLIP text encoder, depending on the content type of the document.
Python
8
star
50

api

API schema of Jina command line interface exposed as JSON and YAML files.
HTML
8
star
51

inference-client-js

TypeScript
7
star
52

executor-text-transformers-dprreader-ranker

DPRReaderRanker
Python
7
star
53

executor-video-loader

Python
7
star
54

executor-image-clip-encoder

CLIPImageEncoder is an image encoder that wraps the image embedding functionality using the CLIP
Python
7
star
55

.github

This repository stores GitHub Actions templates as described at https://docs.github.com/en/actions/learn-github-actions/sharing-workflows-with-your-organization
7
star
56

GSoC

Google Summer of Code
7
star
57

example-wikipedia-recommendation

An example of graph embeddings for wikipedia page recommendations
Jupyter Notebook
6
star
58

executor-U100KIndexer

An Indexer that works out-of-the-box when you have less than 100K stored Documents
Python
6
star
59

devrel-heartmaker

Heart mosaics of your GitHub contributors
Python
6
star
60

executor-text-transformers-torch-encoder

**TransformerTorchEncoder** wraps the PyTorch version of transformers from Hugging Face. It encodes text data into dense vectors.
Python
6
star
61

executor-cases

Summarize all Executor patterns for Hubble
Python
5
star
62

executor-normalizer

Jina executor package normalizer
Python
5
star
63

auth

deprecated, use `jina-hubble-sdk`
Python
5
star
64

jina-commons

A collection of shared function for Jina Executor
Python
5
star
65

tutorial-notebooks

Jupyter Notebook
5
star
66

jina-paddle-hackathon

Jina x Baidu PaddlePaddle hackathon
Python
5
star
67

executor-image-preprocessor

An executor that performs standard pre-processing and normalization on images.
Python
5
star
68

jina-hackathon

Support repo for Jina X Hackathon - Sep 2020
5
star
69

executor-featurehasher

FeatureHasher
Python
4
star
70

stress-test

A collection of stress tests of Jina infrastructure
Python
4
star
71

executor-image-clip-classifier

Python
4
star
72

executor-text-transformerqa

**TransformerQAExecutor** wraps a question-answering model from Hugging Face and returns relevant answers given questions and contexts/paragraphs.
Python
4
star
73

hub-integration

Integration test for hub
Python
4
star
74

executor-faissindexer

A similarity search indexer based on Faiss. https://hub.jina.ai/executor/8gsd0tts
Python
4
star
75

example-audio-search

Python
3
star
76

example-video-qa

This is an example of building a video QA with jina
TypeScript
3
star
77

jinad

Management of Jina on remote
Python
3
star
78

executor-indexers

Indexer Executors for Jina
Python
3
star
79

executor-text-dpr-encoder

Encode text into embeddings using the DPR model.
Python
3
star
80

jina-sagemaker

Jina Embedding Models on AWS SageMaker
Jupyter Notebook
3
star
81

executor-clip-image

Executor for the pre-trained clip model. https://openai.com/blog/clip/
Python
3
star
82

executor-weaviate-indexer

Python
3
star
83

executor-doc2query

Python
3
star
84

executor-evaluator-ranking

Python
3
star
85

legacy-examples

Unmaintained examples for Jina
Python
3
star
86

executor-image-paddle-encoder

Python
3
star
87

jupyter-notebooks

Jupyter Notebook
3
star
88

executor-yolov5

Python
3
star
89

executor-lightgbm-ranker

Python
3
star
90

terraform-jina-jinad-aws

Module for deploying JinaD on AWS
HCL
3
star
91

encoder-image-torch

The ImageTorchEncoder encodes Document content from an ndarray to a d-dimensional vector.
Python
3
star
92

executor-image-niireader

Python
2
star
93

example-odqa

Roff
2
star
94

jina-ui

Monorepo for JinaJS and frontend projects
TypeScript
2
star
95

executor-audio-clip-encoder

Wraps the AudioCLIP model for generating embeddings for audio data for the Jina framework
Python
2
star
96

executor-text-clip-encoder

Encode text into embeddings using the CLIP model.
Python
2
star
97

executor-image-normalizer

Executor that reads, resizes, crops and normalizes images.
Python
2
star
98

executor-vgg-audio-encoder

Python
2
star
99

executor-image-hasher

An executor to encode images using comparable hashing techniques. Useful for duplicate detection
Python
2
star
100

executor-image-clothing-segmenter

An executor that performs image segmentation on fashion items
Python
2
star