Tensorflow CPU Inference API For Windows and Linux

This is a repository for an object detection inference API using the Tensorflow framework.

This repo is based on Tensorflow Object Detection API.

The Tensorflow version used is 1.13.1. The inference REST API works on CPU and doesn't require any GPU usage. It's supported on both Windows and Linux operating systems.

Models trained using our Tensorflow training repository can be deployed in this API. Several object detection models can be loaded and used at the same time. This repo also offers optical character recognition (OCR) services to extract text boxes from images.

This repo can be deployed using either docker or docker swarm.

Please use docker swarm only if you need to:

  • Provide redundancy in terms of API containers: if a container goes down, incoming requests are redirected to another running instance.

  • Coordinate between the containers: Swarm orchestrates the APIs and chooses one of them to listen to each incoming request.

  • Scale up the inference service for faster predictions, especially if there is traffic on the service.

If you don't need any of the above, simply use docker.

Prerequisites

  • OS:
    • Ubuntu 16.04/18.04
    • Windows 10 Pro/Enterprise
  • Docker

Check for prerequisites

To check if you have docker-ce installed:

docker --version

Install prerequisites

Ubuntu

Use the following command to install docker on Ubuntu:

chmod +x install_prerequisites.sh && source install_prerequisites.sh

Windows 10

To install Docker on Windows, please follow the official Docker documentation.

P.S: For Windows users, open the Docker Desktop menu by clicking the Docker icon in the notifications area. Select Settings, and then the Advanced tab to adjust the resources available to Docker Engine.

Build The Docker Image

In order to build the project, run the following command from the project's root directory:

sudo docker build -t tensorflow_inference_api_cpu -f docker/dockerfile .

Behind a proxy (put your proxy address between the quotes):

sudo docker build --build-arg http_proxy='' --build-arg https_proxy='' -t tensorflow_inference_api_cpu -f ./docker/dockerfile .

Run the docker container

As mentioned before, this container can be deployed using either docker or docker swarm.

If you wish to deploy this API using docker, please issue the following run command.

If you wish to deploy this API using docker swarm, please refer to the docker swarm documentation. After deploying the API with docker swarm, please return to this documentation for further information about the API endpoints as well as the model structure sections.

To run the API, go to the API's directory and run the following:

Using Linux-based docker:

sudo docker run -itv $(pwd)/models:/models -v $(pwd)/models_hash:/models_hash -p <docker_host_port>:4343 tensorflow_inference_api_cpu

Using Windows-based docker:

docker run -itv ${PWD}/models:/models -v ${PWD}/models_hash:/models_hash -p <docker_host_port>:4343 tensorflow_inference_api_cpu

The <docker_host_port> can be any unique port of your choice.

The API file will run automatically, and the service will listen for HTTP requests on the chosen port.
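
As a quick sanity check, you can query the service from Python once the container is up. A minimal sketch, assuming the container was started with -p 4343:4343 on the local machine (the /models endpoint used here is described in the endpoints summary below):

    import requests

    BASE_URL = "http://localhost:4343"  # adjust host/port to your run command

    # List the models currently available to the API
    response = requests.get(f"{BASE_URL}/models")
    print(response.status_code)  # expect 200 once the service is ready
    print(response.json())       # exact response shape may vary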

API Endpoints

To see all available endpoints, open your favorite browser and navigate to:

http://<machine_IP>:<docker_host_port>/docs

The 'predict_batch' endpoint is not shown on Swagger, since the list-of-files input is not yet supported there.

P.S: If you are using custom endpoints like /load, /detect, and /get_labels, you should always call the /load endpoint first and then use /detect or /get_labels (see the example below).
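
For instance, a minimal Python client for this sequence could look like the sketch below. The multipart field names are illustrative assumptions, not confirmed by this documentation; check the /docs page of your running instance for the exact request schema:

    import requests

    BASE_URL = "http://localhost:4343"  # adjust host/port to your setup

    # Step 1: load all available models (required before /detect or /get_labels)
    print(requests.get(f"{BASE_URL}/load").json())

    # Step 2: run detection on an image with one of the loaded models
    with open("test.jpg", "rb") as image_file:
        response = requests.post(
            f"{BASE_URL}/detect",
            data={"model": "my_model"},   # hypothetical field name
            files={"image": image_file},  # hypothetical field name
        )
    print(response.json())  # bounding boxes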

Endpoints summary

/load (GET)

Loads all available models and returns every model with its hashed value. Loaded models are stored and aren't loaded again.

/detect (POST)

Performs inference with the specified model on an image and returns the bounding boxes.

/get_labels (POST)

Returns all of the specified model's labels with their hashed values.

/models/{model_name}/predict_image (POST)

Performs inference with the specified model on an image, draws the bounding boxes on it, and returns the annotated image as the response.

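A sketch of calling this endpoint from Python and saving the returned image to disk; the model name and file field name are placeholders, so verify them against the /docs page:

    import requests

    # 'my_model' is a placeholder for one of your model subfolder names
    url = "http://localhost:4343/models/my_model/predict_image"

    with open("test.jpg", "rb") as image_file:
        response = requests.post(url, files={"image": image_file})  # field name is an assumption

    # The endpoint returns the annotated image itself, so write the raw bytes to disk
    with open("test_annotated.jpg", "wb") as output_file:
        output_file.write(response.content)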

/models (GET)

Lists all available models

/models/{model_name}/load (GET)

Loads the specified model. Loaded models are stored and aren't loaded again

/models/{model_name}/predict (POST)

Performs inference with the specified model on an image and returns the bounding boxes.

/models/{model_name}/labels (GET)

Returns all of the specified model's labels.

/models/{model_name}/config (GET)

Returns the specified model's configuration

/models/{model_name}/predict_batch (POST)

Performs inference with the specified model on a list of images and returns the bounding boxes.
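
Because this endpoint is not exposed on Swagger, it has to be called programmatically. A hedged sketch, assuming the endpoint accepts several files under a repeated multipart field; the field name 'images' and the model name are guesses, so check the API source if the request is rejected:

    import requests

    url = "http://localhost:4343/models/my_model/predict_batch"  # placeholder model name
    paths = ["img1.jpg", "img2.jpg"]

    # One multipart entry per file; 'images' is a hypothetical field name
    files = [("images", (path, open(path, "rb"), "image/jpeg")) for path in paths]
    try:
        response = requests.post(url, files=files)
        print(response.json())  # bounding boxes per image
    finally:
        for _, (_, handle, _) in files:
            handle.close()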

/models/{model_name}/one_shot_ocr (POST)

Takes an image and returns the extracted text details. First, a detection model is used to crop the regions of interest in the uploaded image; these regions are then passed to the OCR service for text extraction.

/models/{model_name}/ocr (POST)

Takes an image and returns the extracted text details without using an object detection model.

P.S: Custom endpoints like /load, /detect, /get_labels and /one_shot_ocr should be used in order: first call /load, then call /detect, /get_labels or /one_shot_ocr (a sketch follows below).
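
The same ordering in Python, this time with the one-shot OCR endpoint; as before, the model name and field name are illustrative assumptions:

    import requests

    BASE_URL = "http://localhost:4343"

    requests.get(f"{BASE_URL}/load")  # always load the models first

    with open("document.jpg", "rb") as image_file:  # hypothetical sample input
        response = requests.post(
            f"{BASE_URL}/models/my_model/one_shot_ocr",  # 'my_model' is a placeholder
            files={"image": image_file},                 # hypothetical field name
        )
    print(response.json())  # extracted text details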

Model structure

The folder "models" contains subfolders of all the models to be loaded. Inside each subfolder there should be a:

  • pb file (frozen_inference_graph.pb): contains the model weights

  • pbtxt file (object-detection.pbtxt): contains model classes

  • Config.json (This is a json file containing information about the model)

      {
          "inference_engine_name": "tensorflow_detection",
          "confidence": 60,
          "predictions": 15,
          "number_of_classes": 2,
          "framework": "tensorflow",
          "type": "detection",
          "network": "inception"
      }

    P.S:

    • You can change the confidence and predictions values while the API is running (see the sketch after this list)
    • The API returns only bounding boxes with a confidence higher than the "confidence" value; a higher value restricts the output to more accurate predictions
    • The "predictions" value specifies the maximum number of bounding boxes in the API response

Benchmarking

All times are in seconds per image.

Network \ Hardware  Windows:                 Ubuntu:                  Ubuntu:                   Ubuntu:
                    Intel Xeon CPU 2.3 GHz   Intel Xeon CPU 2.3 GHz   Intel Xeon CPU 3.60 GHz   GeForce GTX 1080
ssd_fpn             0.867                    1.016                    0.434                     0.0658
frcnn_resnet_50     4.029                    4.219                    1.994                     0.148
ssd_mobilenet       0.055                    0.106                    0.051                     0.052
frcnn_resnet_101    4.469                    4.985                    2.254                     0.364
ssd_resnet_50       1.34                     1.462                    0.668                     0.091
ssd_inception       0.094                    0.15                     0.074                     0.0513

Acknowledgment

inmind.ai

robotron.de

Joe Sleiman, inmind.ai, Beirut, Lebanon

Antoine Charbel, inmind.ai, Beirut, Lebanon

Anis Ismail, Beirut, Lebanon
