• This repository has been archived on 01/Jul/2024
• Stars: 178
• Rank: 214,989 (Top 5%)
• Language: C++
• License: Other
• Created almost 4 years ago
• Updated 5 months ago


Repository Details

OpenVINO™ integration with TensorFlow


This repository contains the source code of OpenVINO™ integration with TensorFlow, designed for TensorFlow* developers who want to get started with OpenVINO™ in their inferencing applications. TensorFlow* developers can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a wide range of Intel® compute devices by adding just two lines of code.

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')

This product delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:

  • Intel® CPUs
  • Intel® integrated and discrete GPUs

Note: Support for Intel Movidius™ MyriadX VPUs is no longer maintained. Consider previous releases for running on Myriad VPUs.

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ APIs and its runtime.]

New: The OpenVINO™ TensorFlow Frontend can be used as an alternative deployment path whenever a model is fully supported by OpenVINO™ and you are ready to move from TensorFlow APIs to native OpenVINO™ APIs.

Installation

Prerequisites

  • Ubuntu 18.04, 20.04, macOS 11.2.3, or Windows¹ 10 - 64 bit
  • Python* 3.7, 3.8, or 3.9
  • TensorFlow* v2.9.3

¹The Windows package supports only Python 3.9
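As a quick sanity check before installing, the prerequisites above can be verified with a short script. This is a minimal sketch: the supported-version sets come from the list above, and the helper name `check_prerequisites` is purely illustrative, not part of any package.

```python
import platform
import sys

# Python versions supported by openvino-tensorflow 2.3.0, per the list above.
SUPPORTED_PYTHON = {(3, 7), (3, 8), (3, 9)}

def check_prerequisites(py_version=None, system=None):
    """Return a list of problems; an empty list means the environment looks OK."""
    py_version = tuple(py_version or sys.version_info[:2])
    system = system or platform.system()
    problems = []
    if py_version not in SUPPORTED_PYTHON:
        problems.append(f"Python {py_version[0]}.{py_version[1]} is not supported")
    # The Windows package supports only Python 3.9.
    if system == "Windows" and py_version != (3, 9):
        problems.append("On Windows, only Python 3.9 is supported")
    return problems

print(check_prerequisites((3, 8), "Linux"))    # -> []
print(check_prerequisites((3, 8), "Windows"))  # -> ['On Windows, only Python 3.9 is supported']
```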

Check our Interactive Installation Table for a menu of installation options. The table will help you configure the installation process.

The OpenVINO™ integration with TensorFlow package comes with pre-built libraries of OpenVINO™ version 2022.3.0, so users do not need to install OpenVINO™ separately. This package supports:

  • Intel® CPUs

  • Intel® integrated and discrete GPUs

      pip3 install -U pip
      pip3 install tensorflow==2.9.3
      pip3 install openvino-tensorflow==2.3.0

For installation instructions on Windows, please refer to OpenVINO™ integration with TensorFlow for Windows.

To use Intel® integrated GPUs for inference, make sure to install the Intel® Graphics Compute Runtime for OpenCL™ drivers.

For more details on installation, please refer to INSTALL.md; for build-from-source options, please refer to BUILD.md.

Configuration

Once you've installed OpenVINO™ integration with TensorFlow, you can use TensorFlow* to run inference using a trained model.

To see if OpenVINO™ integration with TensorFlow is properly installed, run

python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
            import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

    TensorFlow version:  2.9.3
    OpenVINO integration with TensorFlow version: b'2.3.0'
    OpenVINO version used for this build: b'2022.3.0'
    TensorFlow version used for this build: v2.9.3

    CXX11_ABI flag used for this build: 1

By default, Intel® CPU is used to run inference. However, you can change the default to Intel® integrated or discrete GPUs (GPU, GPU.0, GPU.1, etc.). Invoke the following function to change the hardware on which inference runs.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', and 'GPU_FP16'.

To determine what processing units are available on your system for inference, use the following function:

openvino_tensorflow.list_backends()
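Since list_backends() reports what is actually available on a given machine, a defensive pattern is to select the first preferred backend that the system supports. The sketch below factors that choice into a plain helper (`choose_backend` is a hypothetical name, not part of the package API); in a real application the `available` list would come from openvino_tensorflow.list_backends() and the result would be passed to set_backend().

```python
def choose_backend(preferred, available):
    """Return the first preferred backend that is actually available.

    `available` would normally come from openvino_tensorflow.list_backends();
    here it is passed in explicitly so the selection logic is easy to test.
    """
    for backend in preferred:
        if backend in available:
            return backend
    raise RuntimeError(f"None of {preferred} found in {available}")

# Prefer half-precision GPU, then full-precision GPU, then CPU.
# On a CPU-only machine this falls back to 'CPU'.
print(choose_backend(["GPU_FP16", "GPU", "CPU"], ["CPU"]))  # -> CPU
```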

For further performance improvements, it is advised to set the environment variable OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS=1. For more API calls and environment variables, see USAGE.md.
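The environment variable must be in place before the first inference run; one way to do that from Python itself is via os.environ. This is a minimal sketch — exporting the variable in the shell before launching the process works equally well.

```python
import os

# Enable conversion of variables to constants for additional performance,
# as advised above. Set this before any inference is executed.
os.environ["OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS"] = "1"

print(os.environ["OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS"])  # -> 1
```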

Examples

To see what you can do with OpenVINO™ integration with TensorFlow, explore the demos located in the examples directory.

Docker Support

Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided; these can be used to build runtime Docker* images for OpenVINO™ integration with TensorFlow on CPU and GPU. For more details, see the Docker README.

Prebuilt Images

Try it on Intel® DevCloud

Sample tutorials are also hosted on Intel® DevCloud. The demo applications are implemented as Jupyter Notebooks. You can execute them interactively on Intel® DevCloud nodes and compare the results of OpenVINO™ integration with TensorFlow, native TensorFlow, and OpenVINO™.

License

OpenVINO™ integration with TensorFlow is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Submit your questions, feature requests and bug reports via GitHub issues.

Troubleshooting

A list of known issues and a troubleshooting guide can be found here.

How to Contribute

We welcome community contributions to OpenVINO™ integration with TensorFlow. If you have an idea for an improvement, submit it via GitHub issues or open a pull request.

We will review your contribution as soon as possible; if any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build OpenVINO™ integration with TensorFlow and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. Once we verify that your pull request meets these requirements, we will merge it into the repository.


* Other names and brands may be claimed as the property of others.
