Optimized Inference at the Edge with Intel® Tools and Technologies
This workshop walks you through the Intel® Distribution of OpenVINO™ toolkit workflow for running inference on deep learning models that accelerate vision, automatic speech recognition, natural language processing, recommendation systems, and many other applications. You will learn how to optimize models and improve performance, with or without external accelerators, and how to use tools that help you identify the best hardware configuration for your needs. The workshop also outlines the various frameworks and topologies supported by the Intel® Distribution of OpenVINO™ toolkit.
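As a preview of the workflow the labs cover, the following is a minimal sketch of an inference pass using the Inference Engine Python API from OpenVINO 2021.x; the IR files (model.xml/model.bin) and the zeroed input array are placeholders for illustration.

```python
# Minimal Inference Engine workflow sketch (OpenVINO 2021.x Python API).
# "model.xml"/"model.bin" are placeholder IR files produced by the Model Optimizer.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Load the network onto a device. "CPU" can be swapped for "GPU", "MYRIAD"
# (Intel® Movidius™ VPU), or "MULTI:CPU,MYRIAD", depending on available hardware.
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Dummy input matching the network's expected NCHW input shape (placeholder data).
n, c, h, w = net.input_info[input_blob].input_data.shape
image = np.zeros((n, c, h, w), dtype=np.float32)

result = exec_net.infer(inputs={input_blob: image})
print(result[output_blob].shape)
```

Swapping the `device_name` string is all it takes to retarget the same application at a different accelerator, which is a recurring theme in the accelerator and Intel® DevCloud labs below.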
⚠️ The labs in this workshop have been validated with Intel® Distribution of OpenVINO™ toolkit 2021.3 (openvino_toolkit_2021.3.394). Some of the videos shown below are based on OpenVINO 2021.2 and may differ slightly from the slides, but the content is largely the same. The FPGA plugin is no longer supported in the standard OpenVINO release; you can find the FPGA content in earlier branches.
Workshop Agenda
- Intel® Distribution of OpenVINO™ toolkit Overview
  - Training Slides - Part1, Part2
  - Training Video Series - Intel® Distribution of OpenVINO™ Toolkit Training
  - Lab Setup - Lab Setup Instructions

  ⚠️ Please make sure you have gone through all the steps in the Lab Setup; all the labs below assume that the OpenVINO toolkit is correctly installed on the local development system.

- Model Optimizer (see the conversion sketch after this agenda)
- Inference Engine
- Accelerators based on Intel® Movidius™ Vision Processing Unit
- Multiple Models in One Application
- Deep Learning Workbench
- Deep Learning Streamer
- Intel® DevCloud for the Edge
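Before the Model Optimizer lab, it may help to see what a conversion looks like end to end. The sketch below converts a trained ONNX model into OpenVINO Intermediate Representation (IR); the install path is the default for a 2021.x Linux install and model.onnx is a placeholder, so adjust both to your setup.

```sh
# Convert a trained ONNX model to OpenVINO IR (.xml + .bin) with the Model Optimizer.
# /opt/intel/openvino_2021 is the default 2021.x Linux install path (assumption);
# "model.onnx" is a placeholder model. FP16 is a common choice when targeting
# MYRIAD (VPU) devices.
python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py \
    --input_model model.onnx \
    --data_type FP16 \
    --output_dir ./ir
```

The resulting ir/model.xml and ir/model.bin pair is what the Inference Engine loads, as in the Python sketch earlier on this page.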
Further Reading Materials
- Support for Microsoft ONNX Runtime in OpenVINO
- Slides - ONNX Runtime and OpenVINO
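To make the ONNX Runtime integration concrete, here is a minimal sketch using ONNX Runtime's Python API with the OpenVINO execution provider. The model file and the zeroed input are placeholders, and the OpenVINOExecutionProvider is only available in OpenVINO-enabled builds of onnxruntime (for example, the onnxruntime-openvino package).

```python
# Minimal sketch: run an ONNX model through ONNX Runtime with the OpenVINO
# execution provider (requires an OpenVINO-enabled onnxruntime build).
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder; CPUExecutionProvider is listed as a fallback.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

inp = sess.get_inputs()[0]
# Replace dynamic dimensions (reported as None or strings) with 1 for the dummy input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```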
Disclaimer
Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.