Vitis-AI™ Tutorials

See Vitis™ Development Environment on xilinx.com
See Vitis-AI™ Development Environment on xilinx.com

Tutorial Name | Latest Supported Vitis AI Version | Description |
---|---|---|
Running ResNet18 CNN Through Vitis AI 3.0 Flow for ML | 3.0 | In this Deep Learning (DL) tutorial, you will take a public-domain CNN such as ResNet18, already trained on the ImageNet dataset, and run it through the Vitis AI 3.0 stack to perform ML inference on FPGA devices. You will use Keras on TensorFlow 2.x. |
ResNet18 in PyTorch from Vitis AI Library | 3.0 | In this Deep Learning (DL) tutorial, you will take the ResNet18 CNN, from the Vitis AI 3.0 Library, and use it to classify the different colors of the "car object" inside images by running the inference application on FPGA devices. |
Deep Learning with Custom GoogleNet and ResNet in Keras and Xilinx Vitis AI | 3.0 | Quantize some custom CNNs to fixed point and deploy them on the Xilinx ZCU102 board, using Keras and the Xilinx Vitis AI toolchain based on TensorFlow (TF). |
Partitioning Vitis AI SubGraphs on CPU/DPU | 3.0 | Learn how to deploy a CNN on the Xilinx VCK190 board using Vitis AI. |
FCN8 and UNET Semantic Segmentation with Keras and Xilinx Vitis AI | 3.0 | Train the FCN8 and UNET Convolutional Neural Networks (CNNs) for Semantic Segmentation in Keras adopting a small custom dataset, quantize the floating point weights files to an 8-bit fixed point representation, and then deploy them on the Xilinx ZCU102 board using Vitis AI. |
Pre- and Post-processing Accelerators for Semantic Segmentation with Unet CNN on MPSoC DPU | 3.0 | A complete example of how to use the Whole Application Acceleration (WAA) flow, targeting the MPSoC ZCU102 board. |
Using the Kaggle ImageNet Subset for Training Neural Networks | 2.5 | Demonstrates how to use the Kaggle ImageNet Subset for training neural networks for developers and enthusiasts with a non-edu domain who are unable to obtain the ImageNet dataset directly. |
RF Modulation Recognition with Vitis AI | 2.5 | Discusses using Deep Neural Networks to perform automatic modulation recognition so that the receiver can detect and demodulate a signal without explicit knowledge of its modulation type and encoding method. |
Leveraging the Vitis™ AI DPU in the Vivado® Workflow | 2.0 | Build the Vitis AI Targeted Reference Design (TRD) using the Vivado flow and learn how to build a PetaLinux image from the ZCU102 BSP that is provided in the TRD archive. |
Quantization and Pruning of AlexNet CNN trained in Caffe with Cats-vs-Dogs dataset | 2.0 | Train, prune, and quantize a modified version of the AlexNet convolutional neural network (CNN) with the Kaggle Dogs vs. Cats dataset in order to deploy it on the Xilinx® ZCU102 board. |
Vitis AI on VCK5000 Card | 2.0 | Start from card installation and go through a step-by-step workflow to run the first Vitis AI sample on a VCK5000 card. |
VCK190 Custom Lambda Operator | 2.0 | The general concept behind the custom operator flow is to make Vitis AI and the DPU more extensible—both for supporting custom layers as well as framework layers that are currently unsupported in the toolchain. The custom operator flow enables you to define layers which are unsupported, and ultimately deploy those layers either on the CPU or an accelerator. |
LIDAR + Camera Fusion on KV260 | 2.0 | Shows you how to install Ubuntu on the KV260, then build ROS, bring in multiple sensors, and deploy an FPGA-accelerated neural network to process the data before displaying it using RViz. All of this is possible without ever using FPGA tools! |
Introduction to Vitis AI | 1.4 | This tutorial puts in practice the concepts of FPGA acceleration of Machine Learning and illustrates how to quickly get started deploying both pre-optimized and customized ML models on Xilinx devices. |
MNIST Classification using Vitis AI and TensorFlow | 1.4 | Learn the Vitis AI TensorFlow design process for creating a compiled ELF file that is ready for deployment on the Xilinx DPU accelerator from a simple network model built using Python. This tutorial uses the MNIST test dataset. |
Using DenseNetX on the Xilinx DPU Accelerator | 1.4 | Learn about the Vitis AI TensorFlow design process and how to go from a Python description of the network model to running a compiled model on the Xilinx DPU accelerator. |
Using DenseNetX on the Xilinx Alveo U50 Accelerator Card | 1.3 | Implement a convolutional neural network (CNN) and run it on the DPUv3E accelerator IP. |
Vitis AI YOLOv4 | 1.4 | Learn how to train, evaluate, convert, quantize, compile, and deploy YOLOv4 on Xilinx devices using Vitis AI. |
TensorFlow2 and Vitis AI design flow | 1.4 | Learn about the TF2 flow for Vitis AI, including converting a dataset into TFRecords, optimizing with a plug-in, and compiling and executing the model on a Xilinx ZCU102 board or Xilinx Alveo U50 Data Center Accelerator card. |
PyTorch flow for Vitis AI | 1.4 | Introduces the Vitis AI PyTorch design process and illustrates how to go from a Python description of the network model to running a compiled model on a Xilinx evaluation board. |
RF Modulation Recognition with TensorFlow 2 | 1.4 | Machine learning applications are certainly not limited to image processing! Learn how to apply machine learning with Vitis AI to the recognition of RF modulation from signal data. |
Denoising Variational Autoencoder with TensorFlow2 and Vitis-AI | 1.4 | The Xilinx DPU can accelerate the execution of many different types of operations and layers that are commonly found in convolutional neural networks, but occasionally we need to execute models that have fully custom layers. One such layer is the sampling function of a convolutional variational autoencoder. The DPU can accelerate the convolutional encoder and decoder but not the statistical sampling layer, which must be executed in software on a CPU (see the sketch after this table). This tutorial uses the variational autoencoder as an example of how to approach this situation. |
Alveo U250 TF2 Classification | 1.4 | Demonstrates image classification using the Alveo U250 card with Vitis AI 1.4 and the TensorFlow 2.x framework. |
Pre- and Post-processing PL Accelerators for ML with Versal DPU | 1.4 | A complete example of how to use the WAA flow with Vitis 2020.2, targeting the VCK190 PP board. |
Caffe SSD | 1.4 | The topics covered in this tutorial include training, quantizing, and compiling SSD using the PASCAL VOC 2007/2012 datasets, the Caffe framework, and Vitis AI tools. The model is then deployed on a Xilinx® ZCU102 target board and could also be deployed on other Xilinx development boards (for example, the Kria Starter Kit, ZCU104, and VCK190). |
ML Caffe Segmentation | 1.4 | Describes how to train, quantize, compile, and deploy various segmentation networks using Vitis AI, including ENet, ESPNet, FPN, UNet, and a reduced compute version of UNet that we'll call Unet-lite. The training dataset used for this tutorial is the Cityscapes dataset, and the Caffe framework is used for training the models. |
Introduction Tutorial to the Vitis AI Profiler | 1.4 | Introduces the Vitis AI Profiler tool flow and illustrates how to profile an example from the Vitis AI Runtime (VART). |
PyTorch CityScapes Pruning | 1.4 | A tutorial for using the Vitis AI Optimizer to prune the Vitis AI Model Zoo FPN ResNet18 segmentation model and a publicly available UNet model against a reduced-class version of the Cityscapes dataset. The tutorial aims to provide a starting point and demonstration of the PyTorch pruning capabilities for segmentation models. |
Fine-Tuning TensorFlow2 quantized model | 1.4 | Learn how to implement Vitis AI quantization fine-tuning for TensorFlow 2.3 (a minimal sketch follows the table below). |
Vitis AI based Deployment Flow on VCK190 | 1.4 | DPU integration with VCK190 production platform. |
TensorFlow AI Optimizer Example Using Low-level Coding Style | 1.4 | Use AI Optimizer for TensorFlow to prune an AlexNet CNN by 80% while maintaining the original accuracy. |
Freezing a Keras Model for use with Vitis AI (UG1380) | 1.3 | Freeze a Keras model by generating a binary protobuf (.pb) file (see the sketch after this table). |
Profiling a CNN Using DNNDK or VART with Vitis AI (UG1487) | 1.3 | Profile a CNN application running on the ZCU102 target board with Vitis AI. |
Moving Seamlessly between Edge and Cloud with Vitis AI (UG1488) | 1.3 | Compile and run the identical design and application code on either the Alveo U50 data center accelerator card or the Zynq UltraScale+™ MPSoC ZCU102 evaluation board. |
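
The "Denoising Variational Autoencoder" tutorial above notes that the statistical sampling layer of a variational autoencoder cannot run on the DPU. The sketch below shows what that CPU-side reparameterization step looks like; the `run_encoder_on_dpu`/`run_decoder_on_dpu` helpers are hypothetical placeholders for the DPU subgraph calls, not part of the tutorial's code.

```python
# Minimal sketch of the VAE reparameterization ("sampling") step that must run
# in software on the CPU; the convolutional encoder and decoder around it can
# still be compiled for the DPU.
import numpy as np

def sample_latent(z_mean, z_log_var, rng=None):
    """Draw z = mean + sigma * epsilon, with epsilon ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    epsilon = rng.standard_normal(z_mean.shape)
    return z_mean + np.exp(0.5 * z_log_var) * epsilon

# Typical placement between the two DPU subgraphs (helper names are hypothetical):
#   z_mean, z_log_var = run_encoder_on_dpu(image)   # accelerated on the DPU
#   z = sample_latent(z_mean, z_log_var)            # CPU-only custom layer
#   reconstruction = run_decoder_on_dpu(z)          # accelerated on the DPU
```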
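
For the "Fine-Tuning TensorFlow2 quantized model" tutorial, the sketch below shows the general shape of quantization-aware fine-tuning with the vai_q_tensorflow2 quantizer bundled in the Vitis AI TensorFlow 2 environment. The module path, the `VitisQuantizer`/`get_qat_model` calls, and the `float_model.h5` file name are assumptions; follow the tutorial for the exact API.

```python
# Sketch of quantization fine-tuning (quantization-aware training), assuming the
# vai_q_tensorflow2 quantizer shipped with the Vitis AI TF2 environment.
import tensorflow as tf
from tensorflow_model_optimization.quantization.keras import vitis_quantize  # assumed module path

float_model = tf.keras.models.load_model("float_model.h5")  # hypothetical trained float model

quantizer = vitis_quantize.VitisQuantizer(float_model)
qat_model = quantizer.get_qat_model()  # insert fake-quant nodes so training sees quantization effects

qat_model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# qat_model.fit(train_ds, validation_data=val_ds, epochs=5)  # brief fine-tuning pass
```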
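
The "Freezing a Keras Model" tutorial covers generating a binary protobuf from a trained Keras model for the Vitis AI TensorFlow 1.x flow. A minimal sketch using the TensorFlow compat APIs is below; the `float_model.h5` file name is a placeholder.

```python
# Minimal sketch: freeze a Keras model into a binary protobuf (.pb) using the
# TensorFlow 1.x-style compat APIs ("float_model.h5" is a placeholder).
import tensorflow as tf

tf.compat.v1.disable_eager_execution()        # freezing works on a TF1-style graph
K = tf.compat.v1.keras.backend
K.set_learning_phase(0)                       # inference mode: no dropout/BN updates

model = tf.keras.models.load_model("float_model.h5")
sess = K.get_session()

frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs])   # fold trained weights into constants

tf.io.write_graph(frozen_graph, ".", "frozen_graph.pb", as_text=False)  # binary .pb
```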
Copyright © 2022 Xilinx