
Keras-OneClassAnomalyDetection

Learning Deep Features for One-Class Classification (AnomalyDetection).
Compatible with RaspberryPi3.
Converts to Tensorflow, ONNX, Caffe, PyTorch, Tensorflow Lite.

[Jan 19, 2019] First release. Compatible with RaspberryPi3.
[Jan 20, 2019] I have started work to make it compatible with OpenVINO.
[Feb 15, 2019] Support for OpenVINO. [x86_64 only. CPU/GPU(Intel HD Graphics)]
[Feb 24, 2019] Support for Tensorflow Lite. [for RaspberryPi3]

Introduction

This repository was created inspired by Image abnormality detection using deep learning ーPapers and implementationー - Qiita - shinmura0 and Image inspection machine for people trying hard - Qiita - shinmura0.
I would like to express my deepest gratitude to shinmura0 for kindly permitting the use of his skill, ideas, and articles.
His articles are wonderful in that they are meant to be used in practice, not limited to theory alone.
However, I have neither the skills to read papers nor the skills to read mathematical expressions.
I only want to verify the effectiveness of his wonderful articles within a practical range.
To be honest, writing programs is not my profession.

Environment (example)

  1. Ubuntu 16.04 (GPU = Geforce GTX 1070)
  2. CUDA 9.0 + cuDNN 7.2
  3. LattePanda Alpha (GPU = Intel HD Graphics 615)
  4. RaspberryPi3 (CPU = Cortex-A53)
  5. Python 3.5
  6. Tensorflow-gpu 1.12.0 (pip install) or Tensorflow 1.11.0 (self-build wheel) or Tensorflow Lite 1.11.0 (self-build wheel)
  7. Keras 2.2.4
  8. PyTorch 1.0.0
  9. torchvision 0.2.1
  10. Caffe
  11. numpy 1.15.3
  12. matplotlib 3.0.1
  13. PIL 5.3.0
  14. OpenCV 4.0.1-openvino
  15. sklearn 0.20.0
  16. OpenVINO R5 2018.5.445

Translating shinmura0's article

0. Table of contents

  1. Introduction
  2. Overview
  3. Preparing data
  4. Preparing the model
  5. Learning phase
  6. Test phase
  7. Implementation by Keras
    7-1. Load data
    7-2. Data resizing
    7-3. Model building and learning
  8. Result
    8-1. Look at the distribution
    8-2. Abnormality detection performance
    8-3. Relationship between images and abnormal scores
  9. Visualization by Keras
    9-1. Grad-CAM
    9-2. Results
  10. Implementation by RaspberryPi
    10-1. Environment
    10-2. How to use
    10-3. Learning by original data set
    10-4. Acquisition of learning image data
    10-5. Resizing the image for learning
    10-6. Data Augmentation
    10-7. Generation of reference data
    10-8. Training
    10-9. Execution of inference with RaspberryPi
    10-10. Results
  11. Acceleration of LOF
  12. Structure of the model
  13. Model Convert
    13-1. MMdnn
    13-2. Keras -> Tensorflow -> OpenVINO
    13-3. Keras -> ONNX -> OpenVINO
    13-4. Keras -> Caffe -> OpenVINO
    13-5. Keras -> PyTorch
    13-6. Keras -> Tensorflow -> Tensorflow Lite
  14. Issue

1. Introduction

For image anomaly detection using deep learning, there are many methods, such as those in "Implemented ALOCC for detecting anomalies by deep learning (GAN) - Qiita - kzkadc" and "Detection of Video Anomalies Using Convolutional Autoencoders and One-Class Support Vector Machines (AutoEncoder)".
There is also an article on detecting image abnormalities using a "Variational Autoencoder":
Image abnormality detection using Variational Autoencoder (Variational Autoencoder) - Qiita - shinmura0

The method introduced this time detects anomalies by devising the loss function of an ordinary convolutional neural network (CNN).

「Learning Deep Features for One-Class Classification」 (hereafter abbreviated as DOC)
arxiv: https://arxiv.org/abs/1801.05365

[Image 01]

In conclusion, this method turned out to have good anomaly detection accuracy, and visualization of abnormal spots is also possible.

2. Overview

The paper states that it achieved state-of-the-art performance at the time of publication.
In the figure below, ordinary CNNs were trained under various conditions and the outputs of the convolution layers were visualized with t-SNE.

[Image 02]

  • Figure (b): features of a pretrained AlexNet, with normal and abnormal samples plotted
  • Figure (c): features after training Normal vs. Abnormal (binary classification)
  • Figure (e): the proposed method (DOC)

I am surely not the only one who thinks that anomalies could be detected in figure (b) as well.
However, it is somewhat inferior to figure (e).

In the paper, the "k-nearest neighbor method" is finally applied to the features in (e) to detect anomalies.
As a training method, the network is shown the images for which you want to detect anomalies together with completely different kinds of images, which narrows down the feature range of the target images.

3. Preparing data

For learning, prepare the following data.

| Dataset name | Contents | Concrete example | Number of classes |
| --- | --- | --- | --- |
| Target data | Images you want to detect anomalies in | Products, etc. | 1 |
| Reference data | A dataset unrelated to the above | ImageNet, CIFAR-10 | 10 or 1,000 or more |

4. Preparing the model

[Image 03]

  • The deep learning model g is prepared as a pretrained model.
  • In the paper, g uses AlexNet or VGG16. h has 1,000 nodes for ImageNet and 10 nodes for CIFAR-10.
  • During training, g and h are shared between the Reference Network (R) and the Secondary Network (S).
  • Also, during training, the weights are frozen except for the last four layers.

5. Learning phase

  • First, using the reference data, let R compute the loss function l_D.
  • Next, using the target data, let S compute the loss function l_C.
  • Finally, train R and S at the same time using both l_D and l_C.

The total loss is defined by the following formula:

L = l_D + λ · l_C

l_D is the cross entropy used in ordinary classification problems.
In the paper, λ = 0.1.

The most important loss, the compactness loss l_C, is calculated as follows.
Let n be the batch size and let x_i be the output (k-dimensional) of g for the i-th sample in the batch. Then define z_i as follows:

z_i = x_i − m_i,   where   m_i = (1 / (n − 1)) Σ_{j ≠ i} x_j

m_i is the average value of the outputs in the batch excluding x_i. At this time, l_C is defined as follows:

l_C = (1 / (n k)) Σ_{i=1}^{n} z_iᵀ z_i


<Annotation>
Intuitively, l_C can be regarded as something like the variance of the outputs within the batch (strictly speaking it is different).
When assembling code, it is troublesome to compute "the average value excluding x_i", so I used the following equivalent formula from the appendix of the paper:

l_C = (n² / (n k (n − 1)²)) Σ_{i=1}^{n} ‖x_i − m‖²

where m is the average value of the outputs within the batch.


At training time, the network is trained so that l_C, which behaves like the variance of the outputs, decreases together with the cross entropy l_D.
The learning rate seems to be 5 × 10⁻⁵, and the weight decay is set to 0.00005.
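
As a quick numerical check of the appendix formula above (a minimal NumPy sketch of my own, not part of the original article), the leave-one-out definition of l_C and the batch-mean form can be verified to agree:

import numpy as np

n, k = 32, 128                     # batch size, feature dimension (example values)
x = np.random.randn(n, k)          # stand-in for a batch of g outputs

# Definition: z_i = x_i - m_i, with m_i the mean of the other n-1 samples
lc_def = 0.0
for i in range(n):
    m_i = (x.sum(axis=0) - x[i]) / (n - 1)
    z_i = x[i] - m_i
    lc_def += z_i @ z_i
lc_def /= n * k

# Appendix form: only the ordinary batch mean m is needed
m = x.mean(axis=0)
lc_app = n**2 / (n * k * (n - 1)**2) * np.sum((x - m)**2)

print(np.isclose(lc_def, lc_app))  # True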

6. Test phase

[Image 04]

  • Remove h from the model.
  • First, feed the training images of the target data into g and obtain their feature distribution.
  • Next, feed the image you want to test into g and obtain its features.
  • Finally, anomalies are detected by applying the k-nearest neighbor method to the "distribution of the training images" and the "features of the test image".
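
The implementation in this repository uses LOF instead of the k-nearest neighbor method (see 8-2). A minimal sketch of this test phase with scikit-learn, assuming train_feats and test_feats are hypothetical arrays holding the g outputs:

from sklearn.neighbors import LocalOutlierFactor

# train_feats: (N, k) g outputs of the target training images
# test_feats:  (M, k) g outputs of the images to be tested
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)  # novelty mode needs scikit-learn >= 0.20
lof.fit(train_feats)

# score_samples is large for inliers; negate it to obtain an anomaly score
anomaly_score = -lof.score_samples(test_feats)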

7. Implementation by Keras

The pretrained model used is the lightweight MobileNetV2.

7-1. Load data

The data used this time is Fashion-MNIST.
I distributed the data as follows.

| | Number of data | Number of classes | Remarks |
| --- | --- | --- | --- |
| Reference data | 6,000 | 8 | Excluding sneakers and boots |
| Target data | 6,000 | 1 | Sneakers |
| Test data (Normal) | 1,000 | 1 | Sneakers |
| Test data (Abnormal) | 1,000 | 1 | Boots |

Logic 1 : data_loader.py
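
A minimal sketch of such a split (my own illustration, not the repository's data_loader.py; in Fashion-MNIST, label 7 is "Sneaker" and label 9 is "Ankle boot"):

from keras.datasets import fashion_mnist

(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

# Reference data: the 8 classes other than sneakers (7) and boots (9)
ref_mask = (y_train != 7) & (y_train != 9)
x_ref, y_ref = x_train[ref_mask][:6000], y_train[ref_mask][:6000]

# Target data: sneakers only
x_target = x_train[y_train == 7][:6000]

# Test data: normal = sneakers, abnormal = boots
x_test_normal = x_test[y_test == 7][:1000]
x_test_abnormal = x_test[y_test == 9][:1000]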

7-2. Data resizing

In MobileNetV2, the minimum input size is 96 × 96.
Therefore, Fashion-MNIST (28 × 28) cannot be used as it is.
So I will resize the data.

Logic 2 : data_resizer.py

The figure is as follows.
[Image 05]
The left figure is the original data (28 × 28); the right figure is after resizing (96 × 96).

7-3. Model building and learning

During training, the weights are frozen except for the latter half of the convolution layers.
I will explain part of the code here.
With Keras, building the model was easy, but building the following loss function was extremely difficult.

def original_loss(y_true, y_pred):
    # Compactness loss l_C (appendix form): scaled squared distance of each
    # feature vector from the batch mean of the features
    lc = 1/(classes*batchsize) * batchsize**2 * K.sum((y_pred -K.mean(y_pred,axis=0))**2,axis=[1]) / ((batchsize-1)**2)
    return lc

The part that requires care is the following.

#target data
#Get loss while learning
lc.append(model_t.train_on_batch(batch_target, np.zeros((batchsize, feature_out))))
            
#reference data
#Get loss while learning
ld.append(model_r.train_on_batch(batch_ref, batch_y))

model_t.train_on_batch is given a dummy zero matrix, np.zeros((batchsize, feature_out)), because the compactness loss ignores the teacher data.

In addition, because it was very difficult to make Keras learn with l_D and l_C simultaneously, I alternated the training steps between the two losses, one batch at a time.
Custom loss functions and simultaneous learning may be easier to handle with PyTorch.

Logic 3 : train.py
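
A rough sketch of this training setup (my own reconstruction from the description above, not the repository's train.py; layer counts and hyperparameters are assumptions):

from keras.applications.mobilenet_v2 import MobileNetV2
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import SGD

batchsize = 128      # assumed value
classes = 8          # number of reference-data classes
feature_out = 1280   # dimension of the global-average-pooled features

# g: MobileNetV2 feature extractor with global average pooling
g = MobileNetV2(include_top=False, input_shape=(96, 96, 3),
                weights='imagenet', pooling='avg')

# Freeze the weights except for the last few layers
for layer in g.layers[:-4]:
    layer.trainable = False

# S: trained on the target data with the compactness loss
model_t = Model(inputs=g.input, outputs=g.output)
model_t.compile(optimizer=SGD(lr=5e-5, decay=5e-5), loss=original_loss)

# R: g plus the classifier head h, trained on the reference data with cross entropy
prediction = Dense(classes, activation='softmax')(g.output)
model_r = Model(inputs=g.input, outputs=prediction)
model_r.compile(optimizer=SGD(lr=5e-5, decay=5e-5),
                loss='categorical_crossentropy')

# model_t and model_r share g's layers, so the alternating train_on_batch
# calls shown above update the same weights with both losses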

8. Result

8-1. Look at the distribution

Before looking at the anomaly detection performance, let's visualize the distribution with t-SNE.
The figure below visualizes the raw images of the test data.
[Image 06]
Even using the input data as it is, sneakers and boots are separated considerably.
However, some seem to be mixed.
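
A minimal sketch of such a visualization (my own illustration; feats is a hypothetical (N, d) array of flattened raw images or g outputs, labels marks sneakers vs. boots):

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# feats: (N, d) features; labels: 0 = sneakers, 1 = boots
emb = TSNE(n_components=2, random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5)
plt.show()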

Next, the figure below visualizes, again with t-SNE, the test-data output (1280 dimensions) of the CNN (MobileNetV2) trained with DOC.
[Image 07]
It is well separated, as in the previous figure.
What I want to emphasize here is that the CNN was trained only on images of sneakers (normal items).
Nonetheless, it is surprising that sneakers and boots are well separated. This is exactly anomaly detection.
It succeeded thanks to transfer learning: the pretrained CNN that DOC starts from had already learned where to look in an image.

Here is the transition of the loss functions during training.
[Image 08]
[Image 09]

8-2. Abnormality detection performance

Next, let's detect anomalies with the output of g. The paper uses the k-nearest neighbor method, but this implementation uses LOF (Local Outlier Factor).

Logic 4 : calc_ROC.py
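
A minimal sketch of the ROC computation (my own illustration, not the repository's calc_ROC.py), using the anomaly_score from the LOF sketch in section 6, with boots labeled as positive:

import numpy as np
from sklearn.metrics import roc_curve, auc

# 1,000 normal samples (sneakers, label 0) followed by 1,000 abnormal (boots, label 1)
y_true = np.concatenate([np.zeros(1000), np.ones(1000)])

fpr, tpr, _ = roc_curve(y_true, anomaly_score)
print("AUC =", auc(fpr, tpr))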

The ROC curve is as follows.
[Image 10]

The AUC is a surprising 0.90.
Incidentally, the overall accuracy is about 83%.
Compared with previous results, it is as follows.

*VAE = Variational Autoencoder
*Inference speed is measured on Google Colaboratory's GPU
*Visualization of DOC is explained in the next section

| | Performance (AUC) | Inference speed (millisec/1 image) | Visualization accuracy |
| --- | --- | --- | --- |
| VAE (Small window) | 0.58 | 0.80 | × |
| VAE + Irregularization (Small window) | 0.67 | 4.3 | |
| DOC (MobileNetV2) | 0.90 | 140 | |

DOC beat VAE in performance, but its inference is slow because it uses LOF.
Incidentally, DOC + VGG16 took 370 millisec per image; MobileNetV2 is fast.
Also, "VAE + irregularization", whose accuracy is inferior here, was devised for complex images such as screw threads.
So, for complex images, the performance may be "VAE + irregularization > DOC".

8-3. Relationship between images and abnormal scores

Next, let's look at the relationship between the boot (abnormal item) images and their anomaly scores.
The larger the anomaly score, the less the image resembles sneakers (normal items).

First, here are boot images with large anomaly scores, i.e., judged not to resemble sneakers at all.
[Image 11]
Indeed, they do not look like sneakers at all.

Next are boot images with small anomaly scores, i.e., boots judged to be very similar to sneakers.
[Image 12]
They look like high-cut sneakers overall.
Even humans might misjudge these.
Intuitively, the larger the anomaly score given by DOC, the more the image deviates from normal products.

9. Visualization by Keras

I also tried visualization with Grad-CAM.
Visualizing where the abnormality is located is also important.

9-1. Grad-CAM

Grad-CAM is often used for CNN classification problems.
When used in a classification problem, it highlights the regions that formed the basis of the classification.
For details, please see the following articles:
Visualization of deep learning gaze region - Qiita - bele_m
With Keras, Grad-CAM, a model I made by myself - Qiita - haru1977

This time, I tried applying Grad-CAM directly to DOC.
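
A minimal sketch of one way to wire Grad-CAM to DOC's feature extractor (my own reconstruction: the article does not state the exact target scalar, so the squared norm of the embedding g(x) is used here, and the MobileNetV2 layer name 'Conv_1' is an assumption):

import numpy as np
import cv2
from keras import backend as K

def grad_cam_doc(model, img, last_conv_name='Conv_1'):
    # Target scalar to explain: squared L2 norm of the embedding g(x)
    # (a hypothetical choice; any anomaly-related scalar could be used)
    score = K.sum(K.square(model.output))
    conv_out = model.get_layer(last_conv_name).output
    grads = K.gradients(score, conv_out)[0]
    pooled_grads = K.mean(grads, axis=(0, 1, 2))      # per-channel weights
    fetch = K.function([model.input], [conv_out[0], pooled_grads])
    conv_val, weights = fetch([img[np.newaxis]])
    cam = np.maximum(conv_val @ weights, 0)           # weighted sum + ReLU
    cam = cv2.resize(cam, (img.shape[1], img.shape[0]))
    return cam / (cam.max() + 1e-8)                   # normalize to [0, 1]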

9-2. Results

First, I applied Grad-CAM to boot images with large anomaly scores.
[Image 13]
The heel and the vertical shaft of the boots are red: the parts that do not resemble sneakers at all are visualized.
Next, Grad-CAM on boots with small anomaly scores.
[Image 14]
The high-cut part is red: again, the parts that do not resemble sneakers are visualized.
Overall, the visualization succeeds only about half the time; there seems to be room for improvement, such as adding a fully connected layer.
Another problem is that processing takes about 5 seconds per image (Colaboratory GPU), so it cannot be used in real time.
The scenes where it is used need to be chosen carefully.

10. Implementation by RaspberryPi

  • Cost is $100 or less (conventional products cost over $9,000)
  • Detection accuracy is at the highest level (state-of-the-art at the time of publication)
  • Compact (RaspberryPi and a Web camera only)
  • Fast despite running deep learning on a RaspberryPi (5 FPS)
  • Application areas
    • Visual inspection of industrial products
    • Appearance inspection of bridges by drone
    • Surveillance cameras

[Image 001]

10-1. Environment

  1. Numpy 1.15.4
  2. scikit-learn 0.19.2+
  3. Keras 2.2.4
  4. OpenCV 4.0.1-openvino

10-2. How to use

If you want to try it immediately, please try Face Detection.

1. Execute below.

$ sudo apt-get install -y python-pip python3-pip python3-scipy libhdf5-dev libatlas-base-dev
$ sudo -H pip3 install protobuf==3.6.0
$ sudo pip3 uninstall tensorflow
$ wget -O tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl https://github.com/PINTO0309/Tensorflow-bin/raw/master/tensorflow-1.11.0-cp35-cp35m-linux_armv7l_jemalloc.whl
$ sudo pip3 install tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl
$ rm tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl
$ sudo -H pip3 install scikit-learn==0.20.2
$ git clone --recursive https://github.com/PINTO0309/Keras-OneClassAnomalyDetection.git
$ cd Keras-OneClassAnomalyDetection
$ git submodule init
$ git submodule update

2. Connect the USB camera to the RaspberryPi.
3. Execute below.

$ cd OneClassAnomalyDetection-RaspberryPi3/DOC
$ python3 main.py

4. When the real-time image from the USB camera is displayed, press the s key.
5. When the score is displayed, the abnormality test starts.

[Note]

  • Since the abnormality test fully loads the RaspberryPi, it will freeze from thermal runaway after about 5 minutes.
  • When operating for a long time, please keep it cooled while running.
  • If a human face appears, the score falls. (Normal)
  • If no human face appears, the score rises. (Abnormal)
  • Training was performed with CelebA.

<Test of Corei7 CPU Only 320x240 / 11 FPS>
[Image 15]
Youtube : https://youtu.be/p8BDwhF7y8w

<Test of Core m3 CPU Only + OpenVINO + 320x240 / 180 FPS>
[Image 31]

<Test of Core m3 CPU Only + OpenVINO + 640x480 / 70 FPS>
[Image 32]

<Test of Intel HD Graphics 615 + OpenVINO + 320x240 / 130 FPS>
[Image 33]

<Test of Intel HD Graphics 615 + OpenVINO + 640x480 / 70 FPS>
[Image 34]

10-3. Learning by original data set

For those who want to train models themselves, the technical contents are described below.
The overall flow is as shown in the figure below.
[Image 16]
Since the computational load is not high, it is possible to complete all processes on a RaspberryPi.
However, downloading CIFAR10 requires a terminal connected to the network.

10-4. Acquisition of learning image data

First, take pictures for training.
Please take the pictures following the notes below.

  1. Delete the model folder from the downloaded DOC folder (booting will be faster).
  2. Connect the USB camera and execute DOC/main.py.
  3. When the image from the USB camera is displayed, press the p key to take a picture.
  4. Taking about 10 images is enough to obtain sufficient accuracy.

10-5. Resizing the image for learning

After taking the pictures, upload the DOC/pictures folder to Google Drive.
From here on, you will process them on Google Colaboratory.
For how to mount Google Drive on Google Colaboratory, please refer to The story that it was easy to mount on Google Drive at Colaboratory - Qiita - k_uekado.

In order to train on MobileNetV2, resize the images with the following code.

import cv2
import numpy as np
import os
from PIL import Image
from keras.preprocessing import image
from keras.preprocessing.image import array_to_img

img_path = 'pictures/'
NO = 1

def resize(x):
    x_out = []

    for i in range(len(x)):
        img = cv2.resize(x[i],dsize=(96,96))
        x_out.append(img)

    return np.array(x_out)

x = []

while True:
    if not os.path.exists(img_path + str(NO) + ".jpg"):
        break
    img = Image.open(img_path + str(NO) + ".jpg")
    img = image.img_to_array(img)
    x.append(img)
    NO += 1

x_train = resize(x)

The left figure shows the image before resizing; the right figure, after resizing.
[Image 17]

10-6. Data Augmentation

The number of each kind of data is as follows.

| Contents | Number | Number of classes | Note |
| --- | --- | --- | --- |
| Reference data | 6,000 | 10 | CIFAR10 |
| Target data | 6,000 | 1 | Images of a nut |

Inflate the captured images (target data) with Data Augmentation.

from keras.preprocessing.image import ImageDataGenerator

X_train = []
aug_num = 6000 # Number of DataAug
NO = 1

datagen = ImageDataGenerator(
           rotation_range=10,
           width_shift_range=0.2,
           height_shift_range=0.2,
           fill_mode="constant",
           cval=180,
           horizontal_flip=True,
           vertical_flip=True)

for d in datagen.flow(x_train, batch_size=1):
    X_train.append(d[0])
    # Because datagen.flow loops infinitely,
    # it gets out of the loop if it gets the required number of sheets.
    if (NO % aug_num) == 0:
        print("finish")
        break
    NO += 1

X_train = np.array(X_train)
X_train /= 255

Data Augmentation produces images like the following.
[Image 18]
The points are as follows.
When translating the image, I specified the color used to fill in the blank areas.
Adjust this as necessary.

fill_mode="constant",
cval=180,

This time I want to detect color differences, so I am using regular Data Augmentation.
However, if you want to detect only the shape of the object, PCA Data Augmentation may work better.

10-7. Generation of reference data

Since color images are used this time, CIFAR10 images are used as the reference data.

from keras.datasets import cifar10
from keras.utils import to_categorical

# dataset
(x_ref, y_ref), (x_test, y_test) = cifar10.load_data()
x_ref = x_ref.astype('float32') / 255

#6,000 randomly extracted from ref data
number = np.random.choice(np.arange(0,x_ref.shape[0]),6000,replace=False)

x, y = [], []

x_ref_shape = x_ref.shape

for i in number:
    temp = x_ref[i]
    x.append(temp.reshape((x_ref_shape[1:])))
    y.append(y_ref[i])

x_ref = np.array(x)
y_ref = to_categorical(y)

X_ref = resize(x_ref)

It will be resized as follows.
The left figure shows before resizing; the right figure, after resizing.
[Image 19]

10-8. Training

Train according to 7-3. Model building and learning.
Then save the model with the code below.
A "model" folder is created under the execution path.

train_num = 1000  # number of training data

model_path = "model/"
if not os.path.exists(model_path):
    os.mkdir(model_path)

# Feature vectors of the training data, saved for LOF at inference time
train = model.predict(X_train)

# model save
model_json = model.to_json()
open(model_path + 'model.json', 'w').write(model_json)
model.save_weights(model_path + 'weights.h5')
np.savetxt(model_path + "train.csv", train[:train_num], delimiter=",")

The "model" folder contains three files "model.json", "weights.h5" and "train.csv".
train_num = 1000 # number of training data is an acceleration parameter of LOF.

10-9. Execution of inference with RaspberryPi

  1. Copy the "model" folder directly under the "DOC" folder on the RaspberryPi.
  2. Execute main.py.
$ cd OneClassAnomalyDetection-RaspberryPi3/DOC
$ python3 main.py

The threshold defined in main.py determines when the score is displayed in red.
The anomaly score is a moving average over the last 10 inferences.
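
A minimal sketch of this smoothing (my own illustration; threshold is the value defined in main.py, and lof is the fitted detector from earlier):

from collections import deque
import numpy as np

history = deque(maxlen=10)      # raw anomaly scores of the last 10 inferences

def smoothed_score(frame_feature, lof, threshold):
    # frame_feature: (1, k) g output of the current camera frame
    history.append(-lof.score_samples(frame_feature)[0])
    score = np.mean(history)
    in_red = score > threshold  # displayed in red when the threshold is exceeded
    return score, in_red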

10-10. Results

The results of verification with nut images are shown below.
Pictures of normal products are as follows.
[Image 20]

Normal product A (score 1.3) → Abnormal product (rust, score 1.6)
[Image 21]

Normal product A (score 1.4) → Abnormal product (size difference, score 2 or more)
[Image 22]

Moving normal product A (the score is barely affected by differences in position)
[Image 23]

Normal product B (score 1.2) → Normal product A (score 1.3)
[Image 24]

11. Acceleration of LOF

Let me write about speeding up LOF.
Unlike neural networks, LOF's inference time changes greatly with the number of training samples.
The figure below shows the relationship between the number of LOF training samples and inference time, using the sneaker data.
[Image 25]
Since LOF, like the k-nearest neighbor method, stores the training data and detects anomalies against it, inference time grows rapidly as the number of training samples increases.
What deserves attention here is that the AUC saturates at around 1,000 training samples.
In other words, with more than 1,000 training samples, performance is not affected; only the inference time increases.
So this time I build LOF with 1,000 training samples.
As a result, performance and inference speed are balanced well, cutting more than 180 msec.
The resulting inference time of DOC + LOF is about 200 msec (5 FPS) on the RaspberryPi alone.
With a GPU, it might run at 20 msec (50 FPS).
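
A minimal sketch of this subsampling (my own illustration; train_feats stands for all stored feature vectors):

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

train_num = 1000   # the saturation point observed above

# Fit LOF on a random subset of the stored features instead of all of them
idx = np.random.choice(len(train_feats), train_num, replace=False)
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(train_feats[idx])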

12. Structure of the model

  1. Execute below.
$ sudo -H pip3 install netron
$ netron -b [MODEL_FILE]
  2. Access http://localhost:8080 from the browser.

13. Model Convert

13-1. MMdnn

$ sudo -H pip3 install -U git+https://github.com/Microsoft/MMdnn.git@master
$ sudo -H pip3 install onnx-tf
$ mmconvert -h
usage: mmconvert [-h]
                 [--srcFramework {caffe,caffe2,cntk,mxnet,keras,tensorflow,tf,pytorch}]
                 [--inputWeight INPUTWEIGHT] [--inputNetwork INPUTNETWORK]
                 --dstFramework
                 {caffe,caffe2,cntk,mxnet,keras,tensorflow,coreml,pytorch,onnx}
                 --outputModel OUTPUTMODEL [--dump_tag {SERVING,TRAINING}]

optional arguments:
  -h, --help            show this help message and exit
  --srcFramework {caffe,caffe2,cntk,mxnet,keras,tensorflow,tf,pytorch}, -sf {caffe,caffe2,cntk,mxnet,keras,tensorflow,tf,pytorch}
                        Source toolkit name of the model to be converted.
  --inputWeight INPUTWEIGHT, -iw INPUTWEIGHT
                        Path to the model weights file of the external tool
                        (e.g caffe weights proto binary, keras h5 binary
  --inputNetwork INPUTNETWORK, -in INPUTNETWORK
                        Path to the model network file of the external tool
                        (e.g caffe prototxt, keras json
  --dstFramework {caffe,caffe2,cntk,mxnet,keras,tensorflow,coreml,pytorch,onnx}, -df {caffe,caffe2,cntk,mxnet,keras,tensorflow,coreml,pytorch,onnx}
                        Format of model at srcModelPath (default is to auto-
                        detect).
  --outputModel OUTPUTMODEL, -om OUTPUTMODEL
                        Path to save the destination model
  --dump_tag {SERVING,TRAINING}
                        Tensorflow model dump type

13-2. Keras -> Tensorflow -> OpenVINO

$ python3 keras2tensorflow/keras_to_tensorflow.py \
--input_model="OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5" \
--input_model_json="OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json" \
--output_model="models/tensorflow/weights.pb"
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model models/tensorflow/weights.pb \
--output_dir irmodels/tensorflow/FP16 \
--input input_1 \
--output global_average_pooling2d_1/Mean \
--data_type FP16 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	input_1
	- Output layers: 	global_average_pooling2d_1/Mean
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16/weights.xml
[ SUCCESS ] BIN file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16/weights.bin
[ SUCCESS ] Total execution time: 5.31 seconds. 
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model models/tensorflow/weights.pb \
--output_dir irmodels/tensorflow/FP32 \
--input input_1 \
--output global_average_pooling2d_1/Mean \
--data_type FP32 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP32
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	input_1
	- Output layers: 	global_average_pooling2d_1/Mean
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP32/weights.xml
[ SUCCESS ] BIN file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP32/weights.bin
[ SUCCESS ] Total execution time: 5.59 seconds. 

13-3. Keras -> ONNX -> OpenVINO

$ mmconvert \
-sf keras \
-iw OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5 \
-in OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json \
-df onnx \
-om models/onnx/weights.onnx
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py \
--framework onnx \
--input_model models/onnx/weights.onnx \
--output_dir irmodels/onnx/FP16 \
--data_type FP16 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/onnx/weights.onnx
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/onnx/FP16
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
ONNX specific parameters:
Model Optimizer version: 	1.5.12.49d067a0
[ ERROR ]  Cannot infer shapes or values for node "Conv1_relu".
[ ERROR ]  There is no registered "infer" function for node "Conv1_relu" with op = "Clip". Please implement this function in the extensions. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #37. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "Conv1_relu" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 

13-4. Keras -> Caffe -> OpenVINO

$ mmconvert \
-sf keras \
-iw OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5 \
-in OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json \
-df caffe \
-om models/caffe/weights
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py \
--framework caffe \
--input_model models/caffe/weights.caffemodel \
--input_proto models/caffe/weights.prototxt \
--output_dir irmodels/caffe/FP16 \
--data_type FP16 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/caffe/weights.caffemodel
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/caffe/FP16
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
Caffe specific parameters:
	- Enable resnet optimization: 	True
	- Path to the Input prototxt: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/caffe/weights.prototxt
	- Path to CustomLayersMapping.xml: 	Default
	- Path to a mean file: 	Not specified
	- Offsets for a mean file: 	Not specified
Model Optimizer version: 	1.5.12.49d067a0
[ ERROR ]  Unexpected exception happened during extracting attributes for node block_14_depthwise_BN.
Original exception message: Found custom layer "DummyData2". Model Optimizer does not support this layer. Please, implement extension. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #45.

13-5. Keras -> PyTorch

$ mmconvert \
-sf keras \
-iw OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5 \
-in OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json \
-df pytorch \
-om models/pytorch/weights.pth

13-6. Keras -> Tensorflow -> Tensorflow Lite

$ python3 keras2tensorflow/keras_to_tensorflow.py \
--input_model="OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5" \
--input_model_json="OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json" \
--output_model="models/tensorflow/weights.pb"
$ sudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev
$ sudo pip3 install keras_applications==1.0.7 --no-deps
$ sudo pip3 install keras_preprocessing==1.0.9 --no-deps
$ sudo pip3 install h5py==2.9.0
$ sudo apt-get install -y openmpi-bin libopenmpi-dev
$ sudo pip3 uninstall tensorflow
$ wget -O tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl https://github.com/PINTO0309/Tensorflow-bin/raw/master/tensorflow-1.11.0-cp35-cp35m-linux_armv7l_jemalloc_multithread.whl
$ sudo pip3 install tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl
$ cd ~
$ wget https://github.com/PINTO0309/Bazel_bin/blob/master/0.17.2/Raspbian_armhf/install.sh
$ sudo chmod +x install.sh
$ ./install.sh
$ git clone -b v1.11.0 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout v1.11.0
$ ./tensorflow/contrib/lite/tools/make/download_dependencies.sh
$ sudo bazel build tensorflow/contrib/lite/toco:toco
$ cd ~/tensorflow
$ mkdir output
$ cp ~/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb . #<-- "." required
$ sudo bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_file=weights.pb  \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--output_file=output/weights.tflite \
--input_shapes=1,96,96,3 \
--inference_type=FLOAT \
--input_type=FLOAT \
--input_arrays=input_1 \
--output_arrays=global_average_pooling2d_1/Mean \
--post_training_quantize

14. Issue

https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer#FAQ76
https://software.intel.com/en-us/forums/computer-vision/topic/802689

OpenVINO Official FAQ
