  • Stars: 239
  • Rank: 168,763 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: almost 4 years ago
  • Updated: about 2 years ago


tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation. NVIDIA GPU (dGPU) support. Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8 quantization model.

Special custom TensorFlow binaries and special custom TensorFlow Lite binaries are used.


1. Supported Layers

No. TFLite Layer TF Layer Remarks
1 CONV_2D tf.nn.conv2d
2 DEPTHWISE_CONV_2D tf.nn.depthwise_conv2d
3 MAX_POOL_2D tf.nn.max_pool
4 PAD tf.pad
5 MIRROR_PAD tf.raw_ops.MirrorPad
6 RELU tf.nn.relu
7 PRELU tf.keras.layers.PReLU
8 RELU6 tf.nn.relu6
9 RESHAPE tf.reshape
10 ADD tf.add
11 SUB tf.math.subtract
12 CONCATENATION tf.concat
13 LOGISTIC tf.math.sigmoid
14 TRANSPOSE_CONV tf.nn.conv2d_transpose
15 MUL tf.multiply
16 HARD_SWISH x*tf.nn.relu6(x+3)*0.16666667 Or x*tf.nn.relu6(x+3)*0.16666666
17 AVERAGE_POOL_2D tf.keras.layers.AveragePooling2D
18 FULLY_CONNECTED tf.keras.layers.Dense
19 RESIZE_BILINEAR tf.image.resize Or tf.image.resize_bilinear The behavior differs depending on the optimization options of openvino and edgetpu.
20 RESIZE_NEAREST_NEIGHBOR tf.image.resize Or tf.image.resize_nearest_neighbor The behavior differs depending on the optimization options of openvino and edgetpu.
21 MEAN tf.math.reduce_mean
22 SQUARED_DIFFERENCE tf.math.squared_difference
23 RSQRT tf.math.rsqrt
24 DEQUANTIZE (const)
25 FLOOR tf.math.floor
26 TANH tf.math.tanh
27 DIV tf.math.divide
28 FLOOR_DIV tf.math.floordiv
29 SUM tf.math.reduce_sum
30 POW tf.math.pow
31 SPLIT tf.split
32 SOFTMAX tf.nn.softmax
33 STRIDED_SLICE tf.strided_slice
34 TRANSPOSE tf.transpose
35 SPACE_TO_DEPTH tf.nn.space_to_depth
36 DEPTH_TO_SPACE tf.nn.depth_to_space
37 REDUCE_MAX tf.math.reduce_max
38 Convolution2DTransposeBias tf.nn.conv2d_transpose, tf.math.add CUSTOM, MediaPipe
39 LEAKY_RELU tf.keras.layers.LeakyReLU
40 MAXIMUM tf.math.maximum
41 MINIMUM tf.math.minimum
42 MaxPoolingWithArgmax2D tf.raw_ops.MaxPoolWithArgmax CUSTOM, MediaPipe
43 MaxUnpooling2D tf.cast, tf.shape, tf.math.floordiv, tf.math.floormod, tf.ones_like, tf.shape, tf.concat, tf.reshape, tf.transpose, tf.scatter_nd CUSTOM, MediaPipe
44 GATHER tf.gather
45 CAST tf.cast
46 SLICE tf.slice
47 PACK tf.stack
48 UNPACK tf.unstack
49 ARG_MAX tf.math.argmax Or tf.math.reduce_max, tf.subtract, tf.math.minimum, tf.multiply The behavior differs depending on the optimization options of edgetpu.
50 EXP tf.exp
51 TOPK_V2 tf.math.top_k
52 LOG_SOFTMAX tf.nn.log_softmax
53 L2_NORMALIZATION tf.math.l2_normalize
54 LESS tf.math.less
55 LESS_EQUAL tf.math.less_equal
56 GREATER tf.math.greater
57 GREATER_EQUAL tf.math.greater_equal
58 NEG tf.math.negative
59 WHERE tf.where
60 SELECT tf.where
61 SELECT_V2 tf.where
62 PADV2 tf.raw_ops.PadV2
63 SIN tf.math.sin
64 TILE tf.tile
65 EQUAL tf.math.equal
66 NOT_EQUAL tf.math.not_equal
67 LOG tf.math.log
68 SQRT tf.math.sqrt
69 ARG_MIN tf.math.argmin or tf.math.negative,tf.math.argmax
70 REDUCE_PROD tf.math.reduce_prod
71 LOGICAL_OR tf.math.logical_or
72 LOGICAL_AND tf.math.logical_and
73 LOGICAL_NOT tf.math.logical_not
74 REDUCE_MIN tf.math.reduce_min or tf.math.negative,tf.math.reduce_max
75 REDUCE_ANY tf.math.reduce_any
76 SQUARE tf.math.square
77 ZEROS_LIKE tf.zeros_like
78 FILL tf.fill
79 FLOOR_MOD tf.math.floormod
80 RANGE tf.range
81 ABS tf.math.abs
82 UNIQUE tf.unique
83 CEIL tf.math.ceil
84 REVERSE_V2 tf.reverse
85 ADD_N tf.math.add_n
86 GATHER_ND tf.gather_nd
87 COS tf.math.cos
88 RANK tf.math.rank
89 ELU tf.nn.elu
90 WHILE tf.while_loop
91 REVERSE_SEQUENCE tf.reverse_sequence
92 MATRIX_DIAG tf.linalg.diag
93 ROUND tf.math.round
94 NON_MAX_SUPPRESSION_V4 tf.raw_ops.NonMaxSuppressionV4
95 NON_MAX_SUPPRESSION_V5 tf.raw_ops.NonMaxSuppressionV5, tf.raw_ops.NonMaxSuppressionV4, tf.raw_ops.NonMaxSuppressionV3
96 SCATTER_ND tf.scatter_nd
97 SEGMENT_SUM tf.math.segment_sum
98 CUMSUM tf.math.cumsum
99 BROADCAST_TO tf.broadcast_to
100 RFFT2D tf.signal.rfft2d
101 L2_POOL_2D tf.square, tf.keras.layers.AveragePooling2D, tf.sqrt
102 LOCAL_RESPONSE_NORMALIZATION tf.nn.local_response_normalization
103 RELU_N1_TO_1 tf.minimum, tf.maximum
104 SPLIT_V tf.raw_ops.SplitV
105 MATRIX_SET_DIAG tf.linalg.set_diag
106 SHAPE tf.shape
107 EXPAND_DIMS tf.expand_dims
108 SQUEEZE tf.squeeze
109 FlexRFFT tf.signal.rfft Flex OP
110 FlexImag tf.math.imag Flex OP
111 FlexReal tf.math.real Flex OP
112 FlexRFFT2D tf.signal.rfft2d Flex OP
113 FlexComplexAbs tf.raw_ops.ComplexAbs Flex OP
114 IMAG tf.math.imag
115 REAL tf.math.real
116 COMPLEX_ABS tf.raw_ops.ComplexAbs
117 TFLite_Detection_PostProcess tf.divide, tf.strided_slice, tf.math.argmax, tf.math.reduce_max, tf.math.multiply, tf.math.add, tf.math.exp, tf.math.subtract, tf.expand_dims, tf.gather, tf.reshape, tf.identity, tf.raw_ops.NonMaxSuppressionV5 CUSTOM
118 ONE_HOT tf.one_hot
119 FlexMultinomial tf.random.categorical Flex OP
120 FlexAll tf.math.reduce_all Flex OP
121 FlexErf tf.math.erf Flex OP
122 FlexRoll tf.roll Flex OP
123 CONV_3D tf.keras.layers.Conv3D
124 CONV_3D_TRANSPOSE tf.nn.conv3d_transpose
125 Densify (const)
126 SPACE_TO_BATCH_ND tf.space_to_batch_nd
127 BATCH_TO_SPACE_ND tf.compat.v1.batch_to_space_nd
128 TransformLandmarks tf.reshape, tf.linalg.matmul, tf.math.add CUSTOM, MediaPipe
129 TransformTensorBilinear tf.reshape, tf.linalg.matmul, tf.math.add, tf.tile, tf.math.floor, tf.math.subtract, tf.math.multiply, tf.math.reduce_prod, tf.cast, tf.math.maximum, tf.math.maximum, tf.concat, tf.gather_nd CUSTOM, MediaPipe
130 Landmarks2TransformMatrix tf.constant, tf.math.subtract, tf.math.norm, tf.math.divide, tf.linalg.matmul, tf.concat, tf.transpose, tf.gather, tf.math.reduce_min, tf.math.reduce_max, tf.math.multiply, tf.zeros, tf.math.add, tf.tile CUSTOM, MediaPipe
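
As an illustration of how a single TFLite layer is expanded into primitive TF operations, the sketch below reproduces the HARD_SWISH decomposition shown in row 16 of the table. This is a minimal standalone example, not the tool's internal code.

import numpy as np
import tensorflow as tf

def hard_swish(x):
    # Row 16: HARD_SWISH is rebuilt as x * relu6(x + 3) * 0.16666667
    return x * tf.nn.relu6(x + 3.0) * 0.16666667

x = tf.constant(np.linspace(-6.0, 6.0, 7), dtype=tf.float32)
print(hard_swish(x).numpy())  # hard-swish applied element-wise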

2. Environment

  • Python3.8+
  • TensorFlow v2.9.0+
  • TensorFlow Lite v2.9.0 with MediaPipe Custom OP, FlexDelegate and XNNPACK enabled
  • flatc v2.0.8
  • PyTorch v1.12.0 (with grid_sample)
  • TorchVision
  • TorchAudio
  • OpenVINO 2021.4.582+
  • TensorRT 8.4+
  • trtexec
  • pycuda 2021.1
  • tensorflowjs
  • coremltools
  • paddle2onnx
  • onnx
  • onnxruntime-gpu (CUDA, TensorRT, OpenVINO)
  • onnxruntime-extensions
  • onnx_graphsurgeon
  • onnx-simplifier
  • onnxconverter-common
  • onnxmltools
  • onnx-tensorrt
  • tf2onnx
  • torch2trt
  • onnx-tf
  • tensorflow-datasets
  • tf_slim
  • edgetpu_compiler
  • tflite2tensorflow
  • openvino2tensorflow
  • simple-onnx-processing-tools
  • gdown
  • pandas
  • matplotlib
  • paddlepaddle
  • paddle2onnx
  • pycocotools
  • scipy
  • Intel-Media-SDK
  • Intel iHD GPU (iGPU) support
  • OpenCL
  • gluoncv
  • LLVM
  • NNPACK
  • WSL2 OpenCL

3. Setup

3-1. [Environment construction pattern 1] Execution by Docker (strongly recommended)

You do not need to install any packages other than Docker. It consumes about 26.7GB of host storage.

$ docker pull ghcr.io/pinto0309/tflite2tensorflow:latest
or
$ docker build -t ghcr.io/pinto0309/tflite2tensorflow:latest .

# If you don't need to access the GUI of the HostPC and the USB camera.
$ docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

# If conversion to TF-TRT is not required. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/tflite2tensorflow:latest

# If you need to convert to TF-TRT. And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run --gpus all -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/tflite2tensorflow:latest

# If you are using iGPU (OpenCL). And if you need to access the HostPC GUI and USB camera.
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e LIBVA_DRIVER_NAME=iHD \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  ghcr.io/pinto0309/tflite2tensorflow:latest

3-2. [Environment construction pattern 2] Execution by Host machine

To install using the Python Package Index (PyPI), use the following command.

$ pip3 install --user --upgrade tflite2tensorflow

Or, to install from the latest source code on the main branch, use the following command.

$ pip3 install --user --upgrade git+https://github.com/PINTO0309/tflite2tensorflow

This installs a customized TensorFlow Lite runtime with support for MediaPipe Custom OPs, FlexDelegate, and XNNPACK. If tflite_runtime does not install properly, follow the instructions in the article "Add a custom OP to the TFLite runtime to build the whl installer (for Python)" to build it yourself for your environment. The custom OPs are: MaxPoolingWithArgmax2D, MaxUnpooling2D, Convolution2DTransposeBias, TransformLandmarks, TransformTensorBilinear, Landmarks2TransformMatrix.

$ sudo pip3 uninstall -y \
    tensorboard-plugin-wit \
    tb-nightly \
    tensorboard \
    tf-estimator-nightly \
    tensorflow-gpu \
    tensorflow \
    tf-nightly \
    tensorflow_estimator \
    tflite_runtime

$ APPVER=v1.20.7
$ TENSORFLOWVER=2.8.0

### Customized version of TensorFlow Lite installation
$ wget https://github.com/PINTO0309/tflite2tensorflow/releases/download/${APPVER}/tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && sudo chmod +x tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && pip3 install --user --force-reinstall tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && rm tflite_runtime-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl

### Install the Customized Full TensorFlow package
### (MediaPipe Custom OP, FlexDelegate, XNNPACK enabled)
$ wget https://github.com/PINTO0309/tflite2tensorflow/releases/download/${APPVER}/tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && sudo chmod +x tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && pip3 install --user --force-reinstall tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl \
  && rm tensorflow-${TENSORFLOWVER}-cp38-none-linux_x86_64.whl

 or

### Install the Non-customized TensorFlow package
$ pip3 install --user tf-nightly
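
To confirm that the customized tflite_runtime wheel was installed correctly, a minimal sketch like the following can try loading a model that contains MediaPipe custom OPs (the model path is a placeholder; any .tflite using OPs such as Convolution2DTransposeBias will do):

from tflite_runtime.interpreter import Interpreter

# Models that contain MediaPipe custom OPs (e.g. MaxPoolingWithArgmax2D,
# Convolution2DTransposeBias) only load with the customized runtime above.
interpreter = Interpreter(model_path='segm_full_v679.tflite')  # placeholder path
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())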

### Download schema.fbs
$ wget https://github.com/PINTO0309/tflite2tensorflow/raw/main/schema/schema.fbs

Build flatc

$ git clone -b v2.0.8 https://github.com/google/flatbuffers.git
$ cd flatbuffers && mkdir build && cd build
$ cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release ..
$ make -j$(nproc)


The Windows version of flatc v2.0.8 can be downloaded from here: https://github.com/google/flatbuffers/releases/download/v2.0.8/Windows.flatc.binary.zip
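
Internally, tflite2tensorflow uses flatc together with schema.fbs (passed via --flatc_path and --schema_path) to dump the .tflite flatbuffer into JSON before rebuilding the graph. A rough sketch of an equivalent manual invocation; the binary and model paths below are assumptions:

import subprocess

# Dump a .tflite flatbuffer to JSON using flatc and schema.fbs.
# Paths are examples only; point them at your own flatc build and model.
subprocess.run(
    [
        './flatbuffers/build/flatc',   # flatc built above
        '-t',                          # emit text (JSON) output
        '--strict-json',
        '--defaults-json',
        '-o', '.',                     # output directory
        './schema.fbs',                # TFLite schema downloaded above
        '--',
        'segm_full_v679.tflite',       # input model (placeholder)
    ],
    check=True,
)
# Writes segm_full_v679.json into the output directory.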

4. Usage / Execution sample

4-1. Command line options

usage: tflite2tensorflow
  [-h]
  --model_path MODEL_PATH
  --flatc_path FLATC_PATH
  --schema_path SCHEMA_PATH
  [--model_output_path MODEL_OUTPUT_PATH]
  [--output_pb]
  [--output_no_quant_float32_tflite]
  [--output_dynamic_range_quant_tflite]
  [--output_weight_quant_tflite]
  [--output_float16_quant_tflite]
  [--output_integer_quant_tflite]
  [--output_full_integer_quant_tflite]
  [--output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE]
  [--string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION]
  [--calib_ds_type CALIB_DS_TYPE]
  [--ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION]
  [--download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS]
  [--tfds_download_flg]
  [--load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY]
  [--output_tfjs]
  [--output_tftrt_float32]
  [--output_tftrt_float16]
  [--output_coreml]
  [--optimizing_for_coreml]
  [--output_edgetpu]
  [--edgetpu_compiler_timeout EDGETPU_COMPILER_TIMEOUT]
  [--edgetpu_num_segments EDGETPU_NUM_SEGMENTS]
  [--output_onnx]
  [--onnx_opset ONNX_OPSET]
  [--onnx_extra_opset ONNX_EXTRA_OPSET]
  [--disable_onnx_nchw_conversion]
  [--disable_onnx_optimization]
  [--output_openvino_and_myriad]
  [--vpu_number_of_shaves VPU_NUMBER_OF_SHAVES]
  [--vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES]
  [--optimizing_for_openvino_and_myriad]
  [--rigorous_optimization_for_myriad]
  [--replace_swish_and_hardswish]
  [--optimizing_for_edgetpu]
  [--replace_prelu_and_minmax]
  [--disable_experimental_new_quantizer]
  [--disable_per_channel]
  [--optimizing_barracuda]
  [--locationids_of_the_terminating_output]

optional arguments:
  -h, --help
          show this help message and exit
  --model_path MODEL_PATH
          input tflite model path (*.tflite)
  --flatc_path FLATC_PATH
          flatc file path (flatc)
  --schema_path SCHEMA_PATH
          schema.fbs path (schema.fbs)
  --model_output_path MODEL_OUTPUT_PATH
          The output folder path of the converted model file
  --output_pb
          .pb output switch
  --output_no_quant_float32_tflite
          float32 tflite output switch
  --output_dynamic_range_quant_tflite
          dynamic range quant tflite output switch
  --output_weight_quant_tflite
          weight quant tflite output switch
  --output_float16_quant_tflite
          float16 quant tflite output switch
  --output_integer_quant_tflite
          integer quant tflite output switch
  --output_full_integer_quant_tflite
          full integer quant tflite output switch
  --output_integer_quant_type OUTPUT_INTEGER_QUANT_TYPE
          Input and output types when doing Integer Quantization
          ('int8 (default)' or 'uint8')
  --string_formulas_for_normalization STRING_FORMULAS_FOR_NORMALIZATION
          String formulas for normalization. It is evaluated by
          Python's eval() function. Default: '(data -
          [127.5,127.5,127.5]) / [127.5,127.5,127.5]'
  --calib_ds_type CALIB_DS_TYPE
          Types of data sets for calibration. tfds or numpy
          Default: numpy
  --ds_name_for_tfds_for_calibration DS_NAME_FOR_TFDS_FOR_CALIBRATION
          Dataset name for TensorFlow Datasets for calibration.
          https://www.tensorflow.org/datasets/catalog/overview
  --split_name_for_tfds_for_calibration SPLIT_NAME_FOR_TFDS_FOR_CALIBRATION
          Split name for TensorFlow Datasets for calibration.
          https://www.tensorflow.org/datasets/catalog/overview
  --download_dest_folder_path_for_the_calib_tfds DOWNLOAD_DEST_FOLDER_PATH_FOR_THE_CALIB_TFDS
          Download destination folder path for the calibration
          dataset. Default: $HOME/TFDS
  --tfds_download_flg
          True to automatically download datasets from
          TensorFlow Datasets. True or False
  --load_dest_file_path_for_the_calib_npy LOAD_DEST_FILE_PATH_FOR_THE_CALIB_NPY
          The path from which to load the .npy file containing
          the numpy binary version of the calibration data.
          Default: sample_npy/calibration_data_img_sample.npy
          [20, 513, 513, 3] -> [Number of images, h, w, c]
  --output_tfjs
          tfjs model output switch
  --output_tftrt_float32
          tftrt float32 model output switch
  --output_tftrt_float16
          tftrt float16 model output switch
  --output_coreml
          coreml model output switch
  --optimizing_for_coreml
          Optimizing graph for coreml
  --output_edgetpu
          edgetpu model output switch
  --edgetpu_compiler_timeout
          edgetpu_compiler timeout for one compilation process in seconds.
          Default: 3600
  --edgetpu_num_segments
          Partition the model into 'num_segments' segments.
          Default: 1 (no partition)
  --output_onnx
          onnx model output switch
  --onnx_opset ONNX_OPSET
          onnx opset version number
  --onnx_extra_opset ONNX_EXTRA_OPSET
          The name of the onnx 'extra_opset' to enable.
          Default: ''
          'com.microsoft:1' or 'ai.onnx.contrib:1' or 'ai.onnx.ml:1'
  --disable_onnx_nchw_conversion
          Disable onnx NCHW conversion
  --disable_onnx_optimization
          Disable onnx optimization
  --output_openvino_and_myriad
          openvino model and myriad inference engine blob output switch
  --vpu_number_of_shaves VPU_NUMBER_OF_SHAVES
          vpu number of shaves. Default: 4
  --vpu_number_of_cmx_slices VPU_NUMBER_OF_CMX_SLICES
          vpu number of cmx slices. Default: 4
  --optimizing_for_openvino_and_myriad
          Optimizing graph for openvino/myriad
  --rigorous_optimization_for_myriad
          Replace operations that are not supported by myriad with operations
          that are as feasible as possible.
          e.g. 'Abs' -> 'Square' + 'Sqrt'
  --replace_swish_and_hardswish
          Replace swish and hard-swish with each other
  --optimizing_for_edgetpu
          Optimizing for edgetpu
  --replace_prelu_and_minmax
          Replace prelu and minimum/maximum with each other
  --disable_experimental_new_quantizer
          Disable MLIR's new quantization feature during INT8 quantization
          in TensorFlowLite.
  --disable_per_channel
          Disable per-channel quantization for tflite.
  --optimizing_barracuda
          Generates ONNX by replacing Barracuda unsupported layers
          with standard layers. For example, GatherND.
  --locationids_of_the_terminating_output
          A comma-separated list of LocationIDs to be used as output layers.
          e.g. --locationids_of_the_terminating_output 100,201,560
          Default: ''
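
As noted in the help text, the value of --string_formulas_for_normalization is applied to the calibration images with Python's eval(), with data bound to the image array. A minimal sketch of that behavior under that assumption, using the default formula:

import numpy as np

# `data` stands in for the calibration image batch [N, h, w, c].
data = np.random.randint(0, 256, size=(20, 513, 513, 3)).astype(np.float32)

formula = '(data - [127.5,127.5,127.5]) / [127.5,127.5,127.5]'  # the default
normalized = eval(formula)  # `data` is resolved from the local scope

print(normalized.min(), normalized.max())  # roughly in the range -1.0 .. 1.0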

4-2. Step 1 : Generating saved_model and FreezeGraph (.pb)

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_openvino_and_myriad \
  --rigorous_optimization_for_myriad

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_edgetpu

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_for_coreml

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb \
  --optimizing_barracuda

4-3. Step 2 : Generation of quantized tflite, TFJS, TF-TRT, EdgeTPU, CoreML and ONNX

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_no_quant_float32_tflite \
  --output_dynamic_range_quant_tflite \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_integer_quant_tflite \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs \
  --output_coreml \
  --output_tftrt_float32 \
  --output_tftrt_float16 \
  --output_onnx \
  --onnx_opset 11 \
  --output_openvino_and_myriad

or

$ tflite2tensorflow \
  --model_path segm_full_v679.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_no_quant_float32_tflite \
  --output_dynamic_range_quant_tflite \
  --output_weight_quant_tflite \
  --output_float16_quant_tflite \
  --output_integer_quant_tflite \
  --output_edgetpu \
  --output_integer_quant_type 'uint8' \
  --string_formulas_for_normalization 'data / 255.0' \
  --output_tfjs \
  --output_coreml \
  --output_tftrt_float32 \
  --output_tftrt_float16 \
  --output_onnx \
  --onnx_opset 11

4-4. Check the contents of the .npy file, which is a binary version of the image file

$ view_npy --npy_file_path calibration_data_img_sample.npy

Press the Q key to display the next image. calibration_data_img_sample.npy contains 20 images extracted from the MS-COCO dataset.
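
To supply your own calibration data instead of the bundled sample, any .npy with shape [Number of images, h, w, c] can be passed via --load_dest_file_path_for_the_calib_npy. A hedged sketch of building one; the image folder, file pattern, color order, and 513x513 size are assumptions chosen to match the sample:

import glob
import cv2
import numpy as np

# Collect up to 20 images, resize them to the sample's 513x513 resolution
# and stack them into a single uint8 array of shape [N, 513, 513, 3].
images = []
for path in sorted(glob.glob('calib_images/*.jpg'))[:20]:  # assumed folder
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (513, 513))
    images.append(img)

calib = np.asarray(images, dtype=np.uint8)
np.save('calibration_data_img_custom.npy', calib)
print(calib.shape)  # e.g. (20, 513, 513, 3)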


5. Sample image

This is the result of converting MediaPipe's Meet Segmentation model (segm_full_v679.tflite / Float16 / Google Meet) to saved_model and then reconverting it to Float32 tflite. The GPU-optimized Convolution2DTransposeBias layer is replaced with the standard TransposeConv and BiasAdd layers fully automatically. The weights and biases of the Float16 Dequantize layer are automatically converted back (dequantized) to Float32 precision. The generated Float32 saved_model can then be easily converted to Float16, INT8, EdgeTPU, TFJS, TF-TRT, CoreML, ONNX, OpenVINO and a Myriad Inference Engine blob.

Before / After: segm_full_v679.tflite → model_float32.tflite (comparison images)
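
A minimal sketch for sanity-checking the regenerated Float32 tflite with the standard TensorFlow interpreter; the output path below follows the tool's default model_output_path and is an assumption:

import numpy as np
import tensorflow as tf

# Load the Float32 tflite regenerated from the saved_model and run a dummy
# inference to confirm the graph (including the replaced
# Convolution2DTransposeBias layers) executes with the standard runtime.
interpreter = tf.lite.Interpreter(model_path='saved_model/model_float32.tflite')  # assumed path
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=np.float32))
interpreter.invoke()

out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out['index']).shape)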

More Repositories

1

PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
Python
3,504
star
2

onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
Python
669
star
3

OpenVINO-YoloV3

YoloV3/tiny-YoloV3+RaspberryPi3/Ubuntu LaptopPC+NCS/NCS2+USB Camera+Python+OpenVINO
Python
535
star
4

Tensorflow-bin

Prebuilt binary with Tensorflow Lite enabled. For RaspberryPi / Jetson Nano. Support for custom operations in MediaPipe. XNNPACK, XNNPACK Multi-Threads, FlexDelegate.
Shell
478
star
5

MobileNet-SSD-RealSense

[High Performance / MAX 30 FPS] RaspberryPi3(RaspberryPi/Raspbian Stretch) or Ubuntu + Multi Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD) + Background Multi-transparent(Simple multi-class segmentation) + FaceDetection + MultiGraph + MultiProcessing + MultiClustering
Python
355
star
6

openvino2tensorflow

This script converts the ONNX/OpenVINO IR model to Tensorflow's saved_model, tflite, h5, tfjs, tftrt(TensorRT), CoreML, EdgeTPU, ONNX and pb. PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> Tensorflow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). And the conversion from .pb to saved_model and from saved_model to .pb and from .pb to .tflite and saved_model to .tflite and saved_model to onnx. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation. NVIDIA GPU (dGPU) support. Intel iHD GPU (iGPU) support.
Python
318
star
7

simple-onnx-processing-tools

A set of simple tools for splitting, merging, OP deletion, size compression, rewriting attributes and constants, OP generation, change opset, change to the specified input order, addition of OP, RGB to BGR conversion, change batch size, batch rename of OP, and JSON conversion for ONNX models.
Python
266
star
8

TensorflowLite-bin

Prebuilt binary for TensorFlowLite's standalone installer. For RaspberryPi. A very lightweight installer. I provide a FlexDelegate, MediaPipe Custom OP and XNNPACK enabled binary.
Shell
181
star
9

Keras-OneClassAnomalyDetection

[5 FPS - 150 FPS] Learning Deep Features for One-Class Classification (AnomalyDetection). Corresponds to RaspberryPi3. Convert to Tensorflow, ONNX, Caffe, PyTorch. Implementation by Python + OpenVINO/Tensorflow Lite.
Jupyter Notebook
120
star
10

mediapipe-bin

MediaPipe Python Wheel installer for RaspberryPi OS aarch64, Ubuntu aarch64, Debian aarch64 and Jetson Nano.
Shell
113
star
11

MobileNetV2-PoseEstimation

Tensorflow based Fast Pose estimation. OpenVINO, Tensorflow Lite, NCS, NCS2 + Python.
Python
105
star
12

MobileNet-SSD

MobileNet-SSD(MobileNetSSD) + Neural Compute Stick(NCS) Faster than YoloV2 + Explosion speed by RaspberryPi · Multiple moving object detection with high accuracy.
Python
92
star
13

wsl2_linux_kernel_usbcam_enable_conf

Configuration file to build the kernel to access the USB camera connected to the host PC using USBIP from inside the WSL2 Ubuntu 20.04/22.04.
Python
88
star
14

TPU-MobilenetSSD

Edge TPU Accelerator / Multi-TPU + MobileNet-SSD v2 + Python + Async + LattePandaAlpha/RaspberryPi3/LaptopPC
Python
81
star
15

whisper-onnx-cpu

ONNX implementation of Whisper. PyTorch free.
Python
79
star
16

TensorflowLite-UNet

Implementation of UNet by Tensorflow Lite. Semantic segmentation without using GPU with RaspberryPi + Python. In order to maximize the learning efficiency of the model, this learns only the "Person" class of VOC2012. And Comparison with ENet.
Python
78
star
17

HeadPoseEstimation-WHENet-yolov4-onnx-openvino

WHENet - ONNX, OpenVINO, TFLite, TensorRT, EdgeTPU, CoreML, TFJS, YOLOv4/YOLOv4-tiny-3L
Python
66
star
18

DMHead

Dual model head pose estimation. Fusion of SOTA models. 360° 6D HeadPose detection. All pre-processing and post-processing are fused together, allowing end-to-end processing in a single inference.
Python
65
star
19

OpenVINO-EmotionRecognition

OpenVINO+NCS2/NCS+MutiModel(FaceDetection, EmotionRecognition)+MultiStick+MultiProcess+MultiThread+USB Camera/PiCamera. RaspberryPi 3 compatible. Async.
Python
56
star
20

whisper-onnx-tensorrt

ONNX and TensorRT implementation of Whisper
Python
55
star
21

scs4onnx

A very simple tool that compresses the overall size of the ONNX model by aggregating duplicate constant values as much as possible.
Python
51
star
22

MobileNet-SSDLite-RealSense-TF

RaspberryPi3(Raspbian Stretch) + MobileNetv2-SSDLite(Tensorflow/MobileNetv2SSDLite) + RealSense D435 + Tensorflow1.11.0 + without Neural Compute Stick(NCS)
Python
51
star
23

OpenVINO-DeeplabV3

[4-5 FPS / Core m3 CPU only] [11 FPS / Core i7 CPU only] OpenVINO+DeeplabV3+LattePandaAlpha/LaptopPC. CPU / GPU / NCS. RealTime semantic-segmentation. Python3.5+Tensorflow v1.11.0+OpenCV3.4.3+PIL
Python
43
star
24

hand-gesture-recognition-using-onnx

This is a hand gesture recognition program that replaces the entire MediaPipe process with ONNX. Simultaneous detection of multiple palms and a simple tracker are additionally implemented. In addition, a simple MLP can learn and recognize gestures.
Jupyter Notebook
40
star
25

TPU-Posenet

Edge TPU Accelerator / Multi-TPU / Multi-Model + Posenet/DeeplabV3/MobileNet-SSD + Python + Sync / Async + LaptopPC / RaspberryPi
Python
39
star
26

faster-whisper-env

An environment where you can try out faster-whisper immediately.
Python
36
star
27

facemesh_onnx_tensorrt

Verify that the post-processing merged into FaceMesh works correctly. The object detection model can be anything other than BlazeFace. YOLOv4 and FaceMesh committed to this repository have modified post-processing.
Python
32
star
28

yolact_edge_onnx_tensorrt_myriad

Provides a conversion flow for YOLACT_Edge to models compatible with ONNX, TensorRT, OpenVINO and Myriad (OAK). My own implementation of post-processing allows for e2e inference. Support for Multi-Class NonMaximumSuppression, CombinedNonMaxSuppression.
Python
27
star
29

crowdhuman_hollywoodhead_yolo_convert

YOLOv7 training. Generates a head-only dataset in YOLO format. The labels included in the CrowdHuman dataset are Head and FullBody, but FullBody is ignored.
Python
27
star
30

simple_fisheye_calibrator

Simple GUI-based correction of fisheye images. The correction parameters specified on the screen can be diverted to opencv's fisheye correction parameters. Supports execution via Docker.
Python
26
star
31

onnx2json

Exports the ONNX file to a JSON file and JSON dict.
Python
26
star
32

tflite2json2tflite

Convert tflite to JSON and make it editable in the IDE. It also converts the edited JSON back to tflite binary.
Dockerfile
26
star
33

mtomo

Multiple types of NN model optimization environments. It is possible to directly access the host PC GUI and the camera to verify the operation. Intel iHD GPU (iGPU) support. NVIDIA GPU (dGPU) support.
Dockerfile
24
star
34

MobileNetv2-SSDLite

My proprietary procedure. Caffe implementation of SSD and SSDLite detection on MobileNetv2, converted from tensorflow.
Python
24
star
35

Bazel_bin

Bazel's pre-built binaries for armv7l / aarch64 / x86_64.
Shell
22
star
36

BoT-SORT-ONNX-TensorRT

BoT-SORT + YOLOX implemented using only onnxruntime, Numpy and scipy, without cython_bbox and PyTorch. Fast human tracker. OSNet is not used.
Python
21
star
37

pytorch4raspberrypi

Cross-compilation of PyTorch armv7l (32bit) for RaspberryPi OS
Dockerfile
20
star
38

scc4onnx

Very simple NCHW and NHWC conversion tool for ONNX. Change to the specified input order for each and every input OP. Also, change the channel order of RGB and BGR. Simple Channel Converter for ONNX.
Python
20
star
39

onnxruntime4raspberrypi

onnxruntime for RaspberryPi armv7l
19
star
40

zumo32u4

Zumo32u4(ATmega32u4) + RaspberryPi3(RaspberryPi) + SLAM(CartoGrapher/Gmapping) + RPLiDAR A1M8
Python
19
star
41

20220228_intel_deeplearning_day_hitnet_demo

Special Presentation Demo at Intel IoT Planet 2021 DeepLearning Day (presentation materials for the special session). https://www.intel.co.jp/content/www/jp/ja/now/iot-planet/deep-learning-day.html
Python
19
star
42

TensorflowLite-flexdelegate

This is a repository for checking the operation of Flex Delegate of Tensorflow.
C++
19
star
43

sit4onnx

Tools for simple inference testing using TensorRT, CUDA and OpenVINO CPU/GPU and CPU providers. Simple Inference Test for ONNX.
Python
18
star
44

spo4onnx

Simple tool for partial optimization of ONNX. Further optimize some models that cannot be optimized with onnx-optimizer and onnxsim by several tens of percent. In particular, models containing Einsum and OneHot.
Python
18
star
45

hand_landmark

HandLandmark Detection that can be performed only in onnxruntime. Pre-focusing by skeletal detection is not performed. This does not use MediaPipe.
Python
17
star
46

json2onnx

Converts a JSON file to an ONNX file.
Python
16
star
47

snc4onnx

Simple tool to combine(merge) onnx models. Simple Network Combine Tool for ONNX.
Python
15
star
48

sio4onnx

Simple tool to change the INPUT and OUTPUT shape of ONNX.
Python
14
star
49

Open3D-build

Provide Docker build sequences of Open3D for various environments.
Dockerfile
14
star
50

gesture-drone

Drone + OpenVINO + ObjectDetection + FaceRecognition + MobileNetV2 PoseEstimation
Python
14
star
51

OpenVINO-ADAS

[1 FPS / CPU only] OpenVINO+ADAS+LattePandaAlpha. CPU / GPU / NCS. RealTime semantic-segmentation. Python3.5+OpenCV3.4.3+PIL
Python
14
star
52

OpenVINO-bin

OpenVINO installer storage location (Full version)
Shell
13
star
53

sne4onnx

A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB, or simply to separate onnx files to any size you want.
Python
13
star
54

jetson-tensorflow-pytorch-build

Provides an environment for compiling TensorFlow or PyTorch with CUDA for aarch64 on an x86 machine. This is for Jetson. If you build using an EC2 m6g.16xlarge (aarch64) instance, TensorFlow can be fully built in about 30 minutes. It can be used as a cross-compilation environment not only for TensorFlow and PyTorch, but also for various other packages and libraries.
Dockerfile
13
star
55

PyTorch-build

Provide Docker build sequences of PyTorch for various environments.
Dockerfile
11
star
56

Face_Mask_Augmentation

Masked Face Image Augmentation Tool for Dataset 300W-LP with 6D Head Pose Information.
Python
11
star
57

sam4onnx

A very simple tool to rewrite parameters such as attributes and constants for OPs in ONNX models. Simple Attribute and Constant Modifier for ONNX.
Python
11
star
58

DirectMHP_YOLOv7

I just replaced the DirectMHP backend from YOLOv5 to YOLOv7.
Python
10
star
59

onnx-speech-language-detection

A simple program that returns RFC5646 style language codes and country code symbols from microphone input or wav byte arrays. e.g. ja-JP, en-US, ...
Python
9
star
60

components_of_onnx

[WIP] ONNX parts yard. The various operations described in Operator Schemas are converted in advance into OP stand-alone ONNX files.
Python
8
star
61

tflite-input-output-rewriter

This tool displays tflite signatures and rewrites the input/output OP name to the name of the signature. There is no need to install TensorFlow or TFLite.
Python
8
star
62

rtspserver-ffmpeg

Build an ffmpeg RTSP distribution server using an old alpine:3.8 Docker Image.
Python
8
star
63

human-pose-estimation-3d-python-cpp

Monocular 3D pose estimation. OpenVINO. CPU inference or iGPU (OpenCL) inference.
Python
8
star
64

soc4onnx

A very simple tool that forces a change in the opset of an ONNX graph. Simple Opset Changer for ONNX.
Python
7
star
65

sbi4onnx

A very simple script that only initializes the batch size of ONNX. Simple Batchsize Initialization for ONNX.
Python
7
star
66

sog4onnx

Simple ONNX operation generator. Simple Operation Generator for ONNX.
Python
7
star
67

snd4onnx

Simple node deletion tool for onnx.
Python
7
star
68

simple_camera_capture

Very simple recording tool using only OpenCV. Automatically records the camera capture to mp4; pressing the C key or clicking the left mouse button captures a still image.
Python
7
star
69

mmaction2-onnx-export-env

ONNX export environment for mmaction2
Dockerfile
7
star
70

RaspberryPi-bin

OS image repository for RaspberryPi3.
Shell
6
star
71

sed4onnx

Simple ONNX constant encoder/decoder. Since the constant values in the JSON files generated by onnx2json are Base64-encoded values, ASCII <-> Base64 conversion is required when rewriting JSON constant values.
Python
6
star
72

TinyYolo

Challenge the marginal performance of YoloV2 + Neural Compute Stick (NCS) + RaspberryPi.
Python
5
star
73

PINTO0309

5
star
74

sor4onnx

Simple OP Renamer for ONNX.
Python
5
star
75

OpenCVonARMv7

Deb package for introducing OpenCV to RaspberryPi3.
5
star
76

sna4onnx

Simple node addition tool for onnx. Simple Node Addition for ONNX.
Python
5
star
77

ssc4onnx

Checker with simple ONNX model structure. Simple Structure Checker for ONNX.
Python
5
star
78

realsense-cuda-opengl-docker

RealSense execution environment built on a Docker container on Ubuntu 20.04. NVIDIA GPU and OpenGL capable. CUDA 11.4.
Dockerfile
5
star
79

simple-ros2-processing-tools

A set of simple tools for ROS2 of my own making.
Python
5
star
80

300W_LP_AFLW2000_viewer

Python
4
star
81

soa4onnx

Simple model Output OP Additional tools for ONNX.
Python
4
star
82

mmrotate-exec-env

Execution environment of mmrotate
Dockerfile
3
star
83

tvm-build

TVM build and run test environment
Dockerfile
3
star
84

ssi4onnx

Simple Shape Inference tool for ONNX.
Python
3
star
85

edgetpu-bin

Prebuilt binary for EdgeTPU PythonAPI standalone installer.
3
star
86

NITEC-ONNX-TensorRT

ONNX implementation of "NITEC: Versatile Hand-Annotated Eye Contact Dataset for Ego-Vision Interaction" https://github.com/thohemp/nitec
Python
3
star
87

Human-Face-Crop-ONNX-TensorRT

Simply crop the face from the image at high speed and save.
Python
3
star
88

Maxine-env

NVIDIA Maxine - A playground for running the Audio Effects SDK.
Dockerfile
2
star
89

TBBonARMv7

Deb package storage for installing TBB (Intel Threading Building Blocks) on RaspberryPi3.
2
star
90

sod4onnx

Simple model Output OP Deletion tools for ONNX.
Python
2
star
91

sic4onnx

A very simple tool that forces a change in the IR Version of an ONNX graph. Simple IR version Changer for ONNX.
Python
2
star
92

rtspserver-v4l2

RTSP distribution server for USB camera video using v4l2 with Docker container on Ubuntu 20.04.
Python
2
star
93

YoloTrainDataGenerate

Procedures and tools for semi-automatically generating original YoloV2 training data from video.
Python
2
star
94

rosdepth2mp4

A simple tool to record ROS2 Image topics to MP4.
Python
2
star
95

ZED2-Docker

ZED2 SDK Installed Containers
Dockerfile
2
star
96

sde4onnx

Simple doc_string eraser for ONNX.
Python
1
star
97

DeepLearningMugenKnock

Python
1
star
98

SegNet-TF

Tensorflow implementation of SegNet Tensorflow 1.11.0 + Python (I made minor bugfixes for toimcio/SegNet-tensorflow)
Jupyter Notebook
1
star
99

yolov9-wholebody25-tensorflowjs-web-test

A test environment running yolov9-wholebody25 on TensorFlow.js.
HTML
1
star
100

svs4onnx

A very simple tool to swap connections between output and input variables in an ONNX graph. Simple Variable Switch for ONNX.
Python
1
star