• This repository has been archived on 09/Sep/2022
  • Stars: 355
  • Rank: 116,006 (Top 3%)
  • Language: Python
  • License: MIT License
  • Created: almost 6 years ago
  • Updated: over 4 years ago


Repository Details

[High Performance / MAX 30 FPS] RaspberryPi3(RaspberryPi/Raspbian Stretch) or Ubuntu + Multi Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD) + Background Multi-transparent(Simple multi-class segmentation) + FaceDetection + MultiGraph + MultiProcessing + MultiClustering

MobileNet-SSD-RealSense

RaspberryPi3(Raspbian Stretch) or Ubuntu16.04/UbuntuMate + Neural Compute Stick(NCS/NCS2) + RealSense D435(or USB Camera or PiCamera) + MobileNet-SSD(MobileNetSSD)

【Notice】December 19, 2018: OpenVINO now supports RaspberryPi + NCS2 !!
https://software.intel.com/en-us/articles/OpenVINO-RelNotes#inpage-nav-2-2

【Dec 31, 2018】 "USB Camera + MultiStick + MultiProcess mode" support for NCS2 is completed.
【Jan 04, 2019】 Quadrupled the performance of MultiStickSSDwithRealSense_OpenVINO_NCS2.py. Core i7 + NCS2 x1: 48 FPS.
【Nov 12, 2019】 Compatible with OpenVINO 2019 R3 + RaspberryPi3/4 + Raspbian Buster.


Measure the distance to objects with a RealSense D435 while performing MobileNet-SSD(MobileNetSSD) object detection on a RaspberryPi3 boosted with Intel Movidius Neural Compute Sticks.
"USB Camera mode" and "PiCamera mode" cannot measure distance, but they run at higher speed.
MultiGraph, FaceDetection, MultiProcessing, and background transparency are also supported.
A simple stick-clustering function is provided as well (to prevent thermal runaway).
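The gist of RealSense mode in one compact example: run MobileNet-SSD on a color frame, then look up the depth at each detection's center. A minimal sketch, assuming pyrealsense2 is installed and using OpenCV's DNN module on CPU in place of the NCS; the model paths follow the caffemodel directory shipped in this repository.

import cv2
import numpy as np
import pyrealsense2 as rs

# MobileNet-SSD weights shipped in this repository (CPU inference for illustration).
net = cv2.dnn.readNetFromCaffe(
    "caffemodel/MobileNetSSD/MobileNetSSD_deploy.prototxt",
    "caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel")

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())
    depth = frames.get_depth_frame()

    blob = cv2.dnn.blobFromImage(color, 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()[0, 0]          # rows: [_, class, conf, x1, y1, x2, y2]

    h, w = color.shape[:2]
    for det in detections:
        if det[2] < 0.5:                      # confidence threshold
            continue
        x = min(max(int((det[3] + det[5]) / 2 * w), 0), w - 1)  # box center, clamped
        y = min(max(int((det[4] + det[6]) / 2 * h), 0), h - 1)
        print("class %d: %.2f m away" % (int(det[1]), depth.get_distance(x, y)))
finally:
    pipeline.stop()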

My blog

【Japanese Article1】
High-speed object detection with RaspberryPi3 (Raspbian Stretch) + Intel Movidius Neural Compute Stick(NCS) + RealSense D435 + MobileNet-SSD(MobileNetSSD), while measuring the distance to Goku and the monitor

【Japanese / English Article2】
Intel praised me again ヽ(゚∀゚)ノ MobileNet-SSD(MobileNetSSD) object detection and RealSense distance measurement (640x480) on RaspberryPi3: at least 25 FPS playback frame rate + 12 FPS prediction rate

【Japanese / English Article3】
Detection rate approx. 30 FPS: RaspberryPi3 Model B (non-plus) is only slightly slower than TX2; measures the object detection rate of MobileNet-SSD and supports MultiModel (VOC + WIDER FACE)

【Japanese Article4】
Seamlessly switching clusters of multiple Movidius Neural Compute Sticks on RaspberryPi3 to avoid thermal runaway (internal temperature around 70°C) while maintaining high-speed inference performance

【Japanese Article5】
Generating an ultra-lightweight "Semantic Segmentation" model with Caffe: Sparse-Quantized CNN, 512x1024, 10 MB lightweight model, Part 1

【Japanese / English Article6】
Boost RaspberryPi3 with a Neural Compute Stick 2 (1 x NCS2) and feel the explosive performance of MobileNet-SSD (21 FPS on a Core i7)

【Japanese / English Article7】
[24 FPS] Boost RaspberryPi3 with four Neural Compute Stick 2 (NCS2) MobileNet-SSD / YoloV3 [48 FPS for Core i7]

【Japanese / English Article8】
[24 FPS, 48 FPS] RaspberryPi3 + Neural Compute Stick 2, The day when the true power of one NCS2 was drawn out and "Goku" became a true "super saiya-jin"


Table of contents

1. Summary
 1.1 Verification environment NCSDK (1)
 1.2 Result of detection rate NCSDK (1)
 1.3 Verification environment NCSDK (2)
 1.4 Result of detection rate NCSDK (2)
2. Performance comparison as a mobile application (Based on sensory comparison)
3. Change history
4. Motion image
 4-1. NCSDK ver
  4-1-1. RealSense Mode about 6.5 FPS (Synchronous screen drawing)
  4-1-2. RealSense Mode about 25.0 FPS (Asynchronous screen drawing)
  4-1-3. USB Camera Mode MultiStick x4 Boosted 16.0 FPS+ (Asynchronous screen drawing)
  4-1-4. RealSense Mode SingleStick about 5.0 FPS (Transparent background / Asynchronous screen drawing)
  4-1-5. USB Camera Mode MultiStick x3 Boosted (Asynchronous screen drawing / MultiGraph)
  4-1-6. Simple clustering function (MultiStick / MultiCluster / Cluster switch cycle / Cluster switch temperature)
 4-2. OpenVINO ver
  4-2-1. USB Camera Mode NCS2 x 1 Stick + RaspberryPi3(Synchronous screen drawing)
  4-2-2. USB Camera Mode NCS2 x 1 Stick + Core i7(Synchronous screen drawing)
  4-2-3. USB Camera Mode NCS2 x 1 Stick + Core i7(Asynchronous screen drawing)
  4-2-4. USB Camera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing)
  4-2-5. USB Camera Mode NCS2 x 1 Stick + LattePanda Alpha(Asynchronous screen drawing)48 FPS
  4-2-6. PiCamera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing)
  4-2-7. USB Camera Mode NCS2 x 1 Stick + RaspberryPi4(Asynchronous screen drawing)40 FPS
5. Motion diagram of MultiStick
6. Environment
7. Firmware update with Windows 10 PC
8. Work with RaspberryPi3 (or PC + Ubuntu16.04 / RaspberryPi + Ubuntu Mate)
 8-1. NCSDK ver (Not compatible with NCS2)
 8-2. OpenVINO ver (Corresponds to NCS2)
9. Execute the program
10. 【Reference】 MobileNetv2 Model (Caffe) Great Thanks!!
11. Conversion method from Caffe model to NCS model (NCSDK)
12. Conversion method from Caffe model to NCS model (OpenVINO)
13. Construction of learning environment and simple test for model (Ubuntu16.04 x86_64 PC + GPU NVIDIA Geforce)
14. Reference articles, thanks

Summary

Performance measurement results for each number of sticks. (These are detection rates, not playback rates.)
The best performance is obtained with QVGA + 5 sticks.
However, it is important to use a good-quality USB camera.
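Why extra sticks raise the detection rate: each stick runs inference in its own process, and frames are fanned out to whichever stick is idle, so throughput scales until the camera or USB bus saturates (as the tables below show, the gains taper off after 3-4 sticks at VGA). A minimal sketch of that fan-out with multiprocessing queues; the actual scripts bind one NCS device per worker via the mvnc API, and infer() here is a stand-in.

import multiprocessing as mp

def worker(stick_id, frames, results):
    # The real program opens one NCS device and loads the graph here.
    def infer(frame):
        return "stick %d detected objects in %s" % (stick_id, frame)
    while True:
        frame = frames.get()
        if frame is None:                 # poison pill: shut the worker down
            break
        results.put(infer(frame))

if __name__ == "__main__":
    frames, results = mp.Queue(maxsize=10), mp.Queue()
    workers = [mp.Process(target=worker, args=(i, frames, results), daemon=True)
               for i in range(4)]         # 4 sticks -> 4 worker processes
    for w in workers:
        w.start()
    for n in range(8):                    # stand-in for camera frames
        frames.put("frame-%d" % n)
    for _ in range(8):
        print(results.get())              # arrives from whichever stick finished first
    for _ in workers:
        frames.put(None)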

Verification environment (1)

No. Item Contents
1 Video device USB Camera (No RealSense D435) ELP-USB8MP02G-L75 $70
2 Auxiliary equipment (Required) self-powered USB2.0 HUB
3 Input resolution 640x480
4 Output resolution 640x480
5 Execution parameters $ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480

Result of detection rate (1)

No. Stick count FPS Youtube Movie Note
1 1 Stick 6 FPS https://youtu.be/lNbhutT8hkA baseline
2 2 Sticks 12 FPS https://youtu.be/zuJOhKWoLwc 6 FPS increase
3 3 Sticks 16.5 FPS https://youtu.be/8UDFIJ1Z4v8 4.5 FPS increase
4 4 Sticks 16.5 FPS https://youtu.be/_2xIZ-IZwZc No improvement

Verification environment (2)

No. Item Contents
1 Video device USB Camera (No RealSense D435) PlayStationEye $5
2 Auxiliary equipment (Required) self-powered USB2.0 HUB
3 Input resolution 320x240
4 Output resolution 320x240
5 Execution parameters $ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 320 -ht 240

Result of detection rate (2)

No. Stick count FPS Youtube Movie Note
1 4 Sticks 25 FPS https://youtu.be/v-Cei1TW88c
2 5 Sticks 30 FPS https://youtu.be/CL6PTNgWibI best performance

Performance comparison as a mobile application (Based on sensory comparison)

◯=HIGH, △=MEDIUM, ×=LOW

No. Model Speed Accuracy Adaptive distance
1 SSD × ◯ ALL
2 MobileNet-SSD ◯ △ Short distance
3 YoloV3 × ◯ ALL
4 tiny-YoloV3 ◯ × Long distance

Change history

[July 14, 2018] Corresponds to NCSDK v2.05.00.02
[July 17, 2018] Corresponds to OpenCV 3.4.2
[July 21, 2018] Support for multiprocessing [MultiStickSSDwithRealSense.py]
[July 23, 2018] Support for USB Camera Mode [MultiStickSSDwithRealSense.py]
[July 29, 2018] Added steps to build learning environment
[Aug 3, 2018] Background Multi-transparent mode implementation [MultiStickSSDwithRealSense.py]
[Aug 11, 2018] CUDA9.0 + cuDNN7.2 compatible with environment construction procedure
[Aug 14, 2018] Reference of MobileNetv2 Model added to README and added Facedetection Model
[Aug 15, 2018] Bug Fixed. `MultiStickSSDwithRealSense.py` depth_scale be undefined. Pull Requests merged. Thank you Drunkar!!
[Aug 19, 2018] 【Experimental】 Update Facedetection model [DeepFace] (graph.facedetectXX)
[Aug 22, 2018] Separate environment construction procedure of "Raspbian Stretch" and "Ubuntu16.04"
[Aug 22, 2018] 【Experimental】 FaceDetection model replaced [resnet] (graph.facedetection)
[Aug 23, 2018] Added steps to build NCSDKv2
[Aug 25, 2018] Added "Detection FPS View" [MultiStickSSDwithRealSense.py]
[Sep 01, 2018] FaceDetection model replaced [Mobilenet] (graph.fullfacedetection / graph.shortfacedetection)
[Sep 01, 2018] Added support for MultiGraph and FaceDetection mode [MultiStickSSDwithRealSense.py]
[Sep 04, 2018] Performance measurement result with 5 sticks is posted
[Sep 08, 2018] To prevent thermal runaway, simple clustering function of stick was implemented.
[Sep 16, 2018] 【Experimental】 Added Semantic Segmentation model [Tensorflow-UNet] (semanticsegmentation_frozen_person.pb)
[Sep 20, 2018] 【Experimental】 Updated Semantic Segmentation model [Tensorflow-UNet]
[Oct 07, 2018] 【Experimental】 Added Semantic Segmentation model [caffe-jacinto] (cityscapes5_jsegnet21v2_iter_60000.caffemodel)
[Oct 10, 2018] Corresponds to NCSDK 2.08.01
[Oct 12, 2018] 【Experimental】 Added Semantic Segmentation model [Tensorflow-ENet] (semanticsegmentation_enet.pb) https://github.com/PINTO0309/TensorFlow-ENet.git
[Dec 22, 2018] "USB Camera + single thread mode" support for NCS2 is completed (only this mode so far)
[Dec 31, 2018] "USB Camera + MultiStick + MultiProcess mode" support for NCS2 is completed
[Jan 04, 2019] Quadrupled the performance of MultiStickSSDwithRealSense_OpenVINO_NCS2.py
[Feb 01, 2019] Pull request merged. Fix Typo. Thanks, nguyen-alexa!!
[Feb 09, 2019] Corresponds to PiCamera.
[Feb 10, 2019] Added support for SingleStickSSDwithRealSense_OpenVINO_NCS2.py
[Feb 10, 2019] Firmware v5.9.13 -> v5.10.6, RealSenseSDK v2.13.0 -> v2.16.5
[May 01, 2019] Corresponds to OpenVINO 2019 R1.0.1
[Nov 12, 2019] Corresponds to OpenVINO 2019 R3.0


Motion image

RealSense Mode about 6.5 FPS (Detection + Synchronous screen drawing / SingleStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/77cV9fyqJ1w


RealSense Mode about 25.0 FPS (Asynchronous screen drawing / MultiStickSSDwithRealSense.py)

However, the prediction rate is fairly low (about 6.5 FPS).
【YouTube Movie】 https://youtu.be/tAf1u9DKkh4

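This is the asynchronous drawing trick that lets the playback rate exceed the prediction rate: the display loop keeps rendering camera frames at full speed and simply overlays the most recent detections, while inference runs independently and publishes results whenever it finishes. A minimal single-process sketch with a thread and a dummy detector; the actual scripts use multiprocessing.

import threading, time

latest = {"boxes": []}                         # most recent detections, shared state

def get_frame():
    return "frame@%.2f" % time.time()          # stand-in for a camera grab

def inference_loop():
    while True:
        frame = get_frame()
        time.sleep(0.15)                       # stand-in for ~6.5 FPS inference
        latest["boxes"] = ["box from " + frame]  # publish when ready

threading.Thread(target=inference_loop, daemon=True).start()

for _ in range(50):                            # display loop runs at ~25 FPS
    frame = get_frame()
    time.sleep(0.04)
    print("draw", frame, "overlaying", latest["boxes"])  # stale-but-recent boxes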

USB Camera Mode MultiStick x4 Boosted 16.0 FPS+ (Asynchronous screen drawing / MultiStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/GedDpAc0JyQ


RealSense Mode SingleStick about 5.0 FPS(Transparent background in real time / Asynchronous screen drawing / MultiStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/ApyX-mN_dYA


USB Camera Mode MultiStick x3 Boosted (Asynchronous screen drawing / MultiGraph(SSD+FaceDetection) / FaceDetection / MultiStickSSDwithRealSense.py)

【YouTube Movie】 https://youtu.be/fQZpuD8mWok


Simple clustering function (MultiStick / MultiCluster / Cluster switch cycle / Cluster switch temperature)

[Execution log]

USB Camera Mode NCS2 SingleStick + RaspberryPi3(Synchronous screen drawing / SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/GJNkX-ZBuC8


USB Camera Mode NCS2 SingleStick + Core i7(Synchronous screen drawing / SingleStickSSDwithUSBCamera_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/1ogge90EuqI


USB Camera Mode NCS2 x 1 Stick + Core i7(Asynchronous screen drawing / MultiStickSSDwithRealSense_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/Nx_rVDgT8uY

$ python3 MultiStickSSDwithRealSense_OpenVINO_NCS2.py -mod 1 -numncs 1


USB Camera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing / MultiStickSSDwithRealSense_OpenVINO_NCS2.py)

【YouTube Movie】 https://youtu.be/Xj2rw_5GwlI

$ python3 MultiStickSSDwithRealSense_OpenVINO_NCS2.py -mod 1 -numncs 1


USB Camera Mode NCS2 x 1 Stick + LattePanda Alpha(Asynchronous screen drawing / MultiStickSSDwithRealSense_OpenVINO_NCS2.py)[48 FPS]

https://twitter.com/PINTO03091/status/1081575747314057219

PiCamera Mode NCS2 x 1 Stick + RaspberryPi3(Asynchronous screen drawing / MultiStickSSDwithPiCamera_OpenVINO_NCS2.py)

$ python3 MultiStickSSDwithPiCamera_OpenVINO_NCS2.py


USB Camera Mode NCS2 x 1 Stick + RaspberryPi4(Asynchronous screen drawing / MultiStickSSDwithUSBCamera_OpenVINO_NCS2.py)

$ python3 MultiStickSSDwithUSBCamera_OpenVINO_NCS2.py



Motion diagram of MultiStick


Environment

1.RaspberryPi3 + Raspbian Stretch (USB2.0 port) or RaspberryPi3 + Ubuntu Mate or PC + Ubuntu16.04
2.Intel RealSense D435 (Firmware Ver 5.10.6) or USB Camera or PiCamera / Official stable-version firmware
3.Intel Neural Compute Stick v1/v2, x1 or more
4-1.OpenCV 3.4.2 (NCSDK)
4-2.OpenCV 4.1.1-openvino (OpenVINO)
5.VFPV3 or TBB (Intel Threading Building Blocks)
6.Numpy
7.Python3.5
8.NCSDK v2.08.01 (It does not work with NCSDK v1; the v1 version is here)
9.OpenVINO 2019 R2.0.1
10.RealSenseSDK v2.16.5 (The latest version is unstable) / Official stable-version SDK
11.HDMI Display

Firmware update with Windows 10 PC

1.Download and decompress two ZIP archives: (1) the firmware update tool for Windows 10, and (2) the latest firmware .bin file
2.Copy Signed_Image_UVC_5_10_6_0.bin to the same folder as intel-realsense-dfu.exe
3.Connect RealSense D435 to USB port
4.Wait for completion of installation of device driver
5.Execute intel-realsense-dfu.exe
6.Type 「1」, press Enter, and follow the on-screen instructions to update
7.Type 「2」 to check the firmware version

Work with RaspberryPi3 (or PC + Ubuntu16.04 / RaspberryPi + Ubuntu Mate)

1.NCSDK ver (Not compatible with NCS2)

Working in VirtualBox is strongly discouraged
[Note] Japanese Article
https://qiita.com/akitooo/items/6aee8c68cefd46d2a5dc
https://qiita.com/kikuchi_kentaro/items/280ac68ad24759b4c091

[Post of Official Forum]
https://ncsforum.movidius.com/discussion/950/problems-with-python-multiprocessing-using-sdk-2-0-0-4
https://ncsforum.movidius.com/discussion/comment/3921
https://ncsforum.movidius.com/discussion/comment/4316/#Comment_4316

1.Execute the following

$ sudo apt update;sudo apt upgrade
$ sudo reboot

2.Extend the SWAP area (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=2048

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

3.Install NCSDK

$ sudo apt install python-pip python3-pip
$ sudo pip3 install --upgrade pip
$ sudo pip2 install --upgrade pip

$ cd ~/ncsdk
$ make uninstall
$ cd ~;rm -r -f ncsdk
#=====================================================================================================
# [Oct 10, 2018] NCSDK 2.08.01 , Tensorflow 1.9.0
$ git clone -b ncsdk2 http://github.com/Movidius/ncsdk
#=====================================================================================================
$ cd ncsdk
$ nano ncsdk.conf

#MAKE_NJOBS=1
↓
MAKE_NJOBS=1

$ sudo apt install cython
$ sudo -H pip3 install cython
$ sudo -H pip3 install numpy
$ sudo -H pip3 install pillow
$ make install

$ cd ~
$ wget https://github.com/google/protobuf/releases/download/v3.5.1/protobuf-all-3.5.1.tar.gz
$ tar -zxvf protobuf-all-3.5.1.tar.gz
$ cd protobuf-3.5.1
$ ./configure
$ sudo make -j1
$ sudo make install
$ cd python
$ export LD_LIBRARY_PATH=../src/.libs
$ python3 setup.py build --cpp_implementation 
$ python3 setup.py test --cpp_implementation
$ sudo python3 setup.py install --cpp_implementation
$ sudo ldconfig
$ protoc --version

# Before executing "make examples", insert Neural Compute Stick into the USB port of the device.
$ cd ~/ncsdk
$ make examples -j1

【Reference】https://github.com/movidius/ncsdk

4.Update udev rule

$ sudo apt install -y git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev
$ sudo apt install -y libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

$ cd /etc/udev/rules.d/
$ sudo wget https://raw.githubusercontent.com/IntelRealSense/librealsense/master/config/99-realsense-libusb.rules
$ sudo udevadm control --reload-rules && udevadm trigger

5.Upgrade to "cmake 3.11.4"

$ cd ~
$ wget https://cmake.org/files/v3.11/cmake-3.11.4.tar.gz
$ tar -zxvf cmake-3.11.4.tar.gz;rm cmake-3.11.4.tar.gz
$ cd cmake-3.11.4
$ ./configure --prefix=/home/pi/cmake-3.11.4
$ make -j1
$ sudo make install
$ export PATH=/home/pi/cmake-3.11.4/bin:$PATH
$ source ~/.bashrc
$ cmake --version
cmake version 3.11.4

6.Register LD_LIBRARY_PATH

$ nano ~/.bashrc
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

$ source ~/.bashrc

7.Install TBB (Intel Threading Building Blocks)

$ cd ~
$ wget https://github.com/PINTO0309/TBBonARMv7/raw/master/libtbb-dev_2018U2_armhf.deb
$ sudo dpkg -i ~/libtbb-dev_2018U2_armhf.deb
$ sudo ldconfig

8.Uninstall old OpenCV (RaspberryPi Only)
[Very Important] The highest performance cannot be obtained unless VFPV3 is enabled.

$ cd ~/opencv-3.x.x/build
$ sudo make uninstall
$ cd ~
$ rm -r -f opencv-3.x.x
$ rm -r -f opencv_contrib-3.x.x

9.Build and install "OpenCV 3.4.2", or install it from a deb package.
[Very Important] The highest performance cannot be obtained unless VFPV3 is enabled.

9.1 Build Install (RaspberryPi Only)

$ sudo apt update && sudo apt upgrade
$ sudo apt install build-essential cmake pkg-config libjpeg-dev libtiff5-dev \
libjasper-dev libavcodec-dev libavformat-dev libswscale-dev \
libv4l-dev libxvidcore-dev libx264-dev libgtk2.0-dev libgtk-3-dev \
libcanberra-gtk* libatlas-base-dev gfortran python2.7-dev python3-dev

$ cd ~
$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.4.2.zip
$ unzip opencv.zip;rm opencv.zip
$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.4.2.zip
$ unzip opencv_contrib.zip;rm opencv_contrib.zip
$ cd ~/opencv-3.4.2/;mkdir build;cd build
$ cmake -D CMAKE_CXX_FLAGS="-DTBB_USE_GCC_BUILTINS=1 -D__TBB_64BIT_ATOMICS=0" \
        -D CMAKE_BUILD_TYPE=RELEASE \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D INSTALL_PYTHON_EXAMPLES=OFF \
        -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.4.2/modules \
        -D BUILD_EXAMPLES=OFF \
        -D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
        -D INSTALL_PYTHON_EXAMPLES=OFF \
        -D BUILD_opencv_python2=ON \
        -D BUILD_opencv_python3=ON \
        -D WITH_OPENCL=OFF \
        -D WITH_OPENGL=ON \
        -D WITH_TBB=ON \
        -D BUILD_TBB=OFF \
        -D WITH_CUDA=OFF \
        -D ENABLE_NEON:BOOL=ON \
        -D ENABLE_VFPV3=ON \
        -D WITH_QT=OFF \
        -D BUILD_TESTS=OFF ..
$ make -j1
$ sudo make install
$ sudo ldconfig

9.2 Install by deb package (RaspberryPi Only) [I already activated VFPV3 and built it]

$ cd ~
$ sudo apt autoremove libopencv3
$ wget https://github.com/PINTO0309/OpenCVonARMv7/raw/master/libopencv3_3.4.2-20180709.1_armhf.deb
$ sudo apt install -y ./libopencv3_3.4.2-20180709.1_armhf.deb
$ sudo ldconfig

10.Install Intel® RealSense™ SDK 2.0

$ cd ~
$ sudo apt update;sudo apt upgrade
$ sudo apt install -y vulkan-utils libvulkan1 libvulkan-dev

# Ubuntu16.04 Only
$ sudo apt install -y mesa-utils* libglu1* libgles2-mesa-dev libopenal-dev gtk+-3.0

# The latest version is unstable
$ cd ~/librealsense/build
$ sudo make uninstall
$ cd ~
$ sudo rm -rf librealsense

$ git clone -b v2.16.5 https://github.com/IntelRealSense/librealsense.git
$ cd ~/librealsense
$ git checkout -b v2.16.5
$ mkdir build;cd build

$ cmake .. -DBUILD_EXAMPLES=true -DCMAKE_BUILD_TYPE=Release

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

11.Install Python binding

$ cd ~/librealsense/build

#When using with Python 3.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python3)

OR

#When using with Python 2.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python)

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

12.Update PYTHONPATH

$ nano ~/.bashrc
export PYTHONPATH=$PYTHONPATH:/usr/local/lib

$ source ~/.bashrc

13.RealSense SDK import test

$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense2
>>> exit()
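If the import succeeds, a one-frame read is a quick way to confirm the camera itself is usable. A minimal smoke test, assuming a D435 is connected and the udev rules above are in place:

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()                                   # default streams
try:
    depth = pipeline.wait_for_frames().get_depth_frame()
    w, h = depth.get_width(), depth.get_height()
    print("center distance: %.3f m" % depth.get_distance(w // 2, h // 2))
finally:
    pipeline.stop()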

14.Install the OpenGL packages for Python

$ sudo apt-get install -y python-opengl
$ sudo -H pip3 install pyopengl
$ sudo -H pip3 install pyopengl_accelerate

15.Install the imutils package (for PiCamera)

$ sudo apt-get install -y python3-picamera
$ sudo -H pip3 install imutils --upgrade

16.Reduce the SWAP area to the default size (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=100

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

17.Clone a set of resources

$ git clone https://github.com/PINTO0309/MobileNet-SSD-RealSense.git

18.[Optional] Create a RAM disk folder for movie file placement

$ cd /etc
$ sudo cp fstab fstab_org
$ sudo nano fstab

# Mount "/home/pi/movie" on RAM disk.
# Add below.
tmpfs /home/pi/movie tmpfs defaults,size=32m,noatime,mode=0777 0 0

$ sudo reboot


2.OpenVINO ver (Corresponds to NCS2)

1.Execute the following

$ sudo apt update;sudo apt upgrade
$ sudo reboot

2.Extend the SWAP area (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=2048

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

3.Install OpenVINO

$ curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=1rBl_3kU4gsx-x2NG2I5uIhvA3fPqm8uE" > /dev/null
$ CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"
$ curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=1rBl_3kU4gsx-x2NG2I5uIhvA3fPqm8uE" -o l_openvino_toolkit_ie_p_2018.5.445.tgz
$ tar -zxvf l_openvino_toolkit_ie_p_2018.5.445.tgz
$ rm l_openvino_toolkit_ie_p_2018.5.445.tgz
$ sed -i "s|<INSTALLDIR>|$(pwd)/inference_engine_vpu_arm|" inference_engine_vpu_arm/bin/setupvars.sh
$ nano ~/.bashrc
### Add the following 1 line
source /home/pi/inference_engine_vpu_arm/bin/setupvars.sh

$ source ~/.bashrc
### Successful if displayed as below
[setupvars.sh] OpenVINO environment initialized

$ sudo usermod -a -G users "$(whoami)"
$ sudo reboot

$ uname -a
Linux raspberrypi 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l GNU/Linux

$ sh inference_engine_vpu_arm/install_dependencies/install_NCS_udev_rules.sh
### It is displayed as follows
Update udev rules so that the toolkit can communicate with your neural compute stick
[install_NCS_udev_rules.sh] udev rules installed
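As a final check that the Python bindings resolve after the reboot, a minimal (hypothetical) helper script; the import path below is the same across the 2018 R5 and 2019 packages:

# openvino_check.py -- verify the Inference Engine bindings are importable
# after setupvars.sh has been sourced.
from openvino.inference_engine import IENetwork
print("Inference Engine Python bindings OK")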

4.Update udev rule

$ sudo apt install -y git libssl-dev libusb-1.0-0-dev pkg-config libgtk-3-dev
$ sudo apt install -y libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

$ cd /etc/udev/rules.d/
$ sudo wget https://raw.githubusercontent.com/IntelRealSense/librealsense/master/config/99-realsense-libusb.rules
$ sudo udevadm control --reload-rules && udevadm trigger

5.Upgrade to "cmake 3.11.4"

$ cd ~
$ wget https://cmake.org/files/v3.11/cmake-3.11.4.tar.gz
$ tar -zxvf cmake-3.11.4.tar.gz;rm cmake-3.11.4.tar.gz
$ cd cmake-3.11.4
$ ./configure --prefix=/home/pi/cmake-3.11.4
$ make -j1
$ sudo make install
$ export PATH=/home/pi/cmake-3.11.4/bin:$PATH
$ source ~/.bashrc
$ cmake --version
cmake version 3.11.4

6.Register LD_LIBRARY_PATH

$ nano ~/.bashrc
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

$ source ~/.bashrc

7.Install Intel® RealSense™ SDK 2.0

$ cd ~
$ sudo apt update;sudo apt upgrade
$ sudo apt install -y vulkan-utils libvulkan1 libvulkan-dev

# Ubuntu16.04 Only
$ sudo apt install -y mesa-utils* libglu1* libgles2-mesa-dev libopenal-dev gtk+-3.0

# The latest version is unstable
$ cd ~/librealsense/build
$ sudo make uninstall
$ cd ~
$ sudo rm -rf librealsense

$ git clone -b v2.16.5 https://github.com/IntelRealSense/librealsense.git
$ cd ~/librealsense
$ git checkout -b v2.16.5
$ mkdir build;cd build

$ cmake .. -DBUILD_EXAMPLES=false -DCMAKE_BUILD_TYPE=Release

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

8.Install Python binding

$ cd ~/librealsense/build

#When using with Python 3.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python3)

OR

#When using with Python 2.x series
$ cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python)

# For RaspberryPi3
$ make -j1
or
# For LaptopPC
$ make -j8

$ sudo make install

9.Update PYTHONPATH

$ nano ~/.bashrc
export PYTHONPATH=$PYTHONPATH:/usr/local/lib

$ source ~/.bashrc

10.RealSense SDK import test

$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04) 
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrealsense2
>>> exit()

11.Install the OpenGL packages for Python

$ sudo apt-get install -y python-opengl
$ sudo -H pip3 install pyopengl
$ sudo -H pip3 install pyopengl_accelerate

12.Install the imutils package (for PiCamera)

$ sudo apt-get install -y python3-picamera
$ sudo -H pip3 install imutils --upgrade

13.Reduce the SWAP area to the default size (RaspberryPi+Raspbian Stretch / RaspberryPi+Ubuntu Mate Only)

$ sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=100

$ sudo /etc/init.d/dphys-swapfile restart;swapon -s

14.Clone a set of resources

$ git clone https://github.com/PINTO0309/MobileNet-SSD-RealSense.git

15.[Optional] Create a RAM disk folder for movie file placement

$ cd /etc
$ sudo cp fstab fstab_org
$ sudo nano fstab

# Mount "/home/pi/movie" on RAM disk.
# Add below.
tmpfs /home/pi/movie tmpfs defaults,size=32m,noatime,mode=0777 0 0

$ sudo reboot


Execute the program

$ python3 MultiStickSSDwithRealSense.py <option1> <option2> ...

<options>
 -grp MVNC graphs Path. (Default=./)
 -mod Camera Mode. (0:=RealSense Mode, 1:=USB Camera Mode. Default=0)
 -wd Width of the frames in the video stream. (USB Camera Mode Only. Default=320)
 -ht Height of the frames in the video stream. (USB Camera Mode Only. Default=240)
 -tp TransparentMode. (RealSense Mode Only. 0:=No background transparent, 1:=Background transparent. Default=0)
 -sd SSDDetectionMode. (0:=Disabled, 1:=Enabled. Default=1)
 -fd FaceDetectionMode. (0:=Disabled, 1:=Enabled. Default=0)
 -snc stick_num_of_cluster. Number of sticks per cluster. (0:=Clustering disabled, n:=Number of sticks. Default=0)
 -csc cluster_switch_cycle. Cycle of switching the active cluster. (n:=milliseconds. Default=10000)
 -cst cluster_switch_temperature. Temperature threshold for switching the active cluster. (n.n:=temperature in Celsius. Default=65.0)
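For reference, the options above map naturally onto argparse. A sketch of the parsing, with names and defaults mirroring the list above (not necessarily the script's exact source):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-grp", default="./", help="MVNC graphs path")
parser.add_argument("-mod", type=int, default=0, help="0: RealSense Mode, 1: USB Camera Mode")
parser.add_argument("-wd", type=int, default=320, help="frame width (USB Camera Mode only)")
parser.add_argument("-ht", type=int, default=240, help="frame height (USB Camera Mode only)")
parser.add_argument("-tp", type=int, default=0, help="1: transparent background (RealSense Mode only)")
parser.add_argument("-sd", type=int, default=1, help="SSD detection: 0=disabled, 1=enabled")
parser.add_argument("-fd", type=int, default=0, help="face detection: 0=disabled, 1=enabled")
parser.add_argument("-snc", type=int, default=0, help="sticks per cluster, 0: clustering disabled")
parser.add_argument("-csc", type=int, default=10000, help="cluster switch cycle [ms]")
parser.add_argument("-cst", type=float, default=65.0, help="cluster switch temperature [Celsius]")
args = parser.parse_args()
print(args)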

(Example0) MobileNet-SSD + Neural Compute Stick + RealSense D435 Mode + Synchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 SingleStickSSDwithRealSense.py

(Example1) MobileNet-SSD + Neural Compute Stick + RealSense D435 Mode + Asynchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py

(Example2) MobileNet-SSD + Neural Compute Stick + USB Camera Mode + Asynchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480
$ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 320 -ht 240

(Example3) MobileNet-SSD + Neural Compute Stick + RealSense D435 Mode + Asynchronous + Transparent background in real time

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -tp 1

(Example4) MobileNet-SSD + FaceDetection + Neural Compute Stick + USB Camera Mode + Asynchronous

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"
$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -mod 1 -wd 640 -ht 480 -fd 1

(Example5) Simple clustering function to prevent thermal runaway (2 Sticks = 1 Cluster)

When the switch cycle elapses or the temperature threshold is reached, the active cluster is switched over seamlessly and automatically.
You must turn on the clustering enable flag.
The default switch cycle is 10 seconds and the default temperature threshold is 65°C.
The number of sticks per cluster, the switch cycle, and the temperature can all be specified via startup parameters.
Tune them to the optimum values for your environment; a sketch of the switching policy follows the execution log below.

[1] Number of all sticks = 5
[2] stick_num_of_cluster = 2
[3] cluster_switch_cycle = 10sec (10,000millisec)
[4] cluster_switch_temperature = 65.0℃

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/MobileNet-SSD-RealSense
$ python3 MultiStickSSDwithRealSense.py -mod 1 -snc 2 -csc 10000 -cst 65.0

[Simplified drawing of cluster switching]
[Execution log]
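A minimal sketch of the switching policy described above: rotate to the next cluster when either the cycle elapses or the active cluster crosses the temperature threshold. The temperature read is simulated here; the real script reads each stick's thermal sensor through the NCS API.

import itertools, random, time

sticks = list(range(5))                        # [1] number of all sticks = 5
per_cluster = 2                                # [2] stick_num_of_cluster
cycle_sec = 10.0                               # [3] cluster_switch_cycle
temp_limit = 65.0                              # [4] cluster_switch_temperature

clusters = [sticks[i:i + per_cluster] for i in range(0, len(sticks), per_cluster)]
rotation = itertools.cycle(clusters)
active, switched_at = next(rotation), time.time()

def cluster_temperature(cluster):              # dummy sensor read
    return max(55.0 + random.random() * 15.0 for _ in cluster)

for _ in range(30):
    time.sleep(0.5)                            # stand-in for inference work
    too_hot = cluster_temperature(active) >= temp_limit
    expired = time.time() - switched_at >= cycle_sec
    if too_hot or expired:
        active, switched_at = next(rotation), time.time()
        print("switched to cluster", active, "(temperature)" if too_hot else "(cycle)")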

(Example6)

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G2 GL (Fake KMS)"
$ realsense-viewer


(Example7)

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/librealsense/wrappers/opencv/build/grabcuts
$ rs-grabcuts


(Example8)

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/librealsense/wrappers/opencv/build/imshow
$ rs-imshow


(Example9) MobileNet-SSD(OpenCV-DNN) + RealSense D435 + Without Neural Compute Stick

$ sudo raspi-config
"7.Advanced Options" - "A7 GL Driver" - "G3 Legacy"

$ cd ~/librealsense/wrappers/opencv/build/dnn
$ rs-dnn


【Reference】 MobileNetv2 Model (Caffe) Great Thanks!!

https://github.com/xufeifeiWHU/Mobilenet-v2-on-Movidius-stick.git

Conversion method from Caffe model to NCS model - NCSDK

$ cd ~/MobileNet-SSD-RealSense
$ mvNCCompile ./caffemodel/MobileNetSSD/deploy.prototxt -w ./caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel -s 12
$ mvNCCompile ./caffemodel/Facedetection/fullface_deploy.prototxt -w ./caffemodel/Facedetection/fullfacedetection.caffemodel -s 12
$ mvNCCompile ./caffemodel/Facedetection/shortface_deploy.prototxt -w ./caffemodel/Facedetection/shortfacedetection.caffemodel -s 12
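Once compiled, a graph file can be loaded and run from Python. A minimal sketch following the NCSDK v2 Python API (the graph file name is assumed; the 300x300 MobileNet-SSD preprocessing is elided):

import numpy as np
from mvnc import mvncapi

devices = mvncapi.enumerate_devices()          # requires at least one stick plugged in
device = mvncapi.Device(devices[0])
device.open()

with open("graph", "rb") as f:                 # output of mvNCCompile
    graph_buffer = f.read()
graph = mvncapi.Graph("MobileNetSSD")
fifo_in, fifo_out = graph.allocate_with_fifos(device, graph_buffer)

tensor = np.zeros((300, 300, 3), dtype=np.float32)   # stand-in preprocessed input
graph.queue_inference_with_fifo_elem(fifo_in, fifo_out, tensor, None)
output, _ = fifo_out.read_elem()
print("output tensor length:", len(output))

fifo_in.destroy(); fifo_out.destroy(); graph.destroy(); device.close(); device.destroy()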

Conversion method from Caffe model to NCS model - OpenVINO

$ cd ~/MobileNet-SSD-RealSense
$ sudo python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
--input_model caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel \
--input_proto caffemodel/MobileNetSSD/MobileNetSSD_deploy.prototxt \
--data_type FP16 \
--batch 1

or

$ cd ~/MobileNet-SSD-RealSense
$ sudo python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
--input_model caffemodel/MobileNetSSD/MobileNetSSD_deploy.caffemodel \
--input_proto caffemodel/MobileNetSSD/MobileNetSSD_deploy.prototxt \
--data_type FP32 \
--batch 1
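The Model Optimizer emits MobileNetSSD_deploy.xml/.bin next to the input model. A hedged sketch of loading that IR on an NCS2 with the 2019-era OpenVINO Python API (IECore.read_network is available from 2019 R2; use the FP16 IR for MYRIAD and the FP32 IR for CPU):

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="MobileNetSSD_deploy.xml",
                      weights="MobileNetSSD_deploy.bin")
input_blob = next(iter(net.inputs))                      # this model expects NCHW 1x3x300x300
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # "CPU" for the FP32 IR

frame = np.zeros((1, 3, 300, 300), dtype=np.float32)     # stand-in preprocessed input
result = exec_net.infer({input_blob: frame})
print({name: out.shape for name, out in result.items()})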

Construction of learning environment and simple test for model (Ubuntu16.04 x86_64 PC + GPU[NVIDIA Geforce])

1.【Example】 Installing the NVIDIA driver, CUDA, and cuDNN on a GPU-equipped machine

$ sudo apt-get remove nvidia-*
$ sudo apt-get remove cuda-*

$ apt search "^nvidia-[0-9]{3}$"
$ sudo apt install cuda-9.0
$ sudo reboot
$ nvidia-smi

### Download cuDNN v7.2.1 from the NVIDIA home page
### libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb
### libcudnn7-dev_7.2.1.38-1+cuda9.0_amd64.deb
### cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-local-cublas-performance-update_1.0-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-2_1.0-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-3_1.0-1_amd64.deb
### cuda-repo-ubuntu1604-9-0-176-local-patch-4_1.0-1_amd64.deb

$ sudo dpkg -i libcudnn7*
$ sudo dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb
$ sudo apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
$ sudo apt update
$ sudo dpkg -i cuda-repo-ubuntu1604-9*
$ sudo apt update
$ rm libcudnn7_7.2.1.38-1+cuda9.0_amd64.deb;rm libcudnn7-dev_7.2.1.38-1+cuda9.0_amd64.deb;rm cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64.deb;rm cuda-repo-ubuntu1604-9-0-local-cublas-performance-update_1.0-1_amd64.deb;rm cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-2_1.0-1_amd64.deb;rm cuda-repo-ubuntu1604-9-0-local-cublas-performance-update-3_1.0-1_amd64.deb;rm cuda-repo-ubuntu1604-9-0-176-local-patch-4_1.0-1_amd64.deb

$ echo 'export PATH=/usr/local/cuda-9.0/bin:${PATH}' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:${LD_LIBRARY_PATH}' >> ~/.bashrc
$ source ~/.bashrc
$ sudo ldconfig
$ nvcc -V
$ cd ~;nano cudnn_version.cpp

#include <cudnn.h>
#include <iostream>

int main(int argc, char** argv) {
    std::cout << "CUDNN_VERSION: " << CUDNN_VERSION << std::endl;
    return 0;
}

$ nvcc cudnn_version.cpp -o cudnn_version
$ ./cudnn_version

$ sudo pip2 uninstall tensorflow-gpu
$ sudo pip2 install tensorflow-gpu==1.10.0
$ sudo pip3 uninstall tensorflow-gpu
$ sudo pip3 install tensorflow-gpu==1.10.0

2.【Example】 Installing Caffe on a GPU-equipped machine

$ cd ~
$ sudo apt install libopenblas-base libopenblas-dev
$ git clone https://github.com/weiliu89/caffe.git
$ cd caffe
$ git checkout ssd
$ cp Makefile.config.example Makefile.config
$ nano Makefile.config
# cuDNN acceleration switch (uncomment to build with cuDNN).
#USE_CUDNN := 1
↓
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3
↓
# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
↓
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda-9.0

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the lines after *_35 for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
             -gencode arch=compute_20,code=sm_21 \
             -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_61,code=sm_61
↓
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the lines after *_35 for compatibility.
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
             -gencode arch=compute_35,code=sm_35 \
             -gencode arch=compute_50,code=sm_50 \
             -gencode arch=compute_52,code=sm_52 \
             -gencode arch=compute_61,code=sm_61

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
↓
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include \
                /usr/local/lib/python2.7/dist-packages/numpy/core/include


# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1
↓
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
↓
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include \
                /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib \
                /usr/lib/x86_64-linux-gnu/hdf5/serial

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
↓
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
USE_PKG_CONFIG := 1
$ rm -r -f build
$ rm -r -f .build_release
$ make superclean
$ make all -j4
$ make test -j4
$ make distribute -j4
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ make py

3.Download the VGG model [My Example CAFFE_ROOT PATH = "/home/<username>/caffe"]

$ export CAFFE_ROOT=/home/<username>/caffe
$ cd $CAFFE_ROOT/models/VGGNet
$ wget http://cs.unc.edu/~wliu/projects/ParseNet/VGG_ILSVRC_16_layers_fc_reduced.caffemodel

4.Download VOC 2007 and VOC 2012 datasets

# Download the data.
$ cd ~;mkdir data;cd data
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar #<--- 1.86GB
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar #<--- 438MB
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar #<--- 430MB

# Extract the data.
$ tar -xvf VOCtrainval_11-May-2012.tar
$ tar -xvf VOCtrainval_06-Nov-2007.tar
$ tar -xvf VOCtest_06-Nov-2007.tar
$ rm VOCtrainval_11-May-2012.tar;rm VOCtrainval_06-Nov-2007.tar;rm VOCtest_06-Nov-2007.tar

5.Generate lmdb file

$ export CAFFE_ROOT=/home/<username>/caffe
$ cd $CAFFE_ROOT
# Create the trainval.txt, test.txt, and test_name_size.txt in $CAFFE_ROOT/data/VOC0712/
$ ./data/VOC0712/create_list.sh

# You can modify the parameters in create_data.sh if needed.
# It will create lmdb files for trainval and test with encoded original image:
#   - $HOME/data/VOCdevkit/VOC0712/lmdb/VOC0712_trainval_lmdb
#   - $HOME/data/VOCdevkit/VOC0712/lmdb/VOC0712_test_lmdb
# and make soft links at examples/VOC0712/

$ ./data/VOC0712/create_data.sh

6.Run the training [My Example environment: GPU x1, GeForce GT 650M = RAM:2GB]

Adjust according to the number of GPUs

# It will create model definition files and save snapshot models in:
#   - $CAFFE_ROOT/models/VGGNet/VOC0712/SSD_300x300/
# and job file, log file, and the python script in:
#   - $CAFFE_ROOT/jobs/VGGNet/VOC0712/SSD_300x300/
# and save temporary evaluation results in:
#   - $HOME/data/VOCdevkit/results/VOC2007/SSD_300x300/
# It should reach 77.* mAP at 120k iterations.

$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
$ cp examples/ssd/ssd_pascal.py examples/ssd/BK_ssd_pascal.py
$ nano examples/ssd/ssd_pascal.py
# Solver parameters.
# Defining which GPUs to use.
gpus = "0,1,2,3"
↓
# Solver parameters.
# Defining which GPUs to use.
gpus = "0"

Adjust according to GPU performance (Memory Size) [My Example GeForce GT 650M x1 = RAM:2GB]

# Divide the mini-batch to different GPUs.
batch_size = 32
accum_batch_size = 32
↓
# Divide the mini-batch to different GPUs.
batch_size = 1
accum_batch_size = 1

Execution

  • The trained model files are generated in "$CAFFE_ROOT/models/VGGNet/VOC0712/SSD_300x300"
  • VGG_VOC0712_SSD_300x300_iter_n.caffemodel
  • VGG_VOC0712_SSD_300x300_iter_n.solverstate
$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
$ python examples/ssd/ssd_pascal.py

7.Evaluate the trained model (still images)

$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
# If you would like to test a model you trained, you can do:
$ python examples/ssd/score_ssd_pascal.py

8.Evaluate the trained model (USB camera)

$ export CAFFE_ROOT=/home/<username>/caffe
$ export PYTHONPATH=/home/<username>/caffe/python:$PYTHONPATH
$ cd $CAFFE_ROOT
# If you would like to attach a webcam to a model you trained, you can do:
$ python examples/ssd/ssd_pascal_webcam.py

Reference articles, thanks

https://github.com/movidius/ncappzoo/tree/master/caffe/SSD_MobileNet
https://github.com/FreeApe/VGG-or-MobileNet-SSD
https://github.com/chuanqi305/MobileNet-SSD
https://github.com/avBuffer/MobilenetSSD_caffe
https://github.com/Coldmooon/SSD-on-Custom-Dataset
https://github.com/BVLC/caffe/wiki/Ubuntu-16.04-or-15.10-Installation-Guide#the-gpu-support-prerequisites
https://stackoverflow.com/questions/33962226/common-causes-of-nans-during-training
https://github.com/CongWeilin/mtcnn-caffe
https://github.com/DuinoDu/mtcnn.git
https://www.hackster.io/mjrobot/real-time-face-recognition-an-end-to-end-project-a10826
https://github.com/Mjrovai/OpenCV-Face-Recognition.git
https://github.com/sgxu/face-detection-based-on-caffe.git
https://github.com/RiweiChen/DeepFace.git
https://github.com/KatsunoriWa/eval_faceDetectors
https://github.com/BeloborodovDS/MobilenetSSDFace
https://www.pyimagesearch.com/2018/09/03/semantic-segmentation-with-opencv-and-deep-learning/
https://github.com/TimoSaemann/ENet/tree/master/Tutorial
https://blog.amedama.jp/entry/2017/04/03/235901
https://github.com/NVIDIA/nvidia-docker
https://hub.docker.com/r/nvidia/cuda/
https://www.dlology.com/blog/how-to-run-keras-model-on-movidius-neural-compute-stick/
https://ncsforum.movidius.com/discussion/1106/ncs-temperature-issue
https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend#raspbian-stretch
https://github.com/skhameneh/OpenVINO-ARM64
