  • Stars: 156
  • Rank: 238,165 (Top 5%)
  • Language: Python
  • License: MIT License
  • Created: almost 9 years ago
  • Updated: over 1 year ago

Repository Details

Real-time object detection and tracking using Raspberry Pi and OpenCV!

Raspberry Pi Real-Time Object Detection and Tracking

1. Introduction

Using a Raspberry Pi and a camera module for computer vision with OpenCV (and TensorFlow Lite). The aim of this project is to provide a starting point for using RPi & CV in your own DIY / maker projects. Camera-based computer vision is very powerful and will bring your project to the next level, letting you track complicated objects that would otherwise not be possible with other types of sensors (infrared, ultrasonic, LiDAR, etc.).

Note that the code is based on Python and OpenCV, meaning it is cross-platform. You can run it on other Linux-based platforms as well, e.g. x86/x64 PC, IPC, Jetson, Banana Pi, LattePanda, BeagleBoard, etc.

2. Dependencies

2.1. Package requirements

This project depends on the following packages:

  • Python >= 3.5
  • OpenCV-Python
  • OpenCV-Contrib-Python
  • NumPy
  • SciPy
  • Matplotlib
  • TensorFlow Lite (optional)

2.2. Hardware support

  • Supports Raspberry Pi 1 Model B, Raspberry Pi 2, Raspberry Pi Zero and Raspberry Pi 3/4 (preferred)
    • Performance varies considerably across boards: RPi 3/4 are preferable as they have more powerful CPUs; RPi 1/2 may struggle and produce very low FPS, in which case you can further reduce the camera resolution (160 x 120).
  • The Nvidia Jetson Nano (A01) has also passed testing.
  • Any USB camera supported by Raspberry Pi
  • The official camera module is NOT yet supported by this code, but you can modify the code to use it (Google "Raspberry Pi Official Camera with OpenCV").
    • (Todo) I will add support in the future.

3. What's in this repository

Currently the following applications are implemented:

  • src/camera-test: Test if the camera is working
  • src/motion-detection: Detect any motion in the frame
  • src/object-tracking-color: Object detection & tracking based on color
  • src/object-tracking-shape: Object detection & tracking based on shape
  • src/object-tracking-feature: Object detection & tracking based on features using ORB
  • src/face-detection: Face detection & tracking
  • (Todo) Object detection using Neural Network (TensorFlow Lite)
  • (Todo) Object detection using YOLO v3 (RPi 4 only)

3.1. Camera Test

Test the RPi and OpenCV environment. If everything is set up correctly, you should see a pop-up window showing the video stream from your USB camera. If the window does not appear, check (1) your environment and (2) the camera connection.
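
For reference, a minimal sketch of what such a camera test typically looks like (the actual script in src/camera-test may differ; camera index 0 is an assumption):

import cv2

IMAGE_WIDTH, IMAGE_HEIGHT = 320, 240            # default resolution used in this repo

cap = cv2.VideoCapture(0)                       # first USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, IMAGE_WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, IMAGE_HEIGHT)

while True:
    ret, frame = cap.read()                     # grab one frame
    if not ret:
        break
    cv2.imshow("Camera Test", frame)            # pop-up window with the live stream
    if cv2.waitKey(1) & 0xFF == 27:             # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()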

3.2. Motion Detection

Detect object movements in the image and print a warning message if any movement is detected. The detection is based on the mean squared error (MSE) between two images.
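
The idea can be sketched as follows (an illustrative example rather than the exact script; the MSE threshold value is an assumption to tune for your scene):

import cv2
import numpy as np

THRESHOLD = 25.0                                  # assumed MSE threshold

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # mean squared error between the current and the previous frame
    mse = np.mean((gray.astype(np.float32) - prev.astype(np.float32)) ** 2)
    if mse > THRESHOLD:
        print("Warning: motion detected (MSE = %.1f)" % mse)
    prev = gray
    cv2.imshow("Motion Detection", frame)
    if cv2.waitKey(1) & 0xFF == 27:               # ESC to quit
        break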

3.3. Color-based Object Detection and Tracking

Track an object based on its color in HSV space and print its center position. You can choose your own target color by clicking on the object of interest; click multiple times on different points so that the full color range is covered. You can hard-code the parameters so you don't need to pick them again on the next run. The following demo shows how I track a Nintendo game controller in real time:
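
A minimal sketch of this approach, assuming a hard-coded HSV range instead of the click-to-pick step (the range values below are placeholders):

import cv2
import numpy as np

# placeholder HSV range (roughly green); in the real script you pick it by clicking
LOWER_HSV = np.array([35, 80, 80])
UPPER_HSV = np.array([85, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)            # keep pixels inside the range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,  # OpenCV 4.x return signature
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)               # largest blob of that colour
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            print("center:", cx, cy)
    cv2.imshow("Color Tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break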

3.4. Shape-based Object Detection and Tracking

Detect and track round objects using HoughCircles(). Support for squares is coming soon.
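
A minimal sketch of the HoughCircles() approach (the transform parameters below are assumptions and need tuning for your camera and objects):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                 # reduce noise before the transform
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=60, minRadius=10, maxRadius=120)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            cv2.circle(frame, (x, y), r, (0, 255, 0), 2)   # draw each detected circle
    cv2.imshow("Circles", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break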

3.5. Feature-based Object Detection and Tracking (with ORB)

Detect and track an object using its features. The algorithm selected here is ORB (Oriented FAST and Rotated BRIEF) because its fast computation enables real-time detection. To use the example, have an Arduino UNO board at hand (or replace simple.png with an image of your own object).
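
A minimal sketch of ORB matching against the reference image (simple.png is the reference mentioned above; the feature count and the number of matches drawn are assumptions):

import cv2

template = cv2.imread("simple.png", cv2.IMREAD_GRAYSCALE)    # reference image of the object
if template is None:
    raise SystemExit("simple.png not found in the working directory")

orb = cv2.ORB_create(nfeatures=500)
kp_t, des_t = orb.detectAndCompute(template, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Hamming distance for binary descriptors

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = orb.detectAndCompute(gray, None)
    if des_f is not None:
        matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)
        out = cv2.drawMatches(template, kp_t, gray, kp_f, matches[:20], None)
        cv2.imshow("ORB Matches", out)
    if cv2.waitKey(1) & 0xFF == 27:
        break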

3.6. Face Detection and Tracking

Detect faces using the Haar Cascade detector.
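
A minimal sketch using the frontal-face Haar cascade bundled with OpenCV (the actual script may differ; the detectMultiScale parameters are typical defaults, not the repo's exact values):

import cv2

# load the frontal-face Haar cascade that ships with opencv-python
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)  # box around each face
    cv2.imshow("Face Detection", frame)
    if cv2.waitKey(1) & 0xFF == 27:
        break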

3.7. Object Detection using Neural Network (TensorFlow Lite)

(ongoing) Use TensorFlow Lite to recognise objects.
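
Since this part is still ongoing, the following is only a sketch of how TensorFlow Lite inference typically looks on a single frame; detect.tflite is a hypothetical model file (not shipped with this repo), and the input format assumes a quantized detection model:

import cv2
import numpy as np
import tensorflow as tf                           # installed via the wheel in Section 4.2

interpreter = tf.lite.Interpreter(model_path="detect.tflite")  # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
_, height, width, _ = inp["shape"]

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
resized = cv2.resize(frame, (width, height))
# assuming a quantized model expecting uint8 input of shape (1, height, width, 3);
# colour order and normalisation depend on the model you use
interpreter.set_tensor(inp["index"], np.expand_dims(resized, axis=0).astype(np.uint8))
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print(output)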

4. How to Run

4.1. Install the environment on Raspberry Pi

sudo apt-get install libopencv-dev
sudo apt-get install libatlas-base-dev
pip3 install virtualenv Pillow numpy scipy matplotlib
pip3 install opencv-python opencv-contrib-python

4.2. Install TensorFlow Lite (optional; only if you want to use the neural network example)

wget https://github.com/PINTO0309/Tensorflow-bin/raw/master/tensorflow-2.1.0-cp37-cp37m-linux_armv7l.whl
pip3 install --upgrade setuptools
pip3 install tensorflow-2.1.0-cp37-cp37m-linux_armv7l.whl
pip3 install -e .

4.3. Run the scripts

Run the scripts in the src folder with: python3 src/$FOLDER_NAME$/$SCRIPT_NAME$.py

To stop the code, press the ESC key on your keyboard.

4.4. Change camera resolution

Changing the resolution significantly impacts the FPS. By default it is set to 320 x 240, but you can change it to any value your camera supports at the beginning of each source file (defined by IMAGE_WIDTH and IMAGE_HEIGHT; see the sketch after the list below). Typical resolutions are:

  • 160 x 120
  • 320 x 240
  • 640 x 480 (480p)
  • 1280 x 720 (720p)
  • 1920 x 1080 (1080p; make sure your camera supports this high resolution.)
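
For reference, this is roughly how the resolution constants are applied to the capture device (a sketch, not the exact code in the scripts):

import cv2

IMAGE_WIDTH = 320     # change these two constants at the top of each script
IMAGE_HEIGHT = 240

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, IMAGE_WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, IMAGE_HEIGHT)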

5. Q&A

Q1: Does this support Nvidia Jetson?
A1: Yes. I have tested with my Jetson Nano 4GB.

Q2: Does this support the Raspberry Pi camera?
A2: Not at the moment, but I will add it later (if it is not too difficult).

License

© This source code is licensed under the MIT License.

More Repositories

  1. realtime-embedded-conferences (HTML, 92 stars): Tracking conferences in Real-time Systems, Embedded Systems, Design Automation, Cyber-Physical Systems, and Robotics.
  2. dag-gen-rnd (Python, 33 stars): A randomized multi-DAG task generator for scheduling and allocation research.
  3. research-sched-tsn (MATLAB, 28 stars): Experiment code (MATLAB) for "Fixed-Priority Scheduling and Controller Co-Design for Time-Sensitive Network", ICCAD, 2020.
  4. research-dag-scheduling-analysis (17 stars): Experiments and evaluation for the paper "DAG Scheduling and Analysis on Multiprocessor Systems: Exploitation of Parallelism and Dependency", RTSS 2020 (artifact evaluation passed).
  5. yfips-indoor-positioning-system (Python, 7 stars): (Work in progress) YF-IPS: low-cost sensors for multi-robot indoor positioning, tracking and navigation.
  6. processing-space-travelling (Processing, 4 stars): Space-travelling: a computer-generated art piece programmed in Processing.
  7. multi-robot-scheduling-unity-ros (C#, 3 stars): Implementation of a multi-robot environment for warehouse scheduling in Unity.
  8. rpi-pico-nyan-cat (Python, 3 stars): RPi Pico + Pimoroni LCD to create the Nyan Cat meme animation.
  9. arduino-yaoji (Arduino, 3 stars): Yaoji is an open-source digital plant based on Arduino and Android.
  10. processing-matrix-rain (Processing, 3 stars): Matrix animation created with Processing.
  11. research-period-adaptation (MATLAB, 3 stars): MATLAB code for the paper "Period Adaptation of Real-Time Control Tasks with Fixed-Priority Scheduling in Cyber-Physical Systems", Journal of Systems Architecture (JSA), 2019.
  12. research-wcet-trend-analysis (Java, 2 stars): MATLAB code for the paper "Predicting Worst-Case Execution Times Trends in Long-lived Cyber-Physical Systems", Ada-Europe, 2017.
  13. rpi-environmental-sensing (Python, 2 stars): An open-source domestic environment sensing system built with Raspberry Pi Zero W + HTU21D + AM2306 + PMS7003. Supports reporting to MySQL and an MQTT broker.
  14. rts-blog (Shell, 2 stars): Real-time systems knowledge that you will not easily find elsewhere.
  15. research-dual-period (MATLAB, 2 stars): MATLAB & Simulink code for the paper "Exploiting a Dual-Mode Strategy for Performance-Maximization and Resource-Efficient CPS Design", EMSOFT, 2019.
  16. yf-smart-home-iot (Python, 2 stars): IoT system for smart home and home automation.
  17. esp32-iot-htu21d-mqtt (C++, 1 star): A low-power environmental sensing device using ESP32 + HTU21D. Supports MQTT and an OLED display.
  18. blynk-http-python-lib (Python, 1 star): A Python library for exchanging data with Blynk using RESTful APIs.
  19. processing-cyberpunk-city (Processing, 1 star): Cyberpunk city: a computer-generated art piece programmed in Processing.
  20. BNO055-arduino-processing-bunny (Processing, 1 star)
  21. processing-doge (Processing, 1 star): An interactive program made with Processing that brings Doge to life.
  22. robot-arm-webgl (JavaScript, 1 star): Simulate a robot arm with three.js and WebGL.
  23. rpi-mini-camera (Python, 1 star): Mini DSLR camera using Raspberry Pi and the High Quality Camera.
  24. rpi-pico-robot (1 star): Robot running environment and libraries (C/C++) using Raspberry Pi Pico.