TensorFlow Object Counting API
The TensorFlow Object Counting API is an open source framework built on top of TensorFlow and Keras that makes it easy to develop object counting systems. Please get in touch if you need a professional object detection, tracking and counting project with high accuracy and reliability!
QUICK DEMO
Cumulative Counting Mode (TensorFlow implementation):
Real-Time Counting Mode (TensorFlow implementation):
Object Tracking Mode (TensorFlow implementation):
- The tracking module was built on top of this approach.
Object Counting On Single Image (TensorFlow implementation):
Object Counting based R-CNN (Keras and TensorFlow implementation):
Object Segmentation & Counting based Mask R-CNN (Keras and TensorFlow implementation):
BONUS: Custom Object Counting Mode (TensorFlow implementation):
You can train TensorFlow models with your own training data to build your own custom object counting system! If you want to learn how to do it, please check the sample projects below, which cover some of the theory of transfer learning and show how to apply it in useful projects.
Sample Project#1: Smurf Counting
More info can be found here!
Sample Project#2: Barilla-Spaghetti Counting
More info can be found here!
Development is in progress! The API will be updated soon; a more capable and lightweight version of the API will be available in this repo!
- Detailed API documentation and sample Jupyter notebooks that explain basic usage of the API will be added!
You can find a sample project and case study that uses the TensorFlow Object Counting API in this repo.
USAGE
1.) Usage of "Cumulative Counting Mode"
1.1) For detecting, tracking and counting pedestrians with color prediction disabled
Usage of "Cumulative Counting Mode" for the "pedestrian counting" case:
is_color_recognition_enabled = False # set to True to enable color prediction for the detected objects
roi = 385 # roi line position
deviation = 1 # the constant that represents the object counting area
object_counting_api.cumulative_object_counting_x_axis(input_video, detection_graph, category_index, is_color_recognition_enabled, roi, deviation) # counting all the objects
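The snippet above assumes that input_video, detection_graph and category_index have already been prepared. The sketch below shows one way to prepare them with the standard TensorFlow Object Detection API utilities (a frozen inference graph plus a label map); the file paths are placeholders, the object_counting_api import assumes this repo's api/ package layout, and pedestrian_counting.py shows the exact setup used in this repo. The same preparation applies to the other usage snippets in this section.

# Hedged sketch: prepare the inputs for cumulative_object_counting_x_axis.
# Paths are placeholders; see pedestrian_counting.py for this repo's exact setup.
import tensorflow as tf
from object_detection.utils import label_map_util  # TensorFlow Object Detection API
from api import object_counting_api  # this repo's counting API

input_video = "./input_images_and_videos/pedestrians.mp4"                  # placeholder video path
PATH_TO_FROZEN_GRAPH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"   # placeholder model path
PATH_TO_LABELS = "data/mscoco_label_map.pbtxt"                             # placeholder label map

# Load the frozen detection graph (TensorFlow 1.x style).
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        od_graph_def.ParseFromString(fid.read())
        tf.import_graph_def(od_graph_def, name='')

# Build the category index that maps class ids to human-readable names.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

is_color_recognition_enabled = False  # color prediction disabled for this case
roi = 385      # position of the counting line, in pixels
deviation = 1  # tolerance band around the ROI line

object_counting_api.cumulative_object_counting_x_axis(input_video, detection_graph, category_index, is_color_recognition_enabled, roi, deviation)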
Result of the "pedestrian counting" case:
Source code of "pedestrian counting case-study": pedestrian_counting.py
1.2) For detecting, tracking and counting vehicles with color prediction enabled
Usage of "Cumulative Counting Mode" for the "vehicle counting" case:
is_color_recognition_enabled = True # set to True to enable color prediction for the detected objects
roi = 200 # roi line position
deviation = 3 # the constant that represents the object counting area
object_counting_api.cumulative_object_counting_y_axis(input_video, detection_graph, category_index, is_color_recognition_enabled, roi, deviation) # counting all the objects
Result of the "vehicle counting" case:
Source code of "vehicle counting case-study": vehicle_counting.py
2.) Usage of "Real-Time Counting Mode"
2.1) For detecting, tracking and counting the targeted object(s) with color prediction disabled
Usage of "the targeted object is bicycle":
is_color_recognition_enabled = False # set to True to enable color prediction for the detected objects
targeted_objects = "bicycle"
object_counting_api.targeted_object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled, targeted_objects) # targeted objects counting
Result of "the targeted object is bicycle":
Usage of "the targeted object is person":
is_color_recognition_enabled = False # set to True to enable color prediction for the detected objects
targeted_objects = "person"
object_counting_api.targeted_object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled, targeted_objects) # targeted objects counting
Result of "the targeted object is person":
Usage of "detecting, counting and tracking all the objects":
is_color_recognition_enabled = False # set to True to enable color prediction for the detected objects
object_counting_api.object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled) # counting all the objects
Result of "detecting, counting and tracking all the objects":
Usage of "detecting, counting and tracking the multiple targeted objects":
targeted_objects = "person, bicycle" # (for counting targeted objects) change it with your targeted objects
is_color_recognition_enabled = False # set it to true for enabling the color prediction for the detected objects
object_counting_api.targeted_object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled, targeted_objects) # targeted objects counting
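For a complete run, the same model preparation as in section 1.1 is needed here. A short, hedged sketch (placeholder paths, standard TensorFlow Object Detection API utilities) for the multiple-targets case:

# Hedged sketch: count only persons and bicycles in real time.
# Paths are placeholders; see real_time_counting.py for this repo's exact setup.
import tensorflow as tf
from object_detection.utils import label_map_util
from api import object_counting_api

input_video = "./input_images_and_videos/street.mp4"                       # placeholder video path
PATH_TO_FROZEN_GRAPH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"   # placeholder model path
PATH_TO_LABELS = "data/mscoco_label_map.pbtxt"                             # placeholder label map

detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        od_graph_def.ParseFromString(fid.read())
        tf.import_graph_def(od_graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

targeted_objects = "person, bicycle"  # comma-separated class names from the label map
is_color_recognition_enabled = False  # color prediction disabled

object_counting_api.targeted_object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled, targeted_objects)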
2.2) For detecting, tracking and counting all the objects with color prediction disabled
Usage of detecting, counting and tracking all the objects with color prediction disabled:
is_color_recognition_enabled = False # set to True to enable color prediction for the detected objects
object_counting_api.object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled) # counting all the objects
Result of detecting, counting and tracking all the objects with color prediction disabled:
Usage of detecting, counting and tracking all the objects with color prediction enabled:
is_color_recognition_enabled = True # set to True to enable color prediction for the detected objects
object_counting_api.object_counting(input_video, detection_graph, category_index, is_color_recognition_enabled) # counting all the objects
Result of detecting, counting and tracking all the objects with color prediction enabled:
3.) Usage of "Object Tracking Mode"
Just run object_tracking.py
For sample usage of "Real-Time Counting Mode", see real_time_counting.py.
The minimum object detection threshold can be set in this line as a fraction of confidence (0.5 corresponds to 50%). The default minimum object detection threshold is 0.5!
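For clarity, the threshold is compared against the detector's confidence scores, which range from 0 to 1. A tiny illustration (not this repo's exact code):

import numpy as np

# Illustration only: detections are kept when their confidence score is at or
# above the minimum threshold; 0.5 corresponds to 50% confidence.
scores = np.array([0.91, 0.42, 0.77])  # example detection confidences
min_score_thresh = 0.5                 # default minimum detection threshold
kept = scores >= min_score_thresh      # -> [True, False, True]
print(int(kept.sum()), "of", len(scores), "detections pass the threshold")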
General Capabilities of The TensorFlow Object Counting API
Here are some cool capabilities of TensorFlow Object Counting API:
- Detect just the targeted objects
- Detect all the objects
- Count just the targeted objects
- Count all the objects
- Predict color of the targeted objects
- Predict color of all the objects
- Predict speed of the targeted objects
- Predict speed of all the objects
- Print out the detection-counting result in a .csv file as an analysis report
- Save and store detected objects as new images under detected_object folder
- Select, download and use state of the art models that are trained by Google Brain Team
- Use your own trained models or a fine-tuned model to detect specific object(s)
- Save detection and counting results as a new video or show detection and counting results in real time
- Process images or videos depending on your requirements
Here are some cool architectural design features of TensorFlow Object Counting API:
- Lightweight, runs in real-time
- Scalable and well-designed framework, easy usage
- Takes advantage of a Pythonic approach
- Supports REST architecture and RESTful web services
TODOs:
- TensorFlow 2.x support will be provided.
- An autonomous training image annotation tool will be developed.
- A GUI will be developed.
Theory
System Architecture
- Object detection and classification have been developed on top of the TensorFlow Object Detection API; see that project for more info.
- Object color prediction has been developed using OpenCV via a K-Nearest Neighbors classification algorithm trained on color histogram features; a minimal sketch is given below.
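The color prediction idea can be summarized with a short, hedged sketch (not this repo's exact implementation): build a color histogram feature for a detected crop and classify it with OpenCV's K-Nearest Neighbors.

# Hedged sketch: classify a detected object's color with OpenCV's KNN classifier
# trained on color histogram features (not this repo's exact implementation).
import cv2
import numpy as np

def hist_feature(bgr_image, bins=16):
    # Flattened, normalized hue/saturation histogram used as the feature vector.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten().astype(np.float32)

# Hypothetical training data: solid-color patches standing in for labeled crops.
red_patch = np.full((32, 32, 3), (0, 0, 255), dtype=np.uint8)   # BGR red
blue_patch = np.full((32, 32, 3), (255, 0, 0), dtype=np.uint8)  # BGR blue
train_samples = np.vstack([hist_feature(red_patch), hist_feature(blue_patch)])
train_labels = np.array([[0], [1]], dtype=np.float32)           # 0 = red, 1 = blue

knn = cv2.ml.KNearest_create()
knn.train(train_samples, cv2.ml.ROW_SAMPLE, train_labels)

# Predict the color of a new crop (here another reddish patch).
query = hist_feature(np.full((32, 32, 3), (0, 0, 200), dtype=np.uint8)).reshape(1, -1)
_, result, _, _ = knn.findNearest(query, k=1)
print("predicted color id:", int(result[0][0]))  # expected: 0 (red)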
TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.
Tracker
The source video is read frame by frame with OpenCV. Each frame is processed by an "SSD with Mobilenet" model developed in TensorFlow. This loop continues until the end of the video is reached. The main pipeline of the tracker is given in the figure above.
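A hedged sketch of that pipeline (not this repo's exact code): read frames with OpenCV, run each frame through the frozen "SSD with Mobilenet" graph in a single TensorFlow session, and stop when the video ends. The tensor names are the standard ones exported by the TensorFlow Object Detection API; paths are placeholders.

# Hedged sketch of the frame-by-frame pipeline described above.
import cv2
import tensorflow as tf

# Load the frozen "SSD with Mobilenet" graph (placeholder path).
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("ssd_mobilenet_v1_coco/frozen_inference_graph.pb", 'rb') as f:
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

cap = cv2.VideoCapture("./input_images_and_videos/sample.mp4")  # placeholder video path

with tf.Session(graph=detection_graph) as sess:
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    boxes_t = detection_graph.get_tensor_by_name('detection_boxes:0')
    scores_t = detection_graph.get_tensor_by_name('detection_scores:0')
    classes_t = detection_graph.get_tensor_by_name('detection_classes:0')

    while True:  # loop until the end of the video
        ret, frame = cap.read()
        if not ret:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        boxes, scores, classes = sess.run([boxes_t, scores_t, classes_t],
                                          feed_dict={image_tensor: rgb[None, ...]})  # add batch dimension
        # detections with score >= 0.5 (the default threshold) are what gets
        # tracked, counted and drawn on the frame

cap.release()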
Models
By default I use an "SSD with Mobilenet" model in this project. You can find more information about SSD here.
Please see the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies. You can easily select, download and use state-of-the-art models that suit your requirements using the TensorFlow Object Detection API.
You can perform transfer learning on trained TensorFlow models to build your custom object counting systems!
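For example, once you have fine-tuned and exported a model with the TensorFlow Object Detection API, only the frozen graph, the label map and the number of classes change; a hedged sketch with placeholder paths:

# Hedged sketch: swap in your own fine-tuned, exported model; paths are placeholders.
import tensorflow as tf
from object_detection.utils import label_map_util
from api import object_counting_api

PATH_TO_FROZEN_GRAPH = "my_exported_model/frozen_inference_graph.pb"  # your fine-tuned export
PATH_TO_LABELS = "my_exported_model/label_map.pbtxt"                  # your custom classes
NUM_CLASSES = 1                                                       # number of your custom classes

detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Any counting call from the USAGE section works with the custom model, e.g.:
object_counting_api.object_counting("./input_images_and_videos/custom.mp4", detection_graph, category_index, False)  # color prediction disabled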
Project Demo
Demo video of the project is available on My YouTube Channel.
Installation
Dependencies
The TensorFlow Object Counting API depends on the following libraries (see requirements.txt):
- TensorFlow Object Detection API
- tensorflow==1.5.0
- keras==2.0.8
- opencv-python==4.4.0.42
- Protobuf 3.0.0
- Python-tk
- Pillow 1.0
- lxml
- tf Slim (which is included in the "tensorflow/models/research/" checkout)
- Jupyter notebook
- Matplotlib
- Cython
- contextlib2
- cocoapi
For detailed steps to install TensorFlow, follow the TensorFlow installation instructions.
The TensorFlow Object Detection API has to be installed to run the TensorFlow Object Counting API; for more information, please see this.
Important: compatibility problems caused by the TensorFlow 2.x version.
This project was developed with TensorFlow 1.5.0. If you need to run it with TensorFlow 2.x, just replace the tensorflow imports with tensorflow.compat.v1 and add tf.disable_v2_behavior(); that's all.
Instead of this import statement:
import tensorflow as tf
use this:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
Citation
If you use this code for your publications, please cite it as:
@ONLINE{
  author = "Ahmet Özlü",
  title  = "TensorFlow Object Counting API",
  year   = "2018",
  url    = "https://github.com/ahmetozlu/tensorflow_object_counting_api"
}
Author
Ahmet Özlü
License
This system is available under the MIT license. See the LICENSE file for more info.