• This repository has been archived on 19/Mar/2023
  • Language: Python
  • License: MIT License

Home Assistant custom component for using Deepstack object detection

HASS-Deepstack-object

Home Assistant custom component for Deepstack object detection. Deepstack is a service which runs in a Docker container and exposes various computer vision models via a REST API. Deepstack object detection can identify 80 different kinds of objects (listed at the bottom of this readme), including people (person), vehicles and animals. Alternatively, a custom object detection model can be used. There is no cost for using Deepstack and it is fully open source. To run Deepstack you will need a machine with 8 GB RAM, or an NVIDIA Jetson.

On your machine with docker, run Deepstack with the object detection service active on port 80:

docker run -e VISION-DETECTION=True -e API-KEY="mysecretkey" -v localstorage:/datastore -p 80:5000 deepquestai/deepstack

Usage of this component

The deepstack_object component adds an image_processing entity whose state is the total count of target objects above a confidence threshold (default 80%). You can have a single target object class, or multiple. The time of the last detection of any target object is in the last target detection attribute. The type and number of objects (of any confidence) are listed in the summary attributes. Optionally, a region of interest (ROI) can be configured, and only objects with their centre (represented by an x) within the ROI will be included in the state count. The ROI is displayed as a green box, and objects with their centre in the ROI have a red box.

Optionally, the processed image can be saved to disk with bounding boxes showing the location of detected objects. If save_file_folder is configured, an image with a filename of the format deepstack_object_{source name}_latest.jpg is overwritten on each new detection of a target. Optionally, this image can also be saved with a timestamp in the filename, if save_timestamped_file is configured as True. An event deepstack.object_detected is fired for each detected object that is in the targets list and meets the confidence and ROI criteria. If you are a power user with advanced needs such as zoning detections, or you want to track multiple object types, you will need to use the deepstack.object_detected events.

Note that by default the component will not automatically scan images, but requires you to call the image_processing.scan service e.g. using an automation triggered by motion.
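For example, a minimal automation to trigger a scan from a motion sensor might look like the following (the entity ids are hypothetical examples, not part of this component):

```yaml
# Hypothetical example: trigger a scan when motion is detected.
automation:
  - alias: Deepstack scan on motion
    trigger:
      - platform: state
        entity_id: binary_sensor.motion_garden
        to: "on"
    action:
      - service: image_processing.scan
        entity_id: image_processing.deepstack_object_local_file
```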

Home Assistant setup

Place the custom_components folder in your configuration directory (or add its contents to an existing custom_components folder). Then configure object detection. Important: only a single camera can be configured per deepstack_object entity. If you want to process multiple cameras, you will therefore need multiple deepstack_object image_processing entities.
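As a sketch, processing two cameras means two entries under image_processing (the camera names here are examples only):

```yaml
# Sketch: one deepstack_object entity per camera.
image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    source:
      - entity_id: camera.front_door
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    source:
      - entity_id: camera.back_garden
```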

The component can optionally save snapshots of the processed images. If you would like to use this option, you need to create a folder where the snapshots will be stored. The folder should be in the same folder where your configuration.yaml file is located. In the example below, we have named the folder snapshots.

Add to your Home Assistant config:

image_processing:
  - platform: deepstack_object
    ip_address: localhost
    port: 80
    api_key: mysecretkey
    # custom_model: mask
    # confidence: 80
    save_file_folder: /config/snapshots/
    save_file_format: png
    save_timestamped_file: True
    always_save_latest_file: True
    scale: 0.75
    # roi_x_min: 0.35
    roi_x_max: 0.8
    # roi_y_min: 0.4
    roi_y_max: 0.8
    crop_to_roi: True
    targets:
      - target: person
      - target: vehicle
        confidence: 60
      - target: car
        confidence: 40
    source:
      - entity_id: camera.local_file

Configuration variables:

  • ip_address: the IP address of your Deepstack instance.
  • port: the port of your Deepstack instance.
  • api_key: (Optional) Any API key you have set.
  • timeout: (Optional, default 10 seconds) The timeout for requests to Deepstack.
  • custom_model: (Optional) The name of a custom model, if you are using one. Don't forget to add the custom model's targets to the targets list.
  • confidence: (Optional, default 80) The confidence (in %) above which detected targets are counted in the sensor state.
  • save_file_folder: (Optional) The folder to save processed images to. Note that the folder path should be added to whitelist_external_dirs.
  • save_file_format: (Optional, default jpg, alternatively png) The file format to save images as. png generally results in easier-to-read annotations.
  • save_timestamped_file: (Optional, default False, requires save_file_folder to be configured) Save the processed image with the time of detection in the filename.
  • always_save_latest_file: (Optional, default False, requires save_file_folder to be configured) Always save the last processed image, even if there were no detections.
  • scale: (Optional, default 1.0, range 0.1-1.0) Applies a scaling factor to the images that are saved. This reduces the disk space used by saved images, and is especially beneficial when using high-resolution cameras.
  • show_boxes: (Optional, default True) If False, bounding boxes are not shown on saved images.
  • roi_x_min: (Optional, default 0, range 0-1) Must be less than roi_x_max.
  • roi_x_max: (Optional, default 1, range 0-1) Must be more than roi_x_min.
  • roi_y_min: (Optional, default 0, range 0-1) Must be less than roi_y_max.
  • roi_y_max: (Optional, default 1, range 0-1) Must be more than roi_y_min.
  • crop_to_roi: (Optional, default False) Crops the image to the specified ROI. May improve object detection accuracy when a region of interest is applied.
  • source: Must be a camera.
  • targets: The list of target object names and/or object_type; default person. Optionally a confidence can be set per target; if not set, the default confidence is used. Note the minimum possible confidence is 10%.

For the ROI, the (x=0,y=0) position is the top left pixel of the image, and the (x=1,y=1) position is the bottom right pixel. It might seem odd to have y running from top to bottom of the image, but that is the coordinate system used by Pillow.
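To make the coordinate system concrete, this sketch converts the normalized ROI values from the example config above into pixel coordinates (the helper function and the 1280x720 image size are assumptions for illustration, not part of the component):

```python
# Illustrative helper: map a normalized ROI (values in 0-1, origin at
# the top-left pixel) to pixel coordinates for a given image size.

def roi_to_pixels(roi, width, height):
    """Return (left, top, right, bottom) in pixels."""
    return (
        round(roi["x_min"] * width),
        round(roi["y_min"] * height),
        round(roi["x_max"] * width),
        round(roi["y_max"] * height),
    )

roi = {"x_min": 0.35, "y_min": 0.4, "x_max": 0.8, "y_max": 0.8}
print(roi_to_pixels(roi, 1280, 720))  # (448, 288, 1024, 576)
```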

I created an app for exploring the config parameters at https://github.com/robmarkcole/deepstack-ui

Event deepstack.object_detected

An event deepstack.object_detected is fired for each object detected above the configured confidence threshold. This is the recommended way to check the confidence of detections, and to keep track of objects that are not configured as the target (use Developer tools -> Events -> Listen to events to monitor these events).

An example use case for the event is to get an alert when some rarely appearing object is detected, or to increment a counter. The deepstack.object_detected event payload includes:

  • entity_id : the entity id responsible for the event
  • name : the name of the object detected
  • object_type : the type of the object, from person, vehicle, animal or other
  • confidence : the confidence in detection in the range 0 - 100%
  • box : the bounding box of the object
  • centroid : the centre point of the object
  • saved_file : the path to the saved annotated image, which is the timestamped file if save_timestamped_file is True, or the default saved image if False

An example automation using the deepstack.object_detected event is given below:

- action:
    - data_template:
        caption: "New person detection with confidence {{ trigger.event.data.confidence }}"
        file: "{{ trigger.event.data.saved_file }}"
      service: telegram_bot.send_photo
  alias: Object detection automation
  condition: []
  id: "1120092824622"
  trigger:
    - platform: event
      event_type: deepstack.object_detected
      event_data:
        name: person

Displaying the deepstack latest jpg file

It is easy to display the deepstack_object_{source name}_latest.jpg image with a local_file camera. An example configuration is:

camera:
  - platform: local_file
    file_path: /config/snapshots/deepstack_object_local_file_latest.jpg
    name: deepstack_latest_person

Info on box

The box coordinates and the box center (centroid) can be used to determine whether an object falls within a defined region-of-interest (ROI). This can be useful to include/exclude objects by their location in the image.

  • The box is defined by the tuple (y_min, x_min, y_max, x_max) (equivalent to image top, left, bottom, right) where the coordinates are floats in the range [0.0, 1.0] and relative to the width and height of the image.
  • The centroid is in (x,y) coordinates where (0,0) is the top left hand corner of the image and (1,1) is the bottom right corner of the image.
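A minimal sketch of using these values, e.g. in a script consuming deepstack.object_detected events (the helper functions below are illustrative, not part of the component):

```python
# Illustrative helpers (not part of the component): compute the centroid
# of a box and test whether a point falls inside an ROI. All values are
# fractions of the image size, with (0, 0) the top-left corner.

def centroid(box):
    """box is (y_min, x_min, y_max, x_max); returns (x, y)."""
    y_min, x_min, y_max, x_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

def in_roi(point, roi_x_min=0.0, roi_y_min=0.0, roi_x_max=1.0, roi_y_max=1.0):
    """True if the (x, y) point lies within the ROI rectangle."""
    x, y = point
    return roi_x_min <= x <= roi_x_max and roi_y_min <= y <= roi_y_max

box = (0.2, 0.3, 0.4, 0.5)  # example detection box
cx, cy = centroid(box)
print(round(cx, 3), round(cy, 3))  # 0.4 0.3
print(in_roi((cx, cy), roi_x_max=0.8, roi_y_max=0.8))  # True
```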

Browsing saved images in HA

I highly recommend using the Home Assistant Media Player Browser to browse and preview processed images. Add to your config something like:

homeassistant:
  ...
  whitelist_external_dirs:
    - /config/images/
  media_dirs:
    local: /config/images/

media_source:

Then configure Deepstack to use the above directory for save_file_folder; saved images can then be browsed from the HA front end.

Face recognition

For face recognition with Deepstack use https://github.com/robmarkcole/HASS-Deepstack-face

Support

For code related issues such as suspected bugs, please open an issue on this repo. For general chat or to discuss Home Assistant specific issues related to configuration or use cases, please use this thread on the Home Assistant forums.

Docker tips

Add the -d flag to run the container in the background.
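For example, the run command from earlier with the container detached:

```shell
docker run -d -e VISION-DETECTION=True -e API-KEY="mysecretkey" \
  -v localstorage:/datastore -p 80:5000 deepquestai/deepstack
```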

FAQ

Q1: I get the following warning, is this normal?

2019-01-15 06:37:52 WARNING (MainThread) [homeassistant.loader] You are using a custom component for image_processing.deepstack_face which has not been tested by Home Assistant. This component might cause stability problems, be sure to disable it if you do experience issues with Home Assistant.

A1: Yes, this is normal.


Q4: What are the minimum hardware requirements for running Deepstack?

A4: Based on my experience, I would allow 0.5 GB RAM per model.


Q5: Can object detection be configured to detect car/car colour?

A5: The list of detected object classes is at the end of the page here. There is no support for detecting the colour of an object.


Q6: I am getting an error from Home Assistant: Platform error: image_processing - Integration deepstack_object not found

A6: This can happen when you are running in Docker/Hassio, and indicates that one of the dependencies isn't installed. It is necessary to reboot your Hassio device, or rebuild your Docker container. Note that just restarting Home Assistant will not resolve this.


Objects

The following lists all valid target object names:

person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light,
fire hydrant, stop_sign, parking meter, bench, bird, cat, dog, horse, sheep, cow,
elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee,
skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard,
surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana,
apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch,
potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard,
cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase,
scissors, teddy bear, hair dryer, toothbrush.

Objects are grouped by the following object_type:

  • person: person
  • animal: bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe
  • vehicle: bicycle, car, motorcycle, airplane, bus, train, truck
  • other: any object that is not in person, animal or vehicle

Development

Currently only the helper functions are tested, using pytest.

  • python3 -m venv venv
  • source venv/bin/activate
  • pip install -r requirements-dev.txt
  • venv/bin/py.test custom_components/deepstack_object/tests.py -vv -p no:warnings

Videos of usage

Check out this excellent video of usage from Everything Smart Home.

Also see the video of a presentation I did to the IceVision community on deploying Deepstack on a Jetson nano.
