  • Stars: 587
  • Rank: 75,613 (Top 2%)
  • Language: Python
  • License: MIT License
  • Created: almost 7 years ago
  • Updated: about 1 year ago


Repository Details

A Python package to stabilize videos using OpenCV

Python Video Stabilization


Python video stabilization using OpenCV. Full searchable documentation here.

This module contains a single class (VidStab) used for video stabilization. This class is based on the work presented by Nghia Ho in SIMPLE VIDEO STABILIZATION USING OPENCV. The foundation code was found in a comment on Nghia Ho's post by the commenter with username koala.

(Demo GIFs: input video vs. stabilized output)

Video used with permission from HappyLiving

Contents:

  1. Installation
  2. Basic Usage
  3. Advanced Usage

Installation

Please report issues if you install (or try to install) and run into problems!

Install vidstab without installing OpenCV

If you've already built OpenCV with Python bindings on your machine, it is recommended to install vidstab without the PyPI versions of OpenCV. The opencv-python module can cause issues if you've already built OpenCV from source in your environment.

The below commands will install vidstab without OpenCV included.

From PyPI

pip install vidstab

From GitHub

pip install git+https://github.com/AdamSpannbauer/python_video_stab.git

Install vidstab & OpenCV

If you don't have OpenCV installed already, there are a couple of options.

  1. You can build OpenCV using one of the great online tutorials from PyImageSearch, LearnOpenCV, or OpenCV themselves. When building from source you have more options (e.g. platform optimization), but more responsibility. Once installed you can use the pip install command shown above.
  2. You can install a pre-built distribution of OpenCV from PyPI as a dependency for vidstab (see the commands below).

The below commands will install vidstab with opencv-contrib-python as a dependency.

From PyPI

pip install vidstab[cv2]

From GitHub

pip install -e git+https://github.com/AdamSpannbauer/python_video_stab.git#egg=vidstab[cv2]

Basic usage

The VidStab class can be used as a command-line script or in your own custom Python code.

Using from command line

# Using defaults
python3 -m vidstab --input input_video.mov --output stable_video.avi
# Using a specific keypoint detector
python3 -m vidstab -i input_video.mov -o stable_video.avi -k GFTT

Using VidStab class

from vidstab import VidStab

# Using defaults
stabilizer = VidStab()
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

# Using a specific keypoint detector
stabilizer = VidStab(kp_method='ORB')
stabilizer.stabilize(input_path='input_video.mp4', output_path='stable_video.avi')

# Using a specific keypoint detector and customizing keypoint parameters
stabilizer = VidStab(kp_method='FAST', threshold=42, nonmaxSuppression=False)
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

Advanced usage

Plotting frame to frame transformations

from vidstab import VidStab
import matplotlib.pyplot as plt

stabilizer = VidStab()
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

stabilizer.plot_trajectory()
plt.show()

stabilizer.plot_transforms()
plt.show()

(Plots: trajectory and transforms)
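
If you prefer to save these plots rather than display them interactively, standard matplotlib saving applies (a general matplotlib sketch; the output filenames are placeholders, not a vidstab feature):

from vidstab import VidStab
import matplotlib.pyplot as plt

stabilizer = VidStab()
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

# Save the trajectory plot instead of calling plt.show()
stabilizer.plot_trajectory()
plt.savefig('trajectory_plot.png')
plt.close()

# Save the transforms plot
stabilizer.plot_transforms()
plt.savefig('transforms_plot.png')
plt.close()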

Using borders

from vidstab import VidStab

stabilizer = VidStab()

# black borders
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='stable_video.avi', 
                     border_type='black')
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='wide_stable_video.avi', 
                     border_type='black', 
                     border_size=100)

# filled in borders
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='ref_stable_video.avi', 
                     border_type='reflect')
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='rep_stable_video.avi', 
                     border_type='replicate')

(Demo GIFs: border_size=0, border_size=100, border_type='reflect', border_type='replicate')

Video used with permission from HappyLiving

Using Frame Layering

from vidstab import VidStab, layer_overlay, layer_blend

# init vid stabilizer
stabilizer = VidStab()

# use vidstab.layer_overlay for generating a trail effect
stabilizer.stabilize(input_path=INPUT_VIDEO_PATH,
                     output_path='trail_stable_video.avi',
                     border_type='black',
                     border_size=100,
                     layer_func=layer_overlay)


# create custom overlay function
# here we use vidstab.layer_blend with custom alpha
#   layer_blend will generate a fading trail effect with some motion blur
def layer_custom(foreground, background):
    return layer_blend(foreground, background, foreground_alpha=.8)

# use custom overlay function
stabilizer.stabilize(input_path=INPUT_VIDEO_PATH,
                     output_path='blend_stable_video.avi',
                     border_type='black',
                     border_size=100,
                     layer_func=layer_custom)

(Demo GIFs: layer_func=layer_overlay vs. layer_func=layer_blend)

Video used with permission from HappyLiving

Automatic border sizing

from vidstab import VidStab, layer_overlay

stabilizer = VidStab()

stabilizer.stabilize(input_path=INPUT_VIDEO_PATH,
                     output_path='auto_border_stable_video.avi', 
                     border_size='auto',
                     # frame layering to show performance of auto sizing
                     layer_func=layer_overlay)

Stabilizing a frame at a time

The VidStab.stabilize_frame() method can accept numpy arrays, allowing stabilization to be performed one frame at a time. This enables pre/post-processing of each frame to be stabilized; see the examples below.

Simplest form

import cv2
from vidstab.VidStab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture('input_video.mov')

while True:
    grabbed_frame, frame = vidcap.read()

    if frame is not None:
        # Perform any pre-processing of frame before stabilization here
        pass

    # Pass frame to stabilizer even if frame is None
    # stabilized_frame will be an all black frame until iteration 30
    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
                                                  smoothing_window=30)
    if stabilized_frame is None:
        # There are no more frames available to stabilize
        break

    # Perform any post-processing of stabilized frame here
    pass
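
For instance, a minimal sketch of post-processing that writes the stabilized frames to disk with cv2.VideoWriter (the output filename, codec, and frame rate below are assumptions for illustration, not part of the vidstab API):

import cv2
from vidstab.VidStab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture('input_video.mov')

writer = None
fourcc = cv2.VideoWriter_fourcc(*'XVID')  # assumed codec

while True:
    grabbed_frame, frame = vidcap.read()

    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
                                                  smoothing_window=30)
    if stabilized_frame is None:
        # There are no more frames available to stabilize
        break

    # Skip the all-black frames emitted while the smoothing window fills
    if stabilized_frame.sum() == 0:
        continue

    # Lazily create the writer once the stabilized frame size is known
    if writer is None:
        height, width = stabilized_frame.shape[:2]
        writer = cv2.VideoWriter('post_processed.avi', fourcc, 30, (width, height))  # assumed 30 fps

    writer.write(stabilized_frame)

if writer is not None:
    writer.release()
vidcap.release()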

Example with object tracking

import os
import cv2
from vidstab import VidStab, layer_overlay, download_ostrich_video

# Download test video to stabilize
if not os.path.isfile("ostrich.mp4"):
    download_ostrich_video("ostrich.mp4")

# Initialize object tracker, stabilizer, and video reader
object_tracker = cv2.TrackerCSRT_create()
stabilizer = VidStab()
vidcap = cv2.VideoCapture("ostrich.mp4")

# Initialize bounding box for drawing rectangle around tracked object
object_bounding_box = None

while True:
    grabbed_frame, frame = vidcap.read()

    # Pass frame to stabilizer even if frame is None
    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame, border_size=50)

    # If stabilized_frame is None then there are no frames left to process
    if stabilized_frame is None:
        break

    # Draw rectangle around tracked object if tracking has started
    if object_bounding_box is not None:
        success, object_bounding_box = object_tracker.update(stabilized_frame)

        if success:
            (x, y, w, h) = [int(v) for v in object_bounding_box]
            cv2.rectangle(stabilized_frame, (x, y), (x + w, y + h),
                          (0, 255, 0), 2)

    # Display stabilized output
    cv2.imshow('Frame', stabilized_frame)

    key = cv2.waitKey(5)

    # Select ROI for tracking and begin object tracking
    # Non-zero frame indicates stabilization process is warmed up
    if stabilized_frame.sum() > 0 and object_bounding_box is None:
        object_bounding_box = cv2.selectROI("Frame",
                                            stabilized_frame,
                                            fromCenter=False,
                                            showCrosshair=True)
        object_tracker.init(stabilized_frame, object_bounding_box)
    elif key == 27:
        break

vidcap.release()
cv2.destroyAllWindows()

Working with live video

The VidStab class can also process live video streams. The underlying video reader is cv2.VideoCapture (see the OpenCV documentation). The relevant snippet from that documentation for stabilizing live video is:

Its argument can be either the device index or the name of a video file. Device index is just the number to specify which camera. Normally one camera will be connected (as in my case). So I simply pass 0 (or -1). You can select the second camera by passing 1 and so on.

The input_path argument of the VidStab.stabilize method can accept integers that will be passed directly to cv2.VideoCapture as a device index. You can also pass a device index to the --input argument for command line usage.
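
For example, stabilizing a webcam feed from the command line might look like the following (assuming a single camera at device index 0):

python3 -m vidstab --input 0 --output stable_webcam.avi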

One notable difference between live feeds and video files is that webcam footage does not have a definite end point. The options for ending a live video stabilization are to set a maximum length using the max_frames argument or to manually stop the process by pressing the Esc key or the Q key. If max_frames is not provided, then no progress bar can be displayed for live video stabilization.

Example

from vidstab import VidStab

stabilizer = VidStab()
stabilizer.stabilize(input_path=0,
                     output_path='stable_webcam.avi',
                     max_frames=1000,
                     playback=True)

Transform file writing & reading

Generating and saving transforms to file

import numpy as np
from vidstab import VidStab, download_ostrich_video

# Download video if needed
download_ostrich_video(INPUT_VIDEO_PATH)

# Generate transforms and save to TRANSFORMATIONS_PATH as csv (no headers)
stabilizer = VidStab()
stabilizer.gen_transforms(INPUT_VIDEO_PATH)
np.savetxt(TRANSFORMATIONS_PATH, stabilizer.transforms, delimiter=',')

The file at TRANSFORMATIONS_PATH is of the form shown below. The three columns represent delta x, delta y, and delta angle, respectively.

-9.249733913760086068e+01,2.953221378387767970e+01,-2.875918912994855636e-02
-8.801434576214279559e+01,2.741942225927152776e+01,-2.715232319470826938e-02

Reading and using transforms from file

The example below reads a file of transforms and applies them to an arbitrary video. The transform file is of the form shown in the section above.

import numpy as np
from vidstab import VidStab

# Read in csv transform data, of form (delta x, delta y, delta angle):
transforms = np.loadtxt(TRANSFORMATIONS_PATH, delimiter=',')

# Create stabilizer and supply numpy array of transforms
stabilizer = VidStab()
stabilizer.transforms = transforms

# Apply stabilizing transforms to INPUT_VIDEO_PATH and save to OUTPUT_VIDEO_PATH
stabilizer.apply_transforms(INPUT_VIDEO_PATH, OUTPUT_VIDEO_PATH)

More Repositories

  1. r_regex_tester_app: Shiny Application to test regular expressions in R (R, 59 stars)
  2. ssbm_fox_detector: Keras object detector to detect Fox in Super Smash Bros Melee for the Nintendo Gamecube (Python, 37 stars)
  3. app_rasa_chat_bot: a stateless chat bot to perform natural language queries against the App Store top charts (Python, 27 stars)
  4. iphone_app_icon: playing with top chart app store icons (HTML, 27 stars)
  5. lexRankr: Extractive Text Summarization with lexRankr (an R package implementing the LexRank algorithm) (R, 21 stars)
  6. pixel_art: Turn images into 'iconified' pixel art with Python and OpenCV (Python, 16 stars)
  7. minimal_python_package: A minimal introduction to writing packages with Python (Python, 10 stars)
  8. qa_query (Python, 7 stars)
  9. aws_python_messenger: Tutorial for making a Facebook Messenger chat bot using AWS Lambda and Python (Python, 7 stars)
  10. youtube_reaction_face: using Python, OpenCV, & Keras to pull reaction faces from YouTube videos (Python, 6 stars)
  11. keras_image_r_python: comparing a Keras image classification task done in R and Python (R, 5 stars)
  12. syntax_net_stuff: exploring SyntaxNet paired with Neo4j for document-level text analysis (Python, 5 stars)
  13. lens_blur (Python, 5 stars)
  14. snakeLoadR: small R package to add the snake game as a loader in a Shiny app (R, 4 stars)
  15. minimal_sklearn_model_deploy: Simple Flask + sklearn example to deploy a pickled estimator (Python, 4 stars)
  16. python_dev_survey_analysis: Analysis of Python Developers Survey 2019 results collected by JetBrains (Jupyter Notebook, 4 stars)
  17. cv_plinko: Turn any image/video into a game of Plinko using Python & OpenCV (Python, 4 stars)
  18. trace_race_cv (Python, 4 stars)
  19. misc_code_fun: Miscellaneous smaller projects/scripts for fun (Python, 4 stars)
  20. intro_text_analytics_session: How to intro text analytics in 75 minutes? (HTML, 3 stars)
  21. wedding_ring_detector: Toy project to classify hand images as married or not married (Python, 3 stars)
  22. sql-racer: How fast can you name 10 SQL keywords? (R Shiny app) (R, 2 stars)
  23. tidytext_learning: playing with tidytext, fivethirtyeight data, & ggplot2 at rstudio::conf (R, 2 stars)
  24. vidstab_explained (Python, 2 stars)
  25. rPackedBar: Packed bar charts in R with Plotly (https://community.jmp.com/t5/JMP-Blog/Introducing-packed-bars-a-new-chart-form/ba-p/39972) (R, 2 stars)
  26. ramen_noodle_ratings: Analysis of ramen noodle rating data (Jupyter Notebook, 2 stars)
  27. kanyeText: analyzing Kanye West lyrics with R (R, 1 star)
  28. sketch_book: repo for quick p5.js sketches (JavaScript, 1 star)
  29. spotify_eda_2020: Downloaded my Spotify data for 2020. Let's explore. (1 star)
  30. cs224n (Jupyter Notebook, 1 star)
  31. line_art (Python, 1 star)
  32. class_fork_example (R, 1 star)
  33. p5av_club: An applet to sync audio with different p5.js animations (JavaScript, 1 star)
  34. j-archive: j-archive scraper (HTML, 1 star)
  35. stitcher: Experiment for panorama stitching without any directional assumptions (Python, 1 star)
  36. race-r: How fast can you name 10 R keywords? (R Shiny app) (R, 1 star)
  37. ml_arms: đŸĻžs that teach themselves motor skills right before your 👀 (JavaScript, 1 star)
  38. shinyboggle: (hopefully) a quick morning Shiny app implementation of word search (R, 1 star)
  39. shiny_groom_proposal: R Shiny web app for proposing to groomsmen (R, 1 star)
  40. spotify_facial_expression: I have Spotify listening data. I have facial expression data. Is there anything interesting when combined? (Jupyter Notebook, 1 star)
  41. pyimageconf_workshop_vr_and_ar (Python, 1 star)
  42. art_with_R_holiday: A spinoff of https://adamspannbauer.github.io/art-with-R-intro/ but with more winter (HTML, 1 star)
  43. timeseries-sp24: Notes and slides for an intro time series class, spring 2024 (using R and the fpp3 book + package) (1 star)
  44. DaShiny_comparison: Some comparisons of Dash and Shiny for making simple web apps with Python (Python, 1 star)
  45. data_science_salary: Example supervised learning project on https://www.kaggle.com/andrewmvd/data-analyst-jobs (1 star)
  46. axi-draw-sketch-book (JavaScript, 1 star)
  47. py_pic_particlizer: convert a pic into a particlized interactive animation (with games!) (Python, 1 star)
  48. pyimageconf2018: repo for all things during the conference (1 star)
  49. fractal_tree (JavaScript, 1 star)
  50. datatable_and_dplyr: Comparison of operations done in `dplyr` and `data.table` (HTML, 1 star)
  51. mit6034 (HTML, 1 star)
  52. flixable_ml_dsi: Group project with Thinkful DSI Cohort 2 (Jupyter Notebook, 1 star)
  53. discord.r: WIP: plans to be the R package for interacting with Discord (1 star)
  54. p5_portfolio: A quickly hacked together p5.js portfolio based on https://github.com/github/personal-website (JavaScript, 1 star)
  55. pca_animation: (WIP) An interactive animation to demonstrate the process of Principal Components Analysis (JavaScript, 1 star)
  56. r_generative_contour: Generative art contour plots with Shiny and ggplot2! (R, 1 star)
  57. nlhp: Random NLP tasks with Harry Potter text (Jupyter Notebook, 1 star)