• Stars: 128
• Rank: 279,429 (top 6%)
• Language: Python
• Created: almost 7 years ago
• Updated: over 3 years ago

Repository Details

A tool to describe the content of videos and suggest similar scenes in other videos/films.

Scenescoop

Scenescoop is a tool to get similar semantic scenes from a pair of videos. You input a video and get back a scene with a similar meaning from another video. You can run it as a Python script or as a web app.

How it works

Scenescoop uses the im2txt TensorFlow model to analyze a video frame by frame and get a description of the content of each frame. Frames with the same description are grouped together to create a sequence, or scene.

Scene descriptions are then analyzed with spaCy, which parses each sentence and represents it as the average of its built-in word vectors.

Finally, Annoy is used to build an index over these sentence vectors for fast approximate nearest-neighbor lookup (an approach based on @aparrish's Plot to Poem).
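
The pipeline roughly amounts to something like this sketch (illustrative only, not the actual Scenescoop code; it assumes a spaCy model with word vectors, like en_core_web_md, is installed):

import spacy
from annoy import AnnoyIndex

nlp = spacy.load("en_core_web_md")  # any spaCy model with word vectors

descriptions = [
    "a man sitting at a table with a plate of food",
    "a group of people walking down the street",
]

# spaCy's Doc.vector is already the average of the token word vectors
vectors = [nlp(text).vector for text in descriptions]

index = AnnoyIndex(len(vectors[0]), "angular")
for i, vector in enumerate(vectors):
    index.add_item(i, vector)
index.build(10)  # 10 trees; more trees trade build time for accuracy

# find the stored description closest in meaning to a new sentence
query = nlp("two friends strolling along a road").vector
nearest, = index.get_nns_by_vector(query, 1)
print(descriptions[nearest])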

This project is inspired by Thingscoop.

Video Demos

A man sitting at a table with a plate of food

A group of people walking down the street

Usage

To run this you'll need to install a few dependencies. You can follow the original repository or the instructions Edouard Fouché wrote. (I plan to write a step-by-step guide on how to install everything.)

You can also get the pretrained model I'm using here.

Once everything is installed, clone the repo and install the project dependencies:

git clone https://github.com/cvalenzuela/scenescoop.git
cd scenescoop
pip install -r requirements.txt

You can then run Scenescoop in two modes:

1) Frame Analysis Mode

Given a video file --video (.mp4, .avi, .mkv or .mov), this mode analyzes the file frame by frame and outputs a .json file containing the descriptions of those frames. The --name argument sets the output name of the transcript.

Example:

python scenescoop.py --video videos/moonrisekingdom.mp4 --name moonrisekingdom

The .json file should look something like this:

{
  ...
  "a person is taking a picture of themselves in a mirror ": [4834],
  "a man sitting in the back of a pickup truck ": [2265, 2266],
  "a man sitting on a bench in front of a building ": [1935, 1937, 1938, 3950, 3951, 3952, 3953, 3960, 4072, 4073, 4074, 4075, 4077, 4079, 4080, 4082, 4115, 4467],
  "a man standing next to a tree holding a surfboard ": [2470]
  ...
}
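
Each key is a generated description and each value lists the frames it was assigned to. As a rough illustration (a hypothetical helper, not the actual Scenescoop code), runs of consecutive frame numbers sharing a description can be collapsed into scenes:

import json

def scenes_from_transcript(path):
    with open(path) as f:
        transcript = json.load(f)
    scenes = []
    for description, frames in transcript.items():
        frames = sorted(frames)
        start = prev = frames[0]
        for frame in frames[1:]:
            if frame != prev + 1:  # gap in frame numbers: a new scene begins
                scenes.append((description, start, prev))
                start = frame
            prev = frame
        scenes.append((description, start, prev))
    return scenes

# e.g. frames 3950-3953 and 4072-4075 above become two separate scenes
for description, first, last in scenes_from_transcript("transcripts/moonrisekingdom.json"):
    print(f"frames {first}-{last}: {description.strip()}")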

2) Transfer Mode

Two videos are required for this mode, and both should have their transcript .json file created in Frame Analysis Mode.

The --input_data argument is the .json file containing the data for the input video, and --transform_data is the .json file for the transfer video. --input_seconds is the time range of the input video to transfer (e.g. 0,5 selects seconds 0 through 5), and --transform_src is the video source of the transfer video.

Example:

python scenescoop.py --input_data transcripts/street.json --input_seconds 0,5 --transform_src videos/her.avi --transform_data transcripts/her.json

You can print all options with python scenescoop.py -h:

usage: scenescoop.py [-h] [--video VIDEO] [--name NAME]
                     [--input_data INPUT_DATA] [--input_seconds INPUT_SECONDS]
                     [--transform_src TRANSFORM_SRC]
                     [--transform_data TRANSFORM_DATA] [--api API]

Storiescoop

optional arguments:
  -h, --help            show this help message and exit
  --video VIDEO         Video Source to transform
  --name NAME           Name of the video
  --input_data INPUT_DATA
                        Input Video. Must be a json file.
  --input_seconds INPUT_SECONDS
                        Input Video Seconds to create transformation. Example:
                        1,30
  --transform_src TRANSFORM_SRC
                        Transform Video Source.
  --transform_data TRANSFORM_DATA
                        Transform Video Data. Must be a json file.
  --api API             API Request

Web App

You can also launch an interactive web app, served with Flask, to run Frame Analysis Mode and Transfer Mode from a webpage. You'll still need all the dependencies installed.

To run the app on a local server:

python server.py

Then visit localhost:8080.

To modify the source code:

cd static
yarn watch

MMS

Local development of the MMS application:

Start ngrok:

./ngrok http 7676

Configure the URL in Twilio and set it as NGROK_URL in the server.

Start the Redis server:

redis-server

Start the Celery worker:

celery -A server.celery worker

Finally, start the server:

python server.py
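
For orientation, here's a minimal sketch of how these pieces typically fit together, with Flask receiving the Twilio webhook and Celery (backed by the Redis broker) doing the heavy work. The route, task, and variable names are assumptions, not the actual server.py:

from celery import Celery
from flask import Flask, request

app = Flask(__name__)
celery = Celery(app.name, broker="redis://localhost:6379/0")  # the redis-server above

NGROK_URL = "https://example.ngrok.io"  # paste the URL ngrok prints; also set it in Twilio

@celery.task
def process_mms(media_url):
    # long-running work (e.g. running the video analysis) happens in the
    # Celery worker so the webhook can answer Twilio right away
    print("processing", media_url)

@app.route("/mms", methods=["POST"])
def incoming_mms():
    # Twilio POSTs incoming MMS data to NGROK_URL + /mms
    process_mms.delay(request.form.get("MediaUrl0"))
    return "", 204

if __name__ == "__main__":
    app.run(port=7676)  # the port ngrok is forwarding to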

License

MIT

More Repositories

1. Mappa (JavaScript, 359 stars): A canvas wrapper for Maps 🗺 🌍
2. hpc (117 stars): A quick reference to access NYU High Performance Computing
3. Selected_Stories (JavaScript, 40 stars): An experimental web text editor that runs an LSTM model while you write to suggest new lines
4. sequential-stories (Objective-C, 21 stars): Using Tensorflow's im2txt model to generate stories in an iOS app
5. p5deeplearn (Jupyter Notebook, 19 stars): deeplearn.js meets p5
6. carbon (JavaScript, 13 stars): Watch local files for changes and share them with the world 🌎
7. runway_workshop_itpcamp (JavaScript, 10 stars): RunwayML workshop @ ITP Camp 2018
8. gpt2-slack-bot (JavaScript, 8 stars): A GPT-2 Slack Bot with RunwayML's hosted models
9. Trade-Flow (JavaScript, 8 stars): Visualize and listen to economic trade data
10. deeplearn-chrome_extension (JavaScript, 8 stars)
11. sidewalk_orchestra (JavaScript, 7 stars): An experimental musical app using pose estimation of a live sidewalk video stream
12. git-cheatsheet (7 stars): Git cheatsheet
13. sfpc (JavaScript, 6 stars): Machine Learning Literacy Workshop @SFPC
14. rwet (JavaScript, 6 stars): Work and assignments for the class Reading and Writing Electronic Text
15. psnotify (JavaScript, 5 stars): A small library that uses Twilio to send an SMS when a job finishes running in Paperspace
16. ml5_KNN_example (JavaScript, 4 stars): ml5 KNN example for Eyebeam
17. bode.ga (JavaScript, 3 stars): A 24-hour web documentary that captures the everyday interactions and transactions of a small bodega in Queens, NY
18. lstm_training (Python, 3 stars)
19. alt_docs (JavaScript, 2 stars)
20. polybius (JavaScript, 2 stars): Save webpages and compare them over time
21. automating_video (Python, 2 stars)
22. PGAN_Runway (Python, 2 stars): Progressive Growing of GANs (PGAN) ported to Runway
23. google-autocompleteme (Python, 2 stars): Let Google complete you
24. interactive-music (JavaScript, 2 stars)
25. Traceroute-your-history (JavaScript, 1 star)
26. understanding-networks (JavaScript, 1 star)
27. live-web (JavaScript, 1 star)
28. usermanual (Jupyter Notebook, 1 star): Generate a PDF file with a set of instructions detailing your daily computer activity in a series of steps
29. designexpo (C#, 1 star)
30. javascript_cheatsheet (JavaScript, 1 star): Simple cheatsheet for .js
31. Nutrition-Facts (JavaScript, 1 star): Nutrition Fact, a Chrome extension: get to know the content of the websites you visit
32. satellite-alphabet (JavaScript, 1 star): Satellite-based typography
33. Data-and-Digital-Mapping (JavaScript, 1 star): Work and assignments for the class Everything is Spatial: Data and Digital Mapping, taught by Mimi Onuoha @ ITP NYU, Spring 2017