VideoContext

The VideoContext is an experimental HTML5/WebGL media processing and sequencing library for creating interactive and responsive videos on the web.

It consists of two main components. A graph based, shader accelerated processing pipeline, and a media playback sequencing timeline.

The design is heavily inspired by the Web Audio API, so it should feel familiar for people with experience in the Web Audio world.

Live examples can be found here.

Table of Contents

  • Demo
  • Debugging
  • Documentation
  • Node Types
  • Writing Custom Effect Definitions
  • Advanced Examples
  • Development

Demo

View on CodeSandbox.

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <script type="text/javascript" src="../dist/videocontext.js"></script>
</head>
<body>
    <!--
        A canvas needs to define its width and height to know how many pixels you can draw onto it.
        Its CSS width and height will define the space it takes up on screen.
        If omitted, the canvas dimensions will default to 300x150 and your videos will not be rendered at their
        optimal definition.
        https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html
    -->
    <canvas id="canvas" width="1280" height="720" style="width: 852px; height: 480px"></canvas>

    <script type="text/javascript">
        window.onload = function(){
            var canvas = document.getElementById("canvas");

            var videoCtx = new VideoContext(canvas);
            var videoNode1 = videoCtx.video("./video1.mp4");
            videoNode1.start(0);
            videoNode1.stop(4);

            var videoNode2 = videoCtx.video("./video2.mp4");
            videoNode2.start(2);
            videoNode2.stop(6);

            var crossFade = videoCtx.transition(VideoContext.DEFINITIONS.CROSSFADE);
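            //Cue the transition: tween the "mix" property from 0.0 to 1.0 between time=2 and time=4.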
            crossFade.transition(2, 4, 0.0, 1.0, "mix");

            videoNode1.connect(crossFade);
            videoNode2.connect(crossFade);
            crossFade.connect(videoCtx.destination);


            videoCtx.play();
        };
    </script>
</body>
</html>

Graph and timeline view

Debugging

If you need to debug VideoContext graphs or get a better insight into what is happening under the hood, there's a browser extension for Chrome: videocontext-devtools

Debugging view

Documentation

API Documentation can be built using ESDoc by running the following commands:

yarn install
yarn doc

The documentation will be generated in the "./doc" folder of the repository.

Node Types

There are a number of different types of nodes which can be used in the VideoContext's processing graph. Here's a quick list of each one. Following that is a more in-depth discussion of each type.

  • VideoNode - Plays video.
  • AudioNode - Plays audio.
  • ImageNode - Displays an image for a specified time.
  • CanvasNode - Displays the output of a canvas for a specified time.
  • EffectNode - Applies a shader to a limited number of inputs.
  • TransitionNode - Applies a shader to a limited number of inputs, modifying properties at specific times.
  • CompositingNode - Applies the same shader to an unlimited number of inputs, rendering them to the same output.
  • DestinationNode - Represents the output canvas. There can only be one.

VideoNode

A video source node.

View on CodeSandbox.

var videoNode = videoCtx.video("./video1.mp4");
videoNode.connect(videoCtx.destination);
videoNode.start(0);
videoNode.stop(4);

For best results the video played by a VideoNode should be encoded with a fast decode profile. The following avconv line shows how this can be achieved.

avconv -i input.mp4 -tune fastdecode -strict experimental output.mp4
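
avconv is the Libav fork of ffmpeg and is no longer shipped by most distributions. If only ffmpeg is available, the two tools share a command-line interface, so the equivalent line should be:

ffmpeg -i input.mp4 -tune fastdecode -strict experimental output.mp4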

AudioNode

An audio source node.

View on CodeSandbox.

var audioNode = videoCtx.audio("./audio.mp3");
audioNode.connect(videoCtx.destination);
audioNode.start(0);
audioNode.stop(4);

ImageNode

An image source node.

View on CodeSandbox.

var imageNode = videoCtx.image("cats.png");
imageNode.connect(videoCtx.destination);
imageNode.start(0);
imageNode.stop(4);

CanvasNode

A canvas source node.

View on CodeSandbox.

var canvas = document.getElementById("input-canvas");
var canvasNode = videoCtx.canvas(canvas);
canvasNode.connect(videoCtx.destination);
canvasNode.start(0);
canvasNode.stop(4);

CustomSourceNode

Sometimes the pre-built nodes are just not enough. You can create your own source nodes that host more logic and let you hook into the VideoContext Node API easily.

The following example shows how to create a custom source node that can play an HLS VOD:

View on CodeSandbox

import Hls from "hls.js";

class HLSNode extends VideoContext.NODES.VideoNode {
  constructor(src, gl, renderGraph, currentTime, playbackRate, sourceOffset, preloadTime, hlsOptions = {}) {
    //Create a video element.
    const video = document.createElement("video");

    super(video, gl, renderGraph, currentTime, playbackRate, sourceOffset, preloadTime);

    //Create a HLS object.
    this.hls = new Hls(hlsOptions);

    //Bind the video element.
    this.hls.attachMedia(video);

    //Set the source path.
    this._src = src;

    this._displayName = "HLSNode";
    this._elementType = "hls";
  }

  _load() {
    //Load the video source on first load.
    if (!this._loadTriggered) {
      this.hls.loadSource(this._src);
    }
    super._load();
  }

  destroy() {
    if (this.hls) {
      this.hls.destroy();
    }
    super.destroy();
  }
}

//Setup the video context.
const canvas = document.getElementById("canvas");
const ctx = new VideoContext(canvas);

//Create a custom HLS source node and play it for 60 seconds.
const hlsNode = ctx.customSourceNode(HLSNode, "https://video-dev.github.io/streams/x36xhzz/x36xhzz.m3u8");
hlsNode.start(0);
hlsNode.stop(60);

//Set-up the processing chain.
hlsNode.connect(ctx.destination);

//start playback.
ctx.play();

Another use case for custom node types would be playing GIFs. The custom node would be in charge of decoding the GIF frames and painting them onto a canvas, driven by the _update calls from the VideoContext.
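
The following is a minimal sketch of such a GIF node. It assumes that VideoContext.NODES also exposes CanvasNode as a base class (only VideoNode is used above), and it relies on two hypothetical helpers: decodeGif(), returning an array of {imageData, delay} frames, and frameAt(), picking the frame for a given elapsed time.

class GIFNode extends VideoContext.NODES.CanvasNode {
  constructor(src, gl, renderGraph, currentTime) {
    //Create a canvas for the decoded GIF frames to be painted onto.
    const canvas = document.createElement("canvas");
    super(canvas, gl, renderGraph, currentTime);

    this._canvas = canvas;
    this._frames = decodeGif(src); //hypothetical GIF decoder
    this._displayName = "GIFNode";
  }

  _update(currentTime) {
    //Paint the frame for the current playhead position onto the canvas
    //before the base class uploads it as a texture.
    //(this._startTime is assumed to hold the node's scheduled start time.)
    const frame = frameAt(this._frames, currentTime - this._startTime);
    if (frame) {
      this._canvas.getContext("2d").putImageData(frame.imageData, 0, 0);
    }
    super._update(currentTime);
  }
}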

EffectNode

An EffectNode is the simplest form of processing node. It's built from a definition object, which is a combination of fragment shader code, vertex shader code, input descriptions, and property descriptions. There are a number of common operations available as node descriptions accessible as static properties on the VideoContext at VideoContext.DEFINITIONS.

The vertex and fragment shaders are written in GLSL and get compiled to produce the shader program. The input description tells the VideoContext how many ports there are to connect to and the name of the image associated with each port within the shader code. Inputs are always renderable textures (i.e. images, videos, canvases). The property descriptions tell the VideoContext what controls to attach to the EffectNode, along with the name, type, and default value of each control within the shader code.

The following is an example of a simple shader description used to describe a monochrome effect. It has one input (the image to be processed) and two modifiable properties that control the RGB color mix of the processing result.

View on CodeSandbox.

var monochromeDescription = {
    title:"Monochrome",
    description: "Change images to a single chroma (e.g can be used to make a black & white filter). Input color mix and output color mix can be adjusted.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image;
        uniform vec3 inputMix;
        uniform vec3 outputMix;
        varying vec2 v_texCoord;
        varying float v_mix;
        void main(){
            vec4 color = texture2D(u_image, v_texCoord);
            float mono = color[0]*inputMix[0] + color[1]*inputMix[1] + color[2]*inputMix[2];
            color[0] = mono * outputMix[0];
            color[1] = mono * outputMix[1];
            color[2] = mono * outputMix[2];
            gl_FragColor = color;
        }`,
    properties:{
        "inputMix":{type:"uniform", value:[0.4,0.6,0.2]},
        "outputMix":{type:"uniform", value:[1.0,1.0,1.0]}
    },
    inputs:["u_image"]
};

Here's an example of how the above node description might be used to apply a sepia-like effect to a video.

//Setup the video context.
var canvas = document.getElementById("canvas");
var ctx = new VideoContext(canvas);

//Create a video node and play it for 60 seconds.
var videoNode = ctx.video("./video.mp4");
videoNode.start(0);
videoNode.stop(60);

//Create the sepia effect node (from the above Monochrome effect description).
var sepiaEffect = ctx.effect(monochromeDescription);

//Give a sepia tint to the monochrome output (note how shader description properties are automatically bound to the JavaScript object).
sepiaEffect.outputMix = [1.25,1.18,0.9];

//Set-up the processing chain.
videoNode.connect(sepiaEffect);
sepiaEffect.connect(ctx.destination);

//start playback.
ctx.play();

TransitionNode

Transition nodes are a type of effect node which allow the automatic modification/tweening of properties in relation to the VideoContext's notion of time. In every respect they are the same as an effect node, except they have a "transition" function which can be used to cue the transitioning of a shader property from one value to another.

You can use them to perform a video transition effect (such as a cross-fade or a wipe) by creating a definition with two inputs and a property which controls the mix of the two inputs in the output buffer.

The following is an example of a simple cross-fade shader.

View on CodeSandbox.

var crossfadeDescription = {
    title:"Cross-Fade",
    description: "A cross-fade effect. Typically used as a transition.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image_a;
        uniform sampler2D u_image_b;
        uniform float mix;
        varying vec2 v_texCoord;
        varying float v_mix;
        void main(){
            vec4 color_a = texture2D(u_image_a, v_texCoord);
            vec4 color_b = texture2D(u_image_b, v_texCoord);
            color_a[0] *= mix;
            color_a[1] *= mix;
            color_a[2] *= mix;
            color_a[3] *= mix;
            color_b[0] *= (1.0 - mix);
            color_b[1] *= (1.0 - mix);
            color_b[2] *= (1.0 - mix);
            color_b[3] *= (1.0 - mix);
            gl_FragColor = color_a + color_b;
        }`,
    properties:{
        "mix":{type:"uniform", value:0.0}
    },
    inputs:["u_image_a","u_image_b"]
};

The shader has two inputs and a mix property.

//Setup the video context.
var canvas = document.getElementById("canvas");
var ctx = new VideoContext(canvas);

//Create a video node that plays for 10 seconds from time=0.
var videoNode1 = ctx.video("./video1.mp4");
videoNode1.start(0);
videoNode1.stop(10);

//Create a video node that plays for 10 seconds from time=8, overlapping videoNode1 by two seconds.
var videoNode2 = ctx.video("./video2.mp4");
videoNode2.start(8);
videoNode2.stop(18);

//Create the cross-fade transition node (from the above cross-fade description).
var crossfadeEffect = ctx.transition(crossfadeDescription);

//Setup the transition. This will change the "mix" property of the cross-fade node from 0.0 to 1.0.
//The mix value transitions from 0.0 to 1.0, starting at time=8 and ending at time=10 (a period of 2 seconds).
crossfadeEffect.transition(8.0, 10.0, 0.0, 1.0, "mix");


//Set-up the processing chain.
videoNode1.connect(crossfadeEffect); //this will connect videoNode1 to the "u_image_a" input of the processing node
videoNode2.connect(crossfadeEffect); //this will connect videoNode2 to the "u_image_b" input of the processing node


// NOTE: There are multiple ways to connect a node to a specific input of a processing node; the
// following are all equivalent.
//
// By default behavior:
// videoNode1.connect(crossfadeEffect);
// videoNode2.connect(crossfadeEffect);
//
// By named input port:
// videoNode1.connect(crossfadeEffect, "image_a");
// videoNode2.connect(crossfadeEffect, "image_b");
//
// By input port index:
// videoNode1.connect(crossfadeEffect, 0);
// videoNode2.connect(crossfadeEffect, 1);


crossfadeEffect.connect(ctx.destination);

//start playback.
ctx.play();

CompositingNode

Compositing nodes are different from regular effect nodes in that they can have an unlimited number of nodes connected to them. They operate by running their effect shader on each connected input in turn, rendering the output to the same texture. This makes them particularly suitable for layering inputs which have alpha channels.

When compositing nodes are run, they map each input in turn to the first input in the definition. This means compositing node definitions typically only have a single input defined. It's also worth noting that an effect node definition with a single input can also be used as a compositing shader with no additional modifications.

A common use for compositing nodes is to collect a series of source nodes which exist at distinct points on a timeline into a single connection for passing onto further processing. This effectively makes the sources into a single video track.

Here's a really simple shader which renders all the inputs to the same output.

View on CodeSandbox.

var combineDescription = {
    title:"Combine",
    description: "A basic effect which renders the input to the output. Typically used as a combine node for layering up media with alpha transparency.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image;
        varying vec2 v_texCoord;
        varying float v_mix;
        void main(){
            vec4 color = texture2D(u_image, v_texCoord);
            gl_FragColor = color;
        }`,
    properties:{
    },
    inputs:["u_image"]
};

And here's an example of how it can be used.

//Setup the video context.
var canvas = document.getElementById("canvas");
var ctx = new VideoContext(canvas);

//Create a video node that plays for 10 seconds from time=0.
var videoNode1 = ctx.video("./video1.mp4");
videoNode1.start(0);
videoNode1.stop(10);

//Create a video node that plays for 5 seconds from time=10.
var videoNode2 = ctx.video("./video2.mp4");
videoNode2.start(10);
videoNode2.stop(15);

//Create a video node that plays for 12 seconds from time=15.
var videoNode3 = ctx.video("./video3.mp4");
videoNode3.start(15);
videoNode3.stop(27);

//Create the combine compositing node (from the above Combine effect description).
var combineEffect = ctx.compositor(combineDescription);

//Connect all the videos to the combine effect, collecting them together into a single point which can be connected to further nodes in the graph (making something logically equivalent to a track).
videoNode1.connect(combineEffect);
videoNode2.connect(combineEffect);
videoNode3.connect(combineEffect);

//Connect all the input sources to the destination.
combineEffect.connect(ctx.destination);

//start playback.
ctx.play();

Writing Custom Effect Definitions

Making custom effect shaders for the VideoContext is fairly simple. The best starting point is to take one of the built-in effects and modify it. It's very useful to have an understanding of how shaders work and some experience writing GLSL shaders.

var effectDefinition = {
    title:"",               //A title for the effect.
    description: "",        //A textual description of what the effect does.
    vertexShader : "",      //The vertex shader.
    fragmentShader : "",    //The fragment shader.
    properties:{            //An object containing uniforms from the fragment shader to map onto the effect node.
    },
    inputs:["u_image"]      //The names of the uniform sampler2Ds in the fragment shader which represent the texture inputs to the effect.
};
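
As a worked example, here's how that template might be filled in for a simple invert effect. This definition is illustrative rather than part of the library; the "amount" uniform is an assumed property that blends between the original and the inverted image.

var invertDefinition = {
    title:"Invert",
    description: "Inverts the RGB channels of the input. The amount property blends between the original and inverted image.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image;
        uniform float amount;
        varying vec2 v_texCoord;
        void main(){
            vec4 color = texture2D(u_image, v_texCoord);
            vec3 inverted = vec3(1.0, 1.0, 1.0) - color.rgb;
            gl_FragColor = vec4(mix(color.rgb, inverted, amount), color[3]);
        }`,
    properties:{
        "amount":{type:"uniform", value:1.0}
    },
    inputs:["u_image"]
};

//Used like any of the built-in definitions:
//var invertEffect = ctx.effect(invertDefinition);
//invertEffect.amount = 0.5;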

Advanced Examples

You can view more advanced usage examples here.

Development

VideoContext has a pretty standard package.json:

# install build and development dependencies
yarn install

# run a dev server with automatic reload
yarn dev

# watch unit and integration tests
yarn test-watch

# run the end-to-end regression tests in a headless browser
yarn cypress

For more information on writing, running and debugging the end-to-end Cypress tests, see ./test/cypress#readme.

For an overview of all testing, see ./test#readme.

Gitflow

VideoContext uses the gitflow branching model. To contribute, raise a pull request against the develop branch.

Releases

Releases are prepared in release branches. When the release is ready, run one of:

yarn release:major
yarn release:minor
yarn release:patch

These scripts build and commit the docs and the changelog, update the package.json version number, and push to the current branch with tags.

CI will publish to npm when the release branch has been merged into master.

Release step-by-step

  1. git checkout develop
  2. git pull
  3. git checkout -b release-xxx
  4. tag and push using script
    • yarn release:patch|minor|major
  5. open pull request against master
  6. merge when tests have passed
  7. merge master back in to develop:
    • git checkout master
    • git pull
    • git checkout develop
    • git merge master
    • git push

There is one housekeeping task (this will be automated at some point):

  1. update the codesandbox examples to use the latest release

CI

VideoContext uses the BBC's public Travis account to run all tests and publish to npm. All tests must pass before PRs can be merged.

Other options

yarn build     # build dist packages
yarn doc       # create documentation
yarn build_all # do all of the above

The library is written in ES6 and cross-compiled using Babel.
