Schema Guru

JSONs -> JSON Schema

[![Build Status][travis-image]][travis] [![Release][release-image]][releases] [![License][license-image]][license]

Schema Guru is a tool (CLI, Spark job and web) allowing you to derive [JSON Schemas][json-schema] from a set of JSON instances, and to process and transform them into different data definition formats.

Current primary features include:

  • derivation of a JSON Schema from a set of JSON instances (schema command)
  • generation of [Redshift][redshift] table DDL and a JSONPaths file (ddl command)

Unlike other tools for deriving JSON Schemas, Schema Guru allows you to derive a schema from an unlimited set of instances (making schemas much more precise), and supports many more JSON Schema validation properties.

Schema Guru is used heavily in association with Snowplow's own [Snowplow][snowplow], [Iglu][iglu] and [Schema DDL][schema-ddl] projects.

User Quickstart

Download the latest Schema Guru from Bintray:

$ wget http://dl.bintray.com/snowplow/snowplow-generic/schema_guru_0.6.2.zip
$ unzip schema_guru_0.6.2.zip

Assuming you have a recent JVM installed, you are now ready to run Schema Guru.

CLI

Schema derivation

You can use as input either a single JSON file or a directory of JSON instances (directories are processed recursively).

The following command will print the JSON Schema to stdout:

$ ./schema-guru-0.6.2 schema {{input}}

You can also specify an output file for your schema:

$ ./schema-guru-0.6.2 schema --output {{json_schema_file}} {{input}}

You can also switch Schema Guru into [NDJSON][ndjson] mode, where it will look for newline-delimited JSONs:

$ ./schema-guru-0.6.2 schema --ndjson {{input}}

You can specify the enum cardinality tolerance for your fields: all fields found to have fewer distinct values than the specified cardinality will be described in the JSON Schema using the enum property.

$ ./schema-guru-0.6.2 schema --enum 5 {{input}}
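
For illustration (the field name and values here are hypothetical), if a status field takes only the values "active" and "inactive" across all instances, the derived schema would describe the field as:

{ "status" : {
    "type" : "string",
    "enum" : [ "active", "inactive" ] } }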

If you know that a particular set of values can appear, but don't want to set a high enum cardinality, you can specify a predefined enum set with the --enum-sets multioption, like this:

$ ./schema-guru-0.6.2 schema --enum-sets iso_4217 --enum-sets iso_3166-1_alpha-3 /path/to/instances

Currently Schema Guru includes several built-in enum sets, such as iso_4217 and iso_3166-1_alpha-3 (written as they should appear in the CLI); the special value all enables every built-in set at once.

If you need a very specific enum set, you can define it yourself in a JSON file containing an array, like this:

["Mozilla Firefox", "Google Chrome", "Netscape Navigator", "Internet Explorer"]

and pass the path to this file instead of an enum name:

$ ./schema-guru-0.6.2 schema --enum-sets all --enum-sets /path/to/browsers.json /path/to/instances

Schema Guru will derive minLength and maxLength properties for strings, based on the shortest and longest observed values. This may be a problem if you process a small number of instances. To avoid an overly strict schema, you can use the --no-length option:

$ ./schema-guru-0.6.2 schema --no-length /path/to/few-instances
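
For illustration (the field and lengths here are hypothetical), if the shortest observed name value is 3 characters and the longest is 24, the derived schema without --no-length would include:

{ "name" : {
    "type" : "string",
    "minLength" : 3,
    "maxLength" : 24 } }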

DDL derivation

As with schema derivation, the input for DDL generation may be a single file with a JSON Schema or a directory containing JSON Schemas.

Currently we support DDL only for [Amazon Redshift][redshift], but in future releases you'll be able to specify another database with the --db option.

The following command will save Redshift DDL (the default --db value) to the current directory:

$ ./schema-guru-0.6.2 ddl {{input}}

If you specified as input a directory with several Self-describing JSON Schemas belonging to a single REVISION, Schema Guru will also generate migrations, so you can migrate any of the previous tables to any subsequent one. For example, given the following list of Self-describing JSON Schemas as input:

  • schemas/com.acme/click_event/1-0-0
  • schemas/com.acme/click_event/1-0-1
  • schemas/com.acme/click_event/1-0-2

You will get the following migrations as output:

  • sql/com.acme/click_event/1-0-0/1-0-1 to alter table from 1-0-0 to 1-0-1
  • sql/com.acme/click_event/1-0-0/1-0-2 to alter table from 1-0-0 to 1-0-2
  • sql/com.acme/click_event/1-0-1/1-0-2 to alter table from 1-0-1 to 1-0-2

These migrations (and all subsequent table definitions) are aware of column order and will never put a new column in the middle of a table, so you can safely alter your tables while they belong to a single REVISION.
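
As a rough sketch (the table and column names here are hypothetical, and the exact statements depend on what changed between versions), a migration from 1-0-0 to 1-0-1 that adds a single new property boils down to an ALTER TABLE statement like:

-- sql/com.acme/click_event/1-0-0/1-0-1 (illustrative)
ALTER TABLE atomic.com_acme_click_event_1
    ADD COLUMN "page_url" VARCHAR(4096);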

You can also specify a directory for the output:

$ ./schema-guru-0.6.2 ddl --output {{ddl_dir}} {{input}}

If you're not a Snowplow platform user, don't use [Self-describing Schemas][self-describing], or just don't want anything specific to them, you can produce a raw schema:

$ ./schema-guru-0.6.2 ddl --raw {{input}}

But bear in mind that Self-describing Schemas bring many benefits. For example, raw Schemas will not preserve the order of your columns (it's simply impossible) and you will not get migrations.

You may also want to get a JSONPaths file for Redshift's [COPY][redshift-copy] command. The following will place a jsonpaths directory alongside sql:

$ ./schema-guru-0.6.2 ddl --with-json-paths {{input}}

The most awkward part of moving from the dynamically-typed world to the statically-typed one is product types (union types), such as ["integer", "string"] in JSON Schema. How should they be represented in SQL DDL? It's a tough question and we think there's no ideal solution, so we provide two options. By default, product types are transformed into the most general VARCHAR(4096). Alternatively, you can split a column with a product type into separate columns, one per type, with the type as a postfix: for example, a property model with type ["string", "integer"] will be transformed into two columns, model_string and model_integer. This behavior is enabled with --split-product-types.
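
A hypothetical side-by-side sketch for a model property with type ["string", "integer"] (the BIGINT column type for the integer half is an assumption for illustration):

-- default behavior: one general-purpose column
"model" VARCHAR(4096)

-- with --split-product-types: one column per type
"model_string" VARCHAR(4096),
"model_integer" BIGINT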

Another thing everyone needs to consider is the default VARCHAR size. If there are no clues about it (like maxLength), 4096 will be used. You can also specify this default value:

$ ./schema-guru-0.6.2 ddl --varchar-size 32 {{input}}

You can also specify the Redshift schema for your table. In non-raw mode, atomic is used as the default:

$ ./schema-guru-0.6.2 ddl --raw --schema business {{input}}

Some users do not fully rely on Schema Guru's JSON Schema derivation or DDL generation and edit their DDLs manually. By default, Schema Guru will not overwrite your files (either DDLs or migrations) if you have made any significant changes (comments and whitespace are not significant); instead it will warn you that the file has been changed manually. To change this behavior, specify the --force flag:

$ ./schema-guru-0.6.2 ddl --force {{input}}

Web UI

You can access our hosted demo of the Schema Guru web UI at [schemaguru.snowplowanalytics.com][webui-hosted]. To run it locally:

$ wget http://dl.bintray.com/snowplow/snowplow-generic/schema_guru_webui_0.6.2.zip
$ unzip schema_guru_webui_0.6.2.zip
$ ./schema-guru-webui-0.6.2

The above will run a Spray web server with Schema Guru on [0.0.0.0:8000][webui-local]. The interface and port can be specified with --interface and --port respectively.
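
For example, to bind to localhost on a different port (the values here are arbitrary):

$ ./schema-guru-webui-0.6.2 --interface 127.0.0.1 --port 9000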

Apache Spark

Since version 0.4.0, Schema Guru ships with a Spark job for deriving JSON Schemas. To help users get started with Schema Guru on Amazon Elastic MapReduce, we provide a [pyinvoke][pyinvoke] tasks.py.

The recommended way to start is to install all requirements and assemble the fatjar as described in the Developer Quickstart.

Before running, you need:

  • An AWS CLI profile, e.g. my-profile
  • An EC2 keypair, e.g. my-ec2-keypair
  • At least one Amazon S3 bucket, e.g. my-bucket

To provision the cluster and start the job, use the run_emr task:

$ cd sparkjob
$ inv run_emr my-profile my-bucket/input/ my-bucket/output/ my-bucket/errors/ my-bucket/logs my-ec2-keypair

If you need specific options for the Spark job, you can specify them in tasks.py. The Spark job accepts the same options as the CLI application, but note that --output isn't optional and there is a new optional --errors-path. Also, instead of specifying particular predefined enum sets, you can just enable them all with the --enum-sets flag, which behaves like --enum-sets all.

Developer Quickstart

Assuming git, [Vagrant][vagrant-install] and [VirtualBox][virtualbox-install] are installed:

 host$ git clone https://github.com/snowplow/schema-guru.git
 host$ cd schema-guru
 host$ vagrant up && vagrant ssh
guest$ cd /vagrant
guest$ sbt assembly

Optionally, you can also assemble the web UI and Spark job fatjars:

guest$ sbt "project schema-guru-webui" assembly
guest$ sbt "project schema-guru-sparkjob" assembly

You can also deploy the Schema Guru web GUI onto Elastic Beanstalk:

guest$ cd beanstalk && zip beanstalk.zip *

Now just create a new Docker app in the [Elastic Beanstalk Console] beanstalk-console and upload this zipfile.

User Manual

Functionality

Schema derivation

  • Takes a directory as an argument and prints out the resulting JSON Schema:
    • Processes each JSON sequentially
    • Merges all results into one master JSON Schema
  • Recognizes the following JSON Schema formats:
    • uuid
    • date-time (according to ISO-8601)
    • IPv4 and IPv6 addresses
    • HTTP, HTTPS, FTP URLs
  • Recognizes minLength and maxLength properties for strings
  • Recognizes a base64 pattern for strings
  • Detects integer ranges according to Int16, Int32, Int64
  • Detects misspelt properties and produces warnings
  • Detects enum values with a specified cardinality
  • Detects known enum sets, built-in or specified by the user
  • Can output a [Self-describing JSON Schema][self-describing]
  • Can produce JSON Schemas with different names based on a given JSON Path
  • Supports [Newline Delimited JSON][ndjson]

DDL derivation

  • Correctly transforms some string formats (illustrated in the sketch after this list):
    • uuid becomes CHAR(36)
    • ipv4 becomes VARCHAR(14)
    • ipv6 becomes VARCHAR(39)
    • date-time becomes TIMESTAMP
  • Handles properties with only enums
  • A property with minLength(n) equal to maxLength(n) becomes CHAR(n)
  • Can output a JSONPaths file
  • Can split product types
  • A number with multipleOf 0.01 becomes DECIMAL
  • Handles Self-describing JSON and can produce raw DDL
  • Recognizes integer size by minimum and maximum values
  • An object without properties, but with patternProperties, becomes VARCHAR(4096)
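
A hypothetical table fragment illustrating several of these mappings (table and property names are invented):

CREATE TABLE atomic.example (
    "id"         CHAR(36),    -- uuid format
    "client_ip"  VARCHAR(39), -- ipv6 format
    "created_at" TIMESTAMP,   -- date-time format
    "amount"     DECIMAL      -- number with multipleOf 0.01
);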

Assumptions

  • All JSONs in the directory are assumed to be of the same event type and will be merged together
  • All JSONs are assumed to start with either { ... } or [ ... ]
    • If they do not they are discarded
  • The schema should be as strict as possible - e.g. no additionalProperties are currently allowed

Self-describing JSON

The schema command allows you to produce a [Self-describing JSON Schema][self-describing]. To produce it, you need to specify a vendor, a name (unless schema segmentation is used, see below), and a version (optional; the default is 1-0-0):

$ ./schema-guru-0.6.2 schema --vendor {{your_company}} --name {{schema_name}} --schemaver {{version}} {{input}}
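
The derived schema is then wrapped in a self-describing header, roughly like this (the vendor and name here are illustrative):

{ "$schema" : "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
  "self" : {
    "vendor" : "com.acme",
    "name" : "click_event",
    "format" : "jsonschema",
    "version" : "1-0-0" },
  "type" : "object",
  "properties" : {} }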

Schema Segmentation

If you have a set of mixed JSONs from one vendor, but with slightly different structures, like:

{ "version": 1,
  "type": "track",
  "userId": "019mr8mf4r",
  "event": "Purchased an Item",
  "properties": {
    "revenue": "39.95",
    "shippingMethod": "2-day" },
  "timestamp" : "2012-12-02T00:30:08.276Z" }

and

{ "version": 1,
  "type": "track",
  "userId": "019mr8mf4r",
  "event": "Posted a Comment",
  "properties": {
    "body": "This book is gorgeous!",
    "attachment": false },
  "timestamp" : "2012-12-02T00:28:02.273Z" }

You can run it as follows:

$ ./schema-guru-0.6.2 schema --output {{output_dir}} --schema-by $.event {{mixed_jsons_directory}}

It will put two (or possibly more) JSON Schemas into the output directory: Purchased_an_Item.json and Posted_a_Comment.json. Each will be derived only from the JSONs with the corresponding event property, without any intersections, assuming the provided JSON Path contains a valid string. All instances where this JSON Path is absent or contains a non-string value will be merged into an unmatched.json schema in the same output directory. Also, when producing a Self-describing JSON Schema, the schema name is taken in the same way, so the --name argument can be omitted (a matched name will override a name specified with the option).

Example

Here's an example of some subtle points which a tool working with a single JSON instance would miss.

First instance:

{ "event": {
    "just_a_string": "Any string may be here",
    "sometimes_ip": "192.168.1.101",
    "always_ipv4": "127.0.0.1",
    "id": 43,
    "very_big_int": 9223372036854775102,
    "this_should_be_number": 2.1,
    "nested_object": {
        "title": "Just an nested object",
        "date": "2015-05-29T12:00:00+07:00" }}}

Second instance:

{ "event": {
    "just_a_string": "No particular format",
    "sometimes_ip": "This time it's not an IP",
    "always_ipv4": "192.168.1.101",
    "id": 42,
    "very_big_int": 92102,
    "this_should_be_number": 201,
    "not_always_here": 32,
    "nested_object": {
        "title": "Still plain string without format",
        "date": "1961-07-03T12:00:00+07:00" }}}

The generated schema:

{ "type" : "object",
  "properties" : {
    "event" : {
      "type" : "object",
      "properties" : {
        "just_a_string" : { "type" : "string" },
        "sometimes_ip" : { "type" : "string" },
        "always_ipv4" : {
          "type" : "string",
          "format" : "ipv4" },
        "id" : {
          "type" : "integer",
          "minimum" : 0,
          "maximum" : 32767 },
        "very_big_int" : {
          "type" : "integer",
          "minimum" : 0,
          "maximum" : 9223372036854775807 },
        "this_should_be_number" : {
          "type" : "number",
          "minimum" : 0 },
        "nested_object" : {
          "type" : "object",
          "properties" : {
            "title" : { "type" : "string" },
            "date" : {
              "type" : "string",
              "format" : "date-time" } },
          "additionalProperties" : false },
        "not_always_here" : {
          "type" : "integer",
          "minimum" : 0,
          "maximum" : 32767 } },
      "additionalProperties" : false } },
  "additionalProperties" : false }

Copyright and License

Schema Guru is copyright 2014-2016 Snowplow Analytics Ltd.

Licensed under the [Apache License, Version 2.0][license] (the "License"); you may not use this software except in compliance with the License.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
