
git-history

Tools for analyzing Git history using SQLite

For background on this project see git-history: a tool for analyzing scraped data collected using Git and SQLite.

Measuring traffic during the Half Moon Bay Pumpkin Festival describes a project using this tool in detail.

Installation

Install this tool using pip:

$ pip install git-history

Demos

git-history-demos.datasette.io hosts three example databases created using this tool.

The demos are deployed using Datasette on Google Cloud Run by this GitHub Actions workflow.

Usage

This tool can be run against a Git repository that holds a file that contains JSON, CSV/TSV or some other format and which has multiple versions tracked in the Git history. Read Git scraping: track changes over time by scraping to a Git repository to understand how you might create such a repository.

The file command analyzes the history of an individual file within the repository, and generates a SQLite database table that represents the different versions of that file over time.

The file is assumed to contain multiple objects - for example, the results of scraping an electricity outage map or a CSV file full of records.

Assume you have a file called incidents.json that is a JSON array of objects, with multiple versions of that file recorded in a repository. Each version of that file might look something like this:

[
    {
        "IncidentID": "abc123",
        "Location": "Corner of 4th and Vermont",
        "Type": "fire"
    },
    {
        "IncidentID": "cde448",
        "Location": "555 West Example Drive",
        "Type": "medical"
    }
]

Change directory into the Git repository in question and run the following:

git-history file incidents.db incidents.json

This will create a new SQLite database in the incidents.db file with three tables:

  • commits containing a row for every commit, with a hash column, the commit_at date and a foreign key to the namespaces table.
  • item containing a row for every item in every version of the incidents.json file - with an extra _commit column that is a foreign key back to the commits table.
  • namespaces containing a single row. This allows you to build multiple tables for different files, using the --namespace option described below.

The database schema for this example will look like this:

CREATE TABLE [namespaces] (
   [id] INTEGER PRIMARY KEY,
   [name] TEXT
);
CREATE UNIQUE INDEX [idx_namespaces_name]
    ON [namespaces] ([name]);
CREATE TABLE [commits] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [hash] TEXT,
   [commit_at] TEXT
);
CREATE UNIQUE INDEX [idx_commits_namespace_hash]
    ON [commits] ([namespace], [hash]);
CREATE TABLE [item] (
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT,
   [_commit] INTEGER REFERENCES [commits]([id])
);

If you have 10 historic versions of the incidents.json file and each one contains 30 incidents, you will end up with 10 * 30 = 300 rows in your item table.
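
As a quick sanity check, you could query the resulting database using Python's sqlite3 module. A minimal sketch, assuming the incidents.db file created above:

import sqlite3

conn = sqlite3.connect("incidents.db")
# Count how many item rows were captured for each commit
sql = """
select commits.commit_at, count(*) as num_items
from item join commits on item._commit = commits.id
group by commits.id
order by commits.commit_at
"""
for commit_at, num_items in conn.execute(sql):
    print(commit_at, num_items)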

Track the history of individual items using IDs

If your objects have a unique identifier - or multiple columns that together form a unique identifier - you can use the --id option to de-duplicate and track changes to each of those items over time.

This provides a much more interesting way to apply this tool.

If there is a unique identifier column called IncidentID you could run the following:

git-history file incidents.db incidents.json --id IncidentID

The database schema used here is very different from the one used without the --id option.

If you have already imported history, the command will skip any commits that it has seen already and just process new ones. This means that even though an initial import could be slow, subsequent imports should run a lot faster.

This command will create six tables - commits, item, item_version, columns, item_changed and namespaces.

Here's the full schema:

CREATE TABLE [namespaces] (
   [id] INTEGER PRIMARY KEY,
   [name] TEXT
);
CREATE UNIQUE INDEX [idx_namespaces_name]
    ON [namespaces] ([name]);
CREATE TABLE [commits] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [hash] TEXT,
   [commit_at] TEXT
);
CREATE UNIQUE INDEX [idx_commits_namespace_hash]
    ON [commits] ([namespace], [hash]);
CREATE TABLE [item] (
   [_id] INTEGER PRIMARY KEY,
   [_item_id] TEXT,
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT,
   [_commit] INTEGER
);
CREATE UNIQUE INDEX [idx_item__item_id]
    ON [item] ([_item_id]);
CREATE TABLE [item_version] (
   [_id] INTEGER PRIMARY KEY,
   [_item] INTEGER REFERENCES [item]([_id]),
   [_version] INTEGER,
   [_commit] INTEGER REFERENCES [commits]([id]),
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT,
   [_item_full_hash] TEXT
);
CREATE TABLE [columns] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [name] TEXT
);
CREATE UNIQUE INDEX [idx_columns_namespace_name]
    ON [columns] ([namespace], [name]);
CREATE TABLE [item_changed] (
   [item_version] INTEGER REFERENCES [item_version]([_id]),
   [column] INTEGER REFERENCES [columns]([id]),
   PRIMARY KEY ([item_version], [column])
);
CREATE VIEW item_version_detail AS select
  commits.commit_at as _commit_at,
  commits.hash as _commit_hash,
  item_version.*,
  (
    select json_group_array(name) from columns
    where id in (
      select column from item_changed
      where item_version = item_version._id
    )
) as _changed_columns
from item_version
  join commits on commits.id = item_version._commit;
CREATE INDEX [idx_item_version__item]
    ON [item_version] ([_item]);
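
The item_version_detail view is a convenient way to explore this schema. A minimal sketch using Python's sqlite3 module, assuming the incidents.db example above and its sample incident ID abc123:

import sqlite3

conn = sqlite3.connect("incidents.db")
# List every captured version of one incident, with the JSON
# array of column names that changed in that version
sql = """
select v._version, v._commit_at, v._changed_columns, v.Location, v.Type
from item_version_detail v
  join item on item._id = v._item
where item.IncidentID = ?
order by v._version
"""
for row in conn.execute(sql, ["abc123"]):
    print(row)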

item table

The item table will contain the most recent version of each row, de-duplicated by ID, plus the following additional columns:

  • _id - a numeric integer primary key, used as a foreign key from the item_version table.
  • _item_id - a hash of the values of the columns specified using the --id option to the command. This is used for de-duplication when processing new versions.
  • _commit - a foreign key to the commits table, representing the most recent commit to modify this item.

item_version table

The item_version table will contain a row for each captured differing version of each item, plus the following columns:

  • _id - a numeric ID for the item version record.
  • _item - a foreign key to the item table.
  • _version - the numeric version number, starting at 1 and incrementing for each captured version.
  • _commit - a foreign key to the commits table.
  • _item_full_hash - a hash of this version of the item. This is used internally by the tool to identify items that have changed between commits.

The other columns in this table represent columns in the original data that have changed since the previous version. If the value has not changed, it will be represented by a null.

If a value was previously set but has been changed back to null it will still be represented as null in the item_version row. You can identify these using the item_changed many-to-many table described below.
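
For example, a value that was genuinely changed to null will have a row in item_changed, while a value that simply stayed the same will not. A sketch of that query in Python, reusing the incidents.db example and its Location column:

import sqlite3

conn = sqlite3.connect("incidents.db")
# Find versions where Location is null AND is recorded as having
# changed - i.e. it was genuinely changed back to null
sql = """
select item_version._item, item_version._version
from item_version
  join item_changed on item_changed.item_version = item_version._id
  join columns on columns.id = item_changed.column
where columns.name = 'Location'
  and item_version.Location is null
"""
for item_id, version in conn.execute(sql):
    print(item_id, version)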

You can use the --full-versions option to store full copies of the item at each version, rather than just storing the columns that have changed.

item_version_detail view

This SQL view joins item_version against commits to add three further columns: _commit_at with the date of the commit, _commit_hash with the Git commit hash, and _changed_columns with a JSON array of the names of the columns that changed in that version.

item_changed

This many-to-many table indicates exactly which columns were changed in an item_version.

  • item_version is a foreign key to a row in the item_version table.
  • column is a foreign key to a row in the columns table.

This table will have the largest number of rows, which is why it stores just two integers per row in order to save space.

columns

The columns table stores column names. It is referenced by item_changed.

  • id - an integer ID.
  • name - the name of the column.
  • namespace - a foreign key to namespaces, used when multiple file histories share the same database.

Reserved column names

Note that _id, _item_full_hash, _item, _item_id, _version, _commit, _commit_at, _commit_hash, _changed_columns and rowid are considered reserved column names for the purposes of this tool.

If your data contains any of these they will be renamed to add a trailing underscore, for example _id_, _item_, _version_, to avoid clashing with the reserved columns.

If you have a column with a name such as _commit_ it will be renamed too, adding an additional trailing underscore, so _commit_ becomes _commit__ and _commit__ becomes _commit___.

Additional options

  • --repo DIRECTORY - the path to the Git repository, if it is not the current working directory.
  • --branch TEXT - the Git branch to analyze - defaults to main.
  • --id TEXT - as described above: pass one or more columns that uniquely identify a record, so that changes to that record can be calculated over time.
  • --full-versions - instead of recording just the columns that have changed in the item_version table, record a full copy of each version of the item.
  • --ignore TEXT - one or more columns to ignore - they will not be included in the resulting database.
  • --csv - treat the data as CSV or TSV rather than JSON, and attempt to guess the correct dialect.
  • --dialect - use a specific CSV dialect. Options are excel, excel-tab and unix - see the Python CSV documentation for details.
  • --skip TEXT - one or more full Git commit hashes that should be skipped. You can use this if some of the data in your revision history is corrupted in a way that prevents this tool from working.
  • --start-at TEXT - skip commits prior to the specified commit hash.
  • --start-after TEXT - skip commits up to and including the specified commit hash, then start processing from the following commit.
  • --convert TEXT - custom Python code for a conversion, described below.
  • --import TEXT - additional Python modules to import for --convert.
  • --ignore-duplicate-ids - if a single version of a file has the same ID in it more than once, the tool will exit with an error. Use this option to ignore the error and instead pick just the first of the duplicates.
  • --namespace TEXT - use this if you wish to include the history of multiple different files in the same database. The default is item but you can set it to something else, which will produce tables with names like yournamespace and yournamespace_version.
  • --wal - enable WAL mode on the created database file. Use this if you plan to run queries against the database while git-history is creating it.
  • --silent - don't show the progress bar.

CSV and TSV data

If the data in your repository is a CSV or TSV file you can process it by adding the --csv option. This will attempt to detect which delimiter is used by the file, so the same option works for both comma- and tab-separated values.

git-history file trees.db trees.csv --id TreeID

You can also specify the CSV dialect using the --dialect option.
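
The delimiter detection is conceptually similar to Python's csv.Sniffer - a sketch of that approach (not necessarily the tool's exact implementation), assuming a local trees.csv file:

import csv

# Guess the dialect from a sample of the file, considering
# commas and tabs as candidate delimiters
with open("trees.csv") as fp:
    dialect = csv.Sniffer().sniff(fp.read(2048), delimiters=",\t")
print(dialect.delimiter)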

Custom conversions using --convert

If your data is not already either CSV/TSV or a flat JSON array, you can reshape it using the --convert option.

The format needed by this tool is an array of dictionaries, as demonstrated by the incidents.json example above.

If your data does not fit this shape, you can provide a snippet of Python code that converts the on-disk content of each stored file into a Python list of dictionaries.

For example, if your stored files each look like this:

{
    "incidents": [
        {
            "id": "552",
            "name": "Hawthorne Fire",
            "engines": 3
        },
        {
            "id": "556",
            "name": "Merlin Fire",
            "engines": 1
        }
    ]
}

You could use the following Python snippet to convert them to the required format:

json.loads(content)["incidents"]

(The json module is exposed to your custom function by default.)

You would then run the tool like this:

git-history file database.db incidents.json \
  --id id \
  --convert 'json.loads(content)["incidents"]'

The content variable is always a bytes object representing the content of the file at a specific moment in the repository's history.
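
Because content is just bytes, you can sanity-check a conversion snippet in an ordinary Python session before passing it to --convert. A minimal sketch using the incidents example above:

import json

# Simulate the content variable the tool will pass to --convert:
# the raw bytes of one stored version of the file
content = open("incidents.json", "rb").read()
items = json.loads(content)["incidents"]
assert all(isinstance(item, dict) for item in items)
print(items[0])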

You can import additional modules using --import. This example shows how you could read a CSV file that uses ; as the delimiter:

git-history file trees.db ../sf-tree-history/Street_Tree_List.csv \
  --repo ../sf-tree-history \
  --import csv \
  --import io \
  --convert '
    fp = io.StringIO(content.decode("utf-8"))
    return list(csv.DictReader(fp, delimiter=";"))
    ' \
  --id TreeID

You can import nested modules such as ElementTree using --import xml.etree.ElementTree, then refer to them in your function body as xml.etree.ElementTree. For example, if your tracked data was in an items.xml file that looked like this:

<items>
  <item id="1" name="One" />
  <item id="2" name="Two" />
  <item id="3" name="Three" />
</items>

You could load it using the following --convert script:

git-history file items.db items.xml --convert '
tree = xml.etree.ElementTree.fromstring(content)
return [el.attrib for el in tree.iter("item")]
' --import xml.etree.ElementTree --id id

If your Python code spans more than one line it needs to include a return statement.

You can also use Python generators in your --convert code, for example:

git-history file stats.db package-stats/stats.json \
    --repo package-stats \
    --convert '
    data = json.loads(content)
    for key, counts in data.items():
        for date, count in counts.items():
            yield {
                "package": key,
                "date": date,
                "count": count
            }
    ' --id package --id date

This conversion function expects data that looks like this:

{
    "airtable-export": {
        "2021-05-18": 66,
        "2021-05-19": 60,
        "2021-05-20": 87
    }
}
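
To see what that generator produces, here is the same loop run directly against the sample data - a sketch for illustration only:

import json

content = b'{"airtable-export": {"2021-05-18": 66, "2021-05-19": 60, "2021-05-20": 87}}'
data = json.loads(content)
for key, counts in data.items():
    for date, count in counts.items():
        print({"package": key, "date": date, "count": count})
# {'package': 'airtable-export', 'date': '2021-05-18', 'count': 66}
# ... one dictionary per package/date pair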

Development

To contribute to this tool, first check out the code, then create a new virtual environment:

cd git-history
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

To update the schema examples in this README file:

cog -r README.md
