• This repository has been archived on 02/Jun/2020

2019 Wuhan Coronavirus data (COVID-19 / 2019-nCoV)


This public repository archives data over time from various public sources on the web.

Data is presented as timestamped CSV files, for maximum compatibility.

It is hoped that this data will be useful to those producing visualizations or analyses.

Code is included.

No longer updated

Please note this data is no longer updated. It substantially covers the significant period of growth of the virus in China and should be useful for historical analysis.

Sample animation

Shown here in GIF format. A better (smaller, higher-resolution) webm version is also generated.


Sample visualization


The build generates static SVGs.

Source images were China_blank_province_map.svg and BlankMap-World.svg, both from Wikimedia Commons.

Requirements

A Unix-like OS with the dependencies installed (see Software Dependencies). In practice that means macOS with Homebrew, Linux, or a BSD. Windows is unsupported.
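As a quick sanity check (not part of the repository), you can verify the tools from the Software Dependencies list are on your PATH; `convert` here stands in for imagemagick:

```shell
# Sanity check (not from the repo): report any missing dependencies.
# "convert" is ImageMagick's classic command-line entry point.
for tool in bash perl php convert gifsicle ffmpeg wget; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
echo "dependency check done"
```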

Generating

China

For a China map, the following command sequence will grab data from DXY and render it.

./build china

You now have timestamped JSON, CSV and SVG files in the data-sources/dxy/data/ subdirectory.

World

For a world map, the process is similar. Note that the BNO world data parser is currently broken and we have no plan to fix it.

./build world

You now have timestamped CSV and SVG files in the data-sources/bno/data/ subdirectory.

Software Dependencies

Probably an incomplete list:

  • bash
  • perl
  • php
  • imagemagick
  • gifsicle
  • ffmpeg
  • wget

TODO

Links of note

Other projects

How this was built (non-technical explanation)

This section is written for the curious / non-technical user.

The general approach to problems such as these is as follows:

  1. Gather the data
  2. Modify and store it
  3. Do something with it.

Gather the data

The area of programming concerned with gathering data from websites that were not explicitly designed to provide it is called web scraping.

In general, web scraping consists of making an HTTP (web) request to the website in question, parsing (or interpreting) the response, and extracting the data of interest. Thereafter some modification may be required.
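The parse-and-extract step can be sketched with standard Unix text tools. This is an illustration only, not the repository's actual parser (which is PHP/perl), and it uses an invented saved page with placeholder figures so it runs offline; in the real pipeline the page would come from an HTTP request (e.g. via wget):

```shell
# Offline sketch: fake a saved HTML response (contents invented) so the
# example runs without a network request.
cat > page.html <<'EOF'
<table>
<tr><td>Hubei</td><td>67332</td></tr>
<tr><td>Guangdong</td><td>1356</td></tr>
</table>
EOF
# Extract the numeric table cells, then strip the surrounding tags.
grep -o '<td>[0-9][0-9]*</td>' page.html | sed 's/<[^>]*>//g'
```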

Modify and store it

We translate some Chinese and English information (toponyms, i.e. geographic region names) into a known format by matching against a static database file for countries and a similar file for regions in or near China.
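That matching step can be sketched with a tiny lookup table. The file format, columns, and entries below are invented for illustration; the repository uses its own static database files:

```shell
# Invented lookup file: native name, English name, region code.
cat > regions.csv <<'EOF'
湖北,Hubei,CN-HB
广东,Guangdong,CN-GD
EOF
# Resolve a scraped native-language name to its canonical form.
scraped="湖北"
awk -F, -v name="$scraped" '$1 == name { print $2 "," $3 }' regions.csv
```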

We then store the data in various formats, mostly CSV and JSON, timestamped in a most-to-least-significant format inspired by the ISO 8601 standard to aid sorting.
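The point of most-to-least-significant timestamps is that plain lexical sorting then equals chronological sorting. A sketch (the exact filename pattern here is an assumption, not necessarily the repository's):

```shell
# Year, month, day, then hour, minute, second: lexical order == time order.
ts=$(date -u +%Y%m%d-%H%M%S)
echo "data-${ts}.csv"
```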

Do something with it

Finally, we further interpret and process the data in two stages.

Static image generation

First, we transform some reference SVG maps gathered from Wikimedia Commons by applying the data we have captured.
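A minimal sketch of that transformation: recolor one region's path in the SVG according to its data. The element id, fill colors, and the toy map are invented; the real script does this for every region on the reference maps:

```shell
# Toy reference map with a single province path (id and colors invented).
cat > map.svg <<'EOF'
<svg><path id="CN-HB" fill="#ffffff"/></svg>
EOF
# "Apply the data" by swapping in a color for that region's fill.
sed 's/\(id="CN-HB" fill="\)#ffffff/\1#cc0000/' map.svg > map-out.svg
cat map-out.svg
```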

Combine into an animation

Finally, we animate multiple such resulting images into two formats: animated GIF, and the greatly superior and far more modern webm container format with VP9 encoding. This is done using the open-source tools imagemagick and ffmpeg.
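Typical invocations for that step might look like the following. The flags and filenames are illustrative, not copied from the repository's build script:

```shell
# Animated GIF from numbered PNG frames, then optimize with gifsicle.
convert -delay 20 -loop 0 frames/frame-*.png out.gif
gifsicle -O3 out.gif -o out-optimized.gif
# webm (VP9) from the same frames via ffmpeg; constant-quality mode.
ffmpeg -framerate 5 -i frames/frame-%03d.png -c:v libvpx-vp9 -b:v 0 -crf 30 out.webm
```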