Raster Vision Examples (for RV < 0.12)

This repository contains examples of using Raster Vision on open datasets.

⚠️ For RV >= 0.12, the examples have moved into the main repo.

Note: The master branch of this examples repo should be used in conjunction with the master branch (or latest Docker image tag) of Raster Vision, which contains the latest changes. For versions of this examples repo that correspond to stable, released versions of Raster Vision, see:

Setup and Requirements

⚠️ PyTorch vs. Tensorflow Backends

We have recently added a set of PyTorch-based backends to Raster Vision. The existing Tensorflow-based backends are still there, but we do not plan on maintaining them, so we suggest starting to use the PyTorch ones. The examples in this repo default to using the PyTorch backends. However, three of the examples have a use_tf option which allows running them with a Tensorflow backend: examples.cowc.object_detection, examples.potsdam.semantic_segmentation, and examples.spacenet.rio.chip_classification.
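
For example, the SpaceNet Rio chip classification test run shown later in this README could be switched to the Tensorflow backend roughly like this (a sketch; it assumes use_tf is passed with -a like the other experiment arguments, and that the URI variables are set as in the "How to Run an Example" section):

# -a use_tf True is assumed to select the Tensorflow backend for this experiment
rastervision run local -e examples.spacenet.rio.chip_classification \
    -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI \
    -a use_tf True -a test True --splits 2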

Docker

You'll need docker (preferably version 18 or above) installed. After cloning this repo, to build the Docker images, run the following command:

> docker/build

This will pull down the latest raster-vision:pytorch-latest, raster-vision:tf-cpu-latest, and raster-vision:tf-gpu-latest Docker images and add some of this repo's code to them. If you only want the Tensorflow images, use the --tf flag; if you only want the PyTorch image, use --pytorch. Before running the container, set an environment variable to a local directory in which to store data:

> export RASTER_VISION_DATA_DIR="/path/to/data"

To run a Bash console in the Docker container, invoke:

> docker/run

This will mount the following local directories to directories inside the container:

  • $RASTER_VISION_DATA_DIR -> /opt/data/
  • examples/ -> /opt/src/examples/

This script also has options for forwarding AWS credentials, running Jupyter notebooks, and switching between different images, which can be seen below.

Remember to use the correct image for the backend you are using!

> ./docker/run --help
Usage: run <options> <command>
Run a console in the raster-vision-examples-cpu Docker image locally.

Environment variables:
RASTER_VISION_DATA_DIR (directory for storing data; mounted to /opt/data)
AWS_PROFILE (optional AWS profile)
RASTER_VISION_REPO (optional path to main RV repo; mounted to /opt/src)

Options:
--aws forwards AWS credentials (sets AWS_PROFILE env var and mounts ~/.aws to /root/.aws)
--tensorboard maps port 6006
--gpu use the NVIDIA runtime and GPU image
--name sets the name of the running container
--jupyter forwards port 8888, mounts ./notebooks to /opt/notebooks, and runs Jupyter
--debug maps port 3007 on localhost to 3000 inside container
--tf-gpu use raster-vision-examples-tf-gpu image and nvidia runtime
--tf-cpu use raster-vision-examples-tf-cpu image
--pytorch-gpu use raster-vision-examples-pytorch image and nvidia runtime

Note: raster-vision-examples-pytorch image is used by default
All arguments after above options are passed to 'docker run'.

Debug Mode

For debugging, it can be helpful to use a local copy of the Raster Vision source code rather than the version baked into the default Docker image. To do this, you can set the RASTER_VISION_REPO environment variable to the location of the main repo on your local filesystem. If this is set, docker/build will set the base image to raster-vision-{cpu,gpu}, and docker/run will mount $RASTER_VISION_REPO/rastervision to /opt/src/rastervision inside the container. You can then set breakpoints in your local copy of Raster Vision in order to debug experiments running inside the container.
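
A minimal sketch of that workflow, assuming the main repo is cloned at a placeholder path:

# Point docker/build and docker/run at a local clone of the main Raster Vision repo
export RASTER_VISION_REPO="$HOME/src/raster-vision"   # placeholder path

# Rebuild the images on top of the raster-vision-{cpu,gpu} base images
docker/build

# Run a console; $RASTER_VISION_REPO/rastervision is mounted to /opt/src/rastervision
docker/run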

How to Run an Example

There is a common structure across all of the examples which represents a best practice for defining experiments. Running an example involves the following steps.

  • Acquire raw dataset.
  • (Optional) Get processed dataset which is derived from the raw dataset, either using a Jupyter notebook, or by downloading the processed dataset.
  • (Optional) Do an abbreviated test run of the experiment on a small subset of data locally.
  • Run full experiment on GPU.
  • Inspect output.
  • (Optional) Make predictions on new imagery.

Each of the experiments has several arguments that can be set on the command line:

  • The input data for each experiment is divided into two directories: the raw data which is publicly distributed, and the processed data which is derived from the raw data. These two directories are set using the raw_uri and processed_uri arguments.
  • The output generated by the experiment is stored in the directory set by the root_uri argument.
  • The raw_uri, processed_uri, and root_uri can each be local or remote (on S3), and don't need to agree on whether they are local or remote.
  • Experiments have a test argument which runs an abbreviated experiment for testing/debugging purposes.

In the next section, we describe in detail how to run one of the examples, SpaceNet Rio Chip Classification. For other examples, we only note example-specific details.

SpaceNet Rio Building Chip Classification

This example performs chip classification to detect buildings on the Rio AOI of the SpaceNet dataset.

Step 1: Acquire Raw Dataset

The dataset is stored on AWS S3 at s3://spacenet-dataset. You will need an AWS account to access this dataset, but you will not be charged for accessing it. (To forward your AWS credentials into the container, use docker/run --aws.)

Optional: to run this example with the data stored locally, first copy the data using something like the following inside the container.

aws s3 sync s3://spacenet-dataset/AOIs/AOI_1_Rio/ /opt/data/spacenet-dataset/AOIs/AOI_1_Rio/

Step 2: Run the Jupyter Notebook

You'll need to do some data preprocessing, which you can do in the supplied Jupyter notebook.

> docker/run --jupyter [--aws]

The --aws option is only needed if pulling data from S3. In Jupyter inside the browser, navigate to the spacenet/spacenet_rio_chip_classification_data_prep.ipynb notebook. Set the URIs in the first cell and then run the rest of the notebook. Set the processed_uri to a local or S3 URI depending on where you want to run the experiment.

Jupyter Notebook

Step 3: Do a test run locally

The experiment we want to run is in examples/spacenet/rio/chip_classification.py. To run this, first get to the Docker console using:

> docker/run [--aws] [--gpu] [--tensorboard]

The --aws option is only needed if running experiments on AWS or using data stored on S3. The --gpu option should only be used if running on a local GPU. The --tensorboard option should be used if running locally and you would like to view Tensorboard. The test run can be executed using something like:

export RAW_URI="s3://spacenet-dataset/"
export PROCESSED_URI="/opt/data/examples/spacenet/rio/processed-data"
export ROOT_URI="/opt/data/examples/spacenet/rio/local-output"
rastervision run local -e examples.spacenet.rio.chip_classification \
    -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI \
    -a test True --splits 2

The sample above assumes that the raw data is on S3, and the processed data and output are stored locally. The raw_uri directory is assumed to contain an AOI_1_Rio subdirectory. This runs two parallel jobs for the chip and predict commands via --splits 2. See rastervision --help and rastervision run --help for more usage information.

Note that when running with -a test True, some crops of the test scenes are created and stored in processed_uri/crops/. All of the examples that use big image files use this trick to make the experiment run faster in test mode.

After running this, the main thing to check is that it didn't crash, and that the debug chips look correct. The debug chips can be found in the debug zip files in $ROOT_URI/chip/spacenet-rio-chip-classification/.
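
One way to unpack and skim them (a sketch; it assumes unzip is available where you run it and that ROOT_URI is the local path used above):

# List the zip files produced by the chip command; the debug chips are in the debug zips
ls $ROOT_URI/chip/spacenet-rio-chip-classification/*.zip
# Extract a debug zip (substitute the actual filename) and eyeball a few chips
unzip -o $ROOT_URI/chip/spacenet-rio-chip-classification/<debug zip> -d /tmp/debug-chips
ls /tmp/debug-chips | head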

Step 4: Run full experiment

To run the full experiment on GPUs using AWS Batch, use something like the following. Note that all the URIs are on S3 since remote instances will not have access to your local file system.

export RAW_URI="s3://spacenet-dataset/"
export PROCESSED_URI="s3://mybucket/examples/spacenet/rio/processed-data"
export ROOT_URI="s3://mybucket/examples/spacenet/rio/remote-output"
rastervision run aws_batch -e examples.spacenet.rio.chip_classification \
    -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI \
    -a test False --splits 8

For instructions on setting up AWS Batch resources and configuring Raster Vision to use them, see AWS Batch Setup. To monitor the training process using Tensorboard, visit <public dns>:6006 for the EC2 instance running the training job.
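
One way to look up that public DNS is with the AWS CLI (a sketch; it simply lists running instances, so you still need to identify the one launched by AWS Batch for the training job):

# List the public DNS names of currently running EC2 instances
aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].PublicDnsName' \
    --output text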

If you would like to run on a local GPU, replace aws_batch with local, and use local URIs. To monitor the training process using Tensorboard, visit localhost:6006, assuming you used docker/run --tensorboard.
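
For reference, such a local full run might look like the following (a sketch; the same arguments as the test run above, but with test False and with the processed data and output on the local filesystem):

export RAW_URI="s3://spacenet-dataset/"   # raw data can stay on S3 (use docker/run --aws), or point to a local copy
export PROCESSED_URI="/opt/data/examples/spacenet/rio/processed-data"
export ROOT_URI="/opt/data/examples/spacenet/rio/full-output"
rastervision run local -e examples.spacenet.rio.chip_classification \
    -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI \
    -a test False --splits 2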

Step 5: Inspect results

After everything completes, which should take about 1.5 hours if you're running on AWS using a p3.2xlarge instance for training and 8 splits, you should be able to find the predictions over the validation scenes in $root_uri/predict/spacenet-rio-chip-classification/. The evaluation metrics can be found in $root_uri/eval/spacenet-rio-chip-classification/eval.json. This is an example of the scores from a run, which show an F1 score of 0.96 for detecting chips with buildings.

[
    {
        "gt_count": 1460.0,
        "count_error": 0.0,
        "f1": 0.962031922725018,
        "class_name": "building",
        "recall": 0.9527397260273971,
        "precision": 0.9716098420590342,
        "class_id": 1
    },
    {
        "gt_count": 2314.0,
        "count_error": 0.0,
        "f1": 0.9763865660344931,
        "class_name": "no_building",
        "recall": 0.9822817631806394,
        "precision": 0.9706292067263268,
        "class_id": 2
    },
    {
        "gt_count": 3774.0,
        "count_error": 0.0,
        "f1": 0.970833365390128,
        "class_name": "average",
        "recall": 0.9708532061473236,
        "precision": 0.9710085728062825,
        "class_id": -1
    }
]

Step 6: Predict on new imagery

After running an experiment, a predict package is saved into $root_uri/bundle/spacenet-rio-chip-classification/. This can be used to make predictions on new images. See the Model Zoo section for more details.

Visualization using QGIS

To visualize a Raster Vision experiment, you can use QGIS to display the imagery, ground truth, and predictions associated with each scene. Although it's possible to just drag and drop files into QGIS, it's often more convenient to write a script to do this. Here is an example of a script to visualize the results for SpaceNet Vegas Semantic Segmentation.

Other Examples

SpaceNet Rio Building Semantic Segmentation

This experiment trains a semantic segmentation model to find buildings using the SpaceNet Rio dataset. A prerequisite is running the Rio Chip Classification Jupyter notebook, and all other details are the same as in that example.

Below are sample predictions and eval metrics.

SpaceNet Rio Building Semantic Segmentation

Eval Metrics
"overall": [
    {
        "recall": 0.6933642097495366,
        "precision": 0.7181072275154092,
        "class_name": "Building",
        "gt_count": 11480607,
        "count_error": 119679.64457523893,
        "f1": 0.7023217656506746,
        "class_id": 1
    },
    {
        "recall": 0.978149141560173,
        "precision": 0.9763586125303796,
        "class_name": "Background",
        "gt_count": 147757124,
        "count_error": 31820.188126279452,
        "f1": 0.9771849696422493,
        "class_id": 2
    },
    {
        "recall": 0.9576169230896666,
        "precision": 0.9577393905661922,
        "class_name": "average",
        "gt_count": 159237731,
        "count_error": 38154.615804881076,
        "f1": 0.9573680807430468,
        "class_id": null
    }
]

SpaceNet Vegas

This is a collection of examples using the SpaceNet Vegas dataset.

SpaceNet Vegas Simple Semantic Segmentation

This experiment is a simple example of doing semantic segmentation: the code is simple, there is no need to pre-process any data, and you don't need permission to use the data.

Arguments:

  • raw_uri should be set to the root of the SpaceNet data repository, which is at s3://spacenet-dataset, or a local copy of it. A copy only needs to contain the SpaceNet_Buildings_Dataset_Round2/spacenetV2_Train/AOI_2_Vegas subdirectory.
  • processed_uri should not be set because there is no processed data in this example. (An example invocation is sketched after this list.)
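
For instance, a quick local test run might look like this (a sketch; the module name examples.spacenet.vegas.simple_segmentation is assumed — check the examples/spacenet/vegas/ directory for the exact name):

export RAW_URI="s3://spacenet-dataset/"
export ROOT_URI="/opt/data/examples/spacenet/vegas/local-output"
# Note: no processed_uri argument is passed for this example
rastervision run local -e examples.spacenet.vegas.simple_segmentation \
    -a raw_uri $RAW_URI -a root_uri $ROOT_URI \
    -a test True --splits 2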

Below are sample predictions and eval metrics.

SpaceNet Vegas Buildings in QGIS

Eval Metrics
[
    {
        "class_id": 1,
        "precision": 0.9166443308607926,
        "recall": 0.7788752910479124,
        "gt_count": 62924777,
        "count_error": 31524.39656560088,
        "class_name": "Building",
        "f1": 0.8387483150445183
    },
    {
        "class_id": 2,
        "precision": 0.9480938442744736,
        "recall": 0.9648479452702291,
        "gt_count": 262400223,
        "count_error": 29476.379317139523,
        "class_name": "Background",
        "f1": 0.9527945047747147
    },
    {
        "class_id": null,
        "precision": 0.942010839223173,
        "recall": 0.9288768769691843,
        "gt_count": 325325000,
        "count_error": 29872.509429032507,
        "class_name": "average",
        "f1": 0.930735545099091
    }
]

SpaceNet Vegas Roads and Buildings: All Tasks

This experiment can be configured to do any of the three tasks on either roads or buildings. It is an example of how to structure experiment code to support a variety of options. It also demonstrates how to use line strings as labels for roads (via buffering) and how to generate polygon output for semantic segmentation on buildings.

Arguments:

  • raw_uri should be set to the root of the SpaceNet data repository, which is at s3://spacenet-dataset, or a local copy of it. For buildings, a copy only needs to contain the SpaceNet_Buildings_Dataset_Round2/spacenetV2_Train/AOI_2_Vegas subdirectory. For roads, SpaceNet_Roads_Competition/Train/AOI_2_Vegas_Roads_Train.
  • processed_uri should not be set because there is no processed data in this example.
  • task_type can be set to chip_classification, object_detection, or semantic_segmentation.
  • target can be buildings or roads.

Note that for semantic segmentation on buildings, polygon output in the form of GeoJSON files will be saved to the predict directory alongside the GeoTIFF files. In addition, a vector evaluation file using SpaceNet metrics will be saved to the eval directory. Running semantic segmentation on roads trains a Mobilenet for 100k steps, which takes about 6 hours on a P3 instance.
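
For example, semantic segmentation on buildings could be exercised locally in test mode roughly like this (a sketch; the experiment module name below is a placeholder — check examples/spacenet/vegas/ for the actual file — and processed_uri is omitted as described above):

export RAW_URI="s3://spacenet-dataset/"
export ROOT_URI="/opt/data/examples/spacenet/vegas/all-tasks-output"
# examples.spacenet.vegas.all is a placeholder module name
rastervision run local -e examples.spacenet.vegas.all \
    -a raw_uri $RAW_URI -a root_uri $ROOT_URI \
    -a task_type semantic_segmentation -a target buildings \
    -a test True --splits 2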

Sample predictions and eval metrics can be seen below.

Spacenet Vegas Roads in QGIS

Eval Metrics
[
    {
        "count_error": 131320.3497452814,
        "precision": 0.79827727905979,
        "f1": 0.7733719736453241,
        "class_name": "Road",
        "class_id": 1,
        "recall": 0.7574370618553649,
        "gt_count": 47364639
    },
    {
        "count_error": 213788.03361026093,
        "precision": 0.9557015578601281,
        "f1": 0.909516065847437,
        "class_name": "Background",
        "class_id": 2,
        "recall": 0.8988113906793058,
        "gt_count": 283875361
    },
    {
        "count_error": 201995.82229692052,
        "precision": 0.9331911601569118,
        "f1": 0.8900485625895702,
        "class_name": "average",
        "class_id": null,
        "recall": 0.8785960059171598,
        "gt_count": 331240000
    }
]

Variant: Use vector tiles to get labels

It is possible to use vector tiles as a source of labels, either in z/x/y or .mbtiles format. To use vector tiles instead of GeoJSON, run the experiment with the following argument: -a vector_tile_options "<uri>,<zoom>,<id_field>". See the vector tile docs for a description of these arguments.

To run this example using OSM tiles in .mbtiles format, first create an extract around Las Vegas using:

cd /opt/data
wget https://s3.amazonaws.com/mapbox/osm-qa-tiles-production/latest.country/united_states_of_america.mbtiles.gz
# unzipping takes a few minutes
gunzip united_states_of_america.mbtiles.gz
npm install tilelive-copy
tilelive-copy \
    --minzoom=0 --maxzoom=14 \
    --bounds="-115.43472290039062,35.98689628443789,-114.91836547851562,36.361586786517776" \
    united_states_of_america.mbtiles vegas.mbtiles

Using the entire USA file would be very slow. Then run the roads example using something like -a vector_tile_options "/opt/data/vegas.mbtiles,12,@id".

If you are not using OSM tiles, you might need to change the class_id_to_filter values in the experiment configuration. Each class_id_to_filter is a mapping from class_id to a Mapbox GL filter, which is used to assign class ids to features based on their properties field.

SpaceNet Vegas Hyperparameter Search

This experiment set runs several related experiments that differ only in the base learning rate. The experiments all work over the same dataset (SpaceNet Vegas buildings), and the analyze and chip stages are shared between all of them. That sharing of early stages is achieved by making sure that the chip_key and analyze_key are the same for all of the experiments, so that Raster Vision can detect the redundancy.

Arguments:

  • raw_uri should be set to the root of the SpaceNet data repository, which is at s3://spacenet-dataset, or a local copy of it. A copy only needs to contain the SpaceNet_Buildings_Dataset_Round2/spacenetV2_Train/AOI_2_Vegas subdirectory.
  • processed_uri should not be set because there is no processed data in this example.
  • learning_rates is a comma-delimited list of learning rates to use for the experiments. Example: -a learning_rates '0.0001,0.001,0.002,0.003,0.004,0.005,0.10'

The number of steps is 10,000 for all experiments. Because this is for demonstration purposes only, the training dataset has been reduced to only 128 scenes.

The F1 scores for buildings as a function of base learning rate are shown below.

Base Learning Rate | Building F1 Score
0.0001             | 0.7337961752864327
0.001              | 0.7616993477580662
0.002              | 0.7889177881341606
0.003              | 0.7864549469541627
0.004              | 0.4194065664072375
0.005              | 0.5070458576486434
0.1                | 0.5046626369613472

Disclaimer: We are not claiming that the numbers above are useful or interesting; the sole intent here is to demonstrate how to vary hyperparameters using Raster Vision.

ISPRS Potsdam Semantic Segmentation

This experiment performs semantic segmentation on the ISPRS Potsdam dataset. The dataset consists of 5cm aerial imagery over Potsdam, Germany, segmented into six classes: building, tree, low vegetation, impervious, car, and clutter. For more info see our blog post.

Data:

  • The dataset can only be downloaded after filling in this request form. After your request is granted, follow the link to 'POTSDAM 2D LABELING', download and unzip 4_Ortho_RGBIR.zip and 5_Labels_for_participants.zip into a directory, and then upload the directory to S3 if desired.

Arguments:

  • raw_uri should contain 4_Ortho_RGBIR and 5_Labels_for_participants subdirectories.
  • processed_uri should be set to a directory which will be used to store test crops. (An example invocation is sketched after this list.)
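
For instance, a local test run might look like the following (a sketch; the module name examples.potsdam.semantic_segmentation is the one listed in the backends section above, and the URIs are placeholders):

export RAW_URI="/opt/data/potsdam"   # contains 4_Ortho_RGBIR and 5_Labels_for_participants
export PROCESSED_URI="/opt/data/examples/potsdam/processed-data"
export ROOT_URI="/opt/data/examples/potsdam/local-output"
rastervision run local -e examples.potsdam.semantic_segmentation \
    -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI \
    -a test True --splits 2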

Below are sample predictions and eval metrics.

Potsdam segmentation predictions

Eval Metrics
[
        {
            "precision": 0.9003686311706696,
            "recall": 0.8951149482868683,
            "f1": 0.8973353554371246,
            "count_error": 129486.40233074076,
            "gt_count": 1746655.0,
            "conf_mat": [
                0.0,
                1563457.0,
                7796.0,
                5679.0,
                10811.0,
                126943.0,
                31969.0
            ],
            "class_id": 1,
            "class_name": "Car"
        },
        {
            "precision": 0.9630047813515502,
            "recall": 0.9427071079228886,
            "f1": 0.9525027991356272,
            "count_error": 1000118.8466519706,
            "gt_count": 28166583.0,
            "conf_mat": [
                0.0,
                6976.0,
                26552838.0,
                743241.0,
                71031.0,
                556772.0,
                235725.0
            ],
            "class_id": 2,
            "class_name": "Building"
        },
        {
            "precision": 0.8466609755403327,
            "recall": 0.8983221897241067,
            "f1": 0.8715991836041085,
            "count_error": 3027173.8852443425,
            "gt_count": 30140893.0,
            "conf_mat": [
                0.0,
                4306.0,
                257258.0,
                27076233.0,
                1405095.0,
                1110647.0,
                287354.0
            ],
            "class_id": 3,
            "class_name": "Low Vegetation"
        },
        {
            "precision": 0.883517319858661,
            "recall": 0.8089167109558072,
            "f1": 0.8439042868078945,
            "count_error": 1882745.6869677808,
            "gt_count": 16928529.0,
            "conf_mat": [
                0.0,
                34522.0,
                157012.0,
                2484523.0,
                13693770.0,
                485790.0,
                72912.0
            ],
            "class_id": 4,
            "class_name": "Tree"
        },
        {
            "precision": 0.9123212945945467,
            "recall": 0.9110533473255575,
            "f1": 0.9115789047144218,
            "count_error": 1785561.1048684688,
            "gt_count": 29352493.0,
            "conf_mat": [
                0.0,
                99015.0,
                451628.0,
                1307686.0,
                262292.0,
                26741687.0,
                490185.0
            ],
            "class_id": 5,
            "class_name": "Impervious"
        },
        {
            "precision": 0.42014399072332975,
            "recall": 0.47418711749488085,
            "f1": 0.44406088467218563,
            "count_error": 787395.6814824425,
            "gt_count": 1664847.0,
            "conf_mat": [
                0.0,
                28642.0,
                157364.0,
                340012.0,
                59034.0,
                290346.0,
                789449.0
            ],
            "class_id": 6,
            "class_name": "Clutter"
        },
        {
            "precision": 0.8949197573420392,
            "recall": 0.8927540185185187,
            "f1": 0.8930493260224918,
            "count_error": 1900291.674768574,
            "gt_count": 108000000.0,
            "conf_mat": [
                [
                    0.0,
                    0.0,
                    0.0,
                    0.0,
                    0.0,
                    0.0,
                    0.0
                ],
                [
                    0.0,
                    1563457.0,
                    7796.0,
                    5679.0,
                    10811.0,
                    126943.0,
                    31969.0
                ],
                [
                    0.0,
                    6976.0,
                    26552838.0,
                    743241.0,
                    71031.0,
                    556772.0,
                    235725.0
                ],
                [
                    0.0,
                    4306.0,
                    257258.0,
                    27076233.0,
                    1405095.0,
                    1110647.0,
                    287354.0
                ],
                [
                    0.0,
                    34522.0,
                    157012.0,
                    2484523.0,
                    13693770.0,
                    485790.0,
                    72912.0
                ],
                [
                    0.0,
                    99015.0,
                    451628.0,
                    1307686.0,
                    262292.0,
                    26741687.0,
                    490185.0
                ],
                [
                    0.0,
                    28642.0,
                    157364.0,
                    340012.0,
                    59034.0,
                    290346.0,
                    789449.0
                ]
            ],
            "class_id": null,
            "class_name": "average"
        }
]

COWC Potsdam Car Object Detection

This experiment performs object detection on cars with the Cars Overhead With Context dataset over Potsdam, Germany.

Data:

  • The imagery can only be downloaded after filling in this request form. After your request is granted, follow the link to 'POTSDAM 2D LABELING' and download and unzip 4_Ortho_RGBIR.zip into a directory, and then upload to S3 if desired. (This step uses the same imagery as ISPRS Potsdam Semantic Segmentation)
  • Download the processed labels and unzip. These files were generated from the COWC car detection dataset using scripts in cowc.data. TODO: make a Jupyter notebook showing how to process the raw labels from scratch.

Arguments:

  • raw_uri should point to the imagery directory created above, and should contain the 4_Ortho_RGBIR subdirectory.
  • processed_uri should point to the labels directory created above. It should contain the labels/all subdirectory. (An example invocation is sketched after this list.)
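
For instance, a local test run might look like this (a sketch; the module name examples.cowc.object_detection is the one listed in the backends section above, and the URIs are placeholders):

export RAW_URI="/opt/data/potsdam"             # contains the 4_Ortho_RGBIR subdirectory
export PROCESSED_URI="/opt/data/cowc-labels"   # contains the labels/all subdirectory
export ROOT_URI="/opt/data/examples/cowc/local-output"
rastervision run local -e examples.cowc.object_detection \
    -a raw_uri $RAW_URI -a processed_uri $PROCESSED_URI -a root_uri $ROOT_URI \
    -a test True --splits 2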

Below are sample predictions and eval metrics.

COWC Potsdam predictions

Eval Metrics
    {
        "precision": 0.9390652367984924,
        "recall": 0.9524752475247524,
        "f1": 0.945173902480464,
        "count_error": 0.015841584158415842,
        "gt_count": 505.0,
        "class_id": 1,
        "class_name": "vehicle"
    },
    {
        "precision": 0.9390652367984924,
        "recall": 0.9524752475247524,
        "f1": 0.945173902480464,
        "count_error": 0.015841584158415842,
        "gt_count": 505.0,
        "class_id": null,
        "class_name": "average"
    }

xView Vehicle Object Detection

This experiment performs object detection to find vehicles using the DIUx xView Detection Challenge dataset.

Data:

Arguments:

  • The raw_uri should point to the directory created above, and contain a labels GeoJSON file named xView_train.geojson, and a directory named train_images.
  • The processed_uri should point to the processed data generated by the notebook.

Below are sample predictions and eval metrics.

xView predictions

Eval Metrics
{
    "class_name": "vehicle",
    "precision": 0.4789625193065175,
    "class_id": 1,
    "f1": 0.4036499117825103,
    "recall": 0.3597840599059615,
    "count_error": -0.2613920009287745,
    "gt_count": 17227
},
{
    "class_name": "average",
    "precision": 0.4789625193065175,
    "class_id": null,
    "f1": 0.4036499117825103,
    "recall": 0.3597840599059615,
    "count_error": -0.2613920009287745,
    "gt_count": 17227
}

Model Zoo

Using the Model Zoo, you can download prediction packages which contain pre-trained models and configuration, and then run them on sample test images that the model wasn't trained on.

> rastervision predict <predict_package> <infile> <outfile>

Note that the input file is assumed to have the same channel order and statistics as the images the model was trained on. See rastervision predict --help to see options for manually overriding these. It shouldn't take more than a minute on a CPU to make predictions for each sample. For some of the examples, there are also model files that can be used for fine-tuning on another dataset.

Disclaimer: These models are provided for testing and demonstration purposes and aren't particularly accurate. As is usually the case for deep learning models, the accuracy drops greatly when used on input that is outside the training distribution. In other words, a model trained in one city probably won't work well in another city (unless they are very similar) or at a different imagery resolution.

PyTorch Models

For the PyTorch models, the prediction package (when unzipped) contains a model file which can be used for fine-tuning.

Dataset                  | Task                  | Model                | Prediction Package | Sample Image
SpaceNet Rio Buildings   | Chip Classification   | Resnet50             | link               | link
SpaceNet Vegas Buildings | Semantic Segmentation | DeeplabV3/Resnet50   | link               | link
SpaceNet Vegas Roads     | Semantic Segmentation | DeeplabV3/Resnet50   | link               | link
ISPRS Potsdam            | Semantic Segmentation | DeeplabV3/Resnet50   | link               | link
COWC Potsdam (Cars)      | Object Detection      | Faster-RCNN/Resnet18 | link               | link

Tensorflow Models

Dataset                  | Task                  | Model     | Prediction Package | Sample Image | Model (for fine-tuning)
SpaceNet Rio Buildings   | Chip Classification   | Resnet50  | link               | link         | link
SpaceNet Vegas Buildings | Semantic Segmentation | Mobilenet | link               | link         | n/a
SpaceNet Vegas Roads     | Semantic Segmentation | Mobilenet | link               | link         | link
ISPRS Potsdam            | Semantic Segmentation | Mobilenet | link               | link         | link
COWC Potsdam (Cars)      | Object Detection      | Mobilenet | link               | link         | n/a
xView Vehicle            | Object Detection      | Mobilenet | link               | link         | n/a
