Laika
Notes and experiments with satellite image data.
Synopsis
The goal of this repo is to research potential sources of satellite image data and to implement various algorithms for satellite image segmentation.
Table of contents
- Running the code
- Background research
Running the code
The following steps describe the end-to-end flow for the current work in progress. The implementation makes use of a utility to help build a training dataset, and a SegNet encoder/decoder network for image segmentation.
Creating a training dataset
Install some OS dependencies:
brew install mapnik
brew install parallel
Clone and install the skynet-data project:
git clone https://github.com/developmentseed/skynet-data
cd skynet-data
The skynet-data project is a tool for sampling OSM QA tiles and associated satellite image tiles from Mapbox.
The first task is to decide which classes to include in the dataset. These are specified in a JSON configuration file and follow the OSM tag format. This project attempts to identify 6 types of land use and objects:
- residential
- commercial
- industrial
- vegetation (A hierarchical filter including woodland, trees, scrub, grass etc.)
- buildings (note the extensive list of building types)
- brownfield
cd into the classes directory and create a new configuration, mine.json:
[{
"name": "residential",
"color": "#010101",
"stroke-width": "1",
"filter": "[landuse] = 'residential'",
"sourceLayer": "osm"
}, {
"name": "commercial",
"color": "#020202",
"stroke-width": "1",
"filter": "[landuse] = 'commercial'",
"sourceLayer": "osm"
}, {
"name": "industrial",
"color": "#030303",
"stroke-width": "1",
"filter": "[landuse] = 'industrial'",
"sourceLayer": "osm"
}, {
"name": "vegetation",
"color": "#040404",
"stroke-width": "1",
"filter": "([natural] = 'wood') or
([landuse] = 'forest') or
([landuse] = 'tree_row') or
([landuse] = 'tree') or
([landuse] = 'scrub') or
([landuse] = 'heath') or
([landuse] = 'grassland') or
([landuse] = 'orchard') or
([landuse] = 'farmland') or
([landuse] = 'allotments') or
([surface] = 'grass') or
([landuse] = 'meadow') or
([landuse] = 'vineyard')",
"sourceLayer": "osm"
},
{
"name": "building",
"color": "#050505",
"stroke-width": "1",
"filter": "[building].match('.+')",
"sourceLayer": "osm"
},
{
"name": "brownfield",
"color": "#060606",
"stroke-width": "1",
"filter": "[landuse] = 'brownfield'",
"sourceLayer": "osm"
}]
The skynet-data tool will use this configuration to create ground-truth labels for the specified classes. For each satellite image instance, its pixel-by-pixel ground truth will be encoded as an image of the same size as the satellite image. An individual class is encoded by colour, such that a pixel belonging to a given class will assume one of 7 colour values corresponding to the above configuration.
For example, a pixel belonging to the vegetation class will assume the RGB colour #040404 and a building will assume the value #050505. Note that these can be any RGB colours. For convenience, I have chosen to encode the class number in each of the 3 RGB bytes so that it can be easily retrieved later on without the need for a lookup table.
Note that it is possible for a pixel to assume an unknown class, in which case it can be considered as "background". Thus, unknown pixels have been encoded as #000000 (the 7th class).
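Because the class id is repeated in all three RGB bytes, any single channel of a label image is already the class-id matrix. A minimal sketch in Python (the label file path below is hypothetical):

import cv2  # opencv

label = cv2.imread("data/labels/color/some_tile.png")   # hypothetical label tile
class_ids = label[:, :, 0]                               # H x W array of values 0..6
print((class_ids == 4).sum(), "vegetation pixels")       # 4 = vegetation, per the config above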
Next, in the skynet-data parent directory, add the following to the Makefile:
QA_TILES?=united_kingdom
BBOX?='-3.3843,51.2437,-2.3923,51.848'
IMAGE_TILES?="tilejson+https://a.tiles.mapbox.com/v4/mapbox.satellite.json?access_token=$(MapboxAccessToken)"
TRAIN_SIZE?=10000
CLASSES?=classes/mine.json
ZOOM_LEVEL?=17
This will instruct the subsequent steps to download 10,000 images from within a bounding box (here, part of the South West of the UK). The images will be randomly sampled within the bounding box area. Zoom level 17 corresponds to approx. 1m per pixel resolution. To specify the bounding box area, this tool is quite handy. Note that coordinates are specified in the following form:
min lon, min lat, max lon, max lat (i.e., west, south, east, north)
Before following the next steps, go to Mapbox and sign up for a developer key.
Having obtained your developer key, store it in an environment variable:
export MapboxAccessToken="my_secret_token"
Then initiate the download process:
make clean
rm -f data/all_tiles.txt
make download-osm-tiles
make data/all_tiles.txt
make data/sample.txt
make data/labels/color
make data/images
You will end up with 10,000 images in data/images and 10,000 "ground truth" images in data/labels. data/sample-filtered.txt contains a list of files in which at least 1 pixel belongs to a specified class.
Note that there is a convenience tool in the skynet-data utility for quickly viewing the downloaded data. To use it, first install a local webserver, e.g., Nginx, and add an alias to the preview.html file. You can then visualise the sampled tiles via a URL of the following form:
http://localhost:8080/preview.html?accessToken=MAPBOX_KEY&prefix=data
See notebooks for a visual inspection of some of the data. The following shows some of the downloaded tiles with overlaid OSM labels:
The level of detail can be quite fine in places, while in others, quite sparse. This example shows a mix of industrial (yellow) and commercial (blue) land areas mixed in with buildings (red) and vegetation (green).
The model
The model implemented here is the SegNet encoder/decoder architecture. There are 2 variations of this architecture, of which the simplified version has been implemented here. See the paper for details. Briefly, the architecture is suited to multi-class pixel-by-pixel segmentation and has been shown to be effective in scene understanding tasks. Given this, it may also be suited to segmentation of satellite imagery.
Side note: the architecture has been shown to be very effective at segmenting images from car dashboard cameras, and is of immediate interest in our street-view related research.
The model, specified in model.py, consists of 2 main components. The first is an encoder which takes as input a 256x256 RGB image and compresses the image into a set of features. In fact, this component is the same as a VGG16 network without the final fully connected layer. In place of the final fully connected layer, the encoder is connected to a decoder. This decoder is a mirror image of the encoder, and acts to up-sample the features.
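A minimal sketch of this encoder/decoder pattern in Keras (this is not the repo's model.py: the number and size of layers here are illustrative assumptions, whereas the real encoder follows VGG16):

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, Reshape, Activation

n_classes = 7  # 6 configured classes + background (assumed)

model = Sequential([
    # encoder: convolve and down-sample the 256x256 RGB input into features
    Conv2D(64, (3, 3), padding="same", activation="relu", input_shape=(256, 256, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    MaxPooling2D((2, 2)),
    # decoder: mirror of the encoder, up-sampling back to the input resolution
    UpSampling2D((2, 2)),
    Conv2D(128, (3, 3), padding="same", activation="relu"),
    UpSampling2D((2, 2)),
    Conv2D(64, (3, 3), padding="same", activation="relu"),
    # per-pixel class probabilities
    Conv2D(n_classes, (1, 1), padding="same"),
    Reshape((256 * 256, n_classes)),
    Activation("softmax"),
])
model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])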
The final output of the model is an N*p matrix, where p = 256*256, corresponding to the original number of image pixels, and N is the number of segment classes. As such, each pixel has an associated class probability vector. The predicted segment/class can be extracted by taking the max of these values.
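A sketch of that final step (the axis ordering of the prediction array is an assumption; adjust it to match the model's actual output shape):

import numpy as np

probs = model.predict(images)              # assumed shape: (batch, 256*256, n_classes)
segments = probs.argmax(axis=-1)           # most probable class per pixel
segments = segments.reshape(-1, 256, 256)  # back to image layout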
Training
First install numpy, theano, keras and OpenCV. Then:
python3 train.py
train.py will use the training data created with skynet-data in the previous step. Note that by default, train.py expects to find this data in ../skynet-data/data. Having loaded the raw training data and associated segment labels into a numpy array, the data are stored in HDF5 format in data/training.hdf5. On subsequent runs, the data loader will first look for this HDF5 data to reduce the startup time. Note that data/training.hdf5 can be used by other models/frameworks/languages.
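A sketch of reading this cached file from Python with h5py (the dataset names below are assumptions; list the file's keys to find the real ones):

import h5py

with h5py.File("data/training.hdf5", "r") as f:
    print(list(f.keys()))        # discover the actual dataset names
    images = f["images"][:]      # hypothetical dataset name
    labels = f["labels"][:]      # hypothetical dataset name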
In its current form, all parameters are hard-coded. These are the default parameters:

Parameter | Default | Note
---|---|---
validation | 0.2 | fraction of the dataset used as the validation subset
epochs | 10 | number of training epochs
learning_rate | 0.001 | learning rate
momentum | 0.9 | momentum
As-is, the model converges at a slow rate:
Training and validation errors (loss and accuracy) are stored in training_log.csv. On completion, the network weights are dumped into weights.hdf5. Note that this may be loaded into the same model implemented in another language/framework.
Validating
Having trained the model, validate it using the testing data held back in the data-preparation stage:
python3 validate.py
validate.py expects to find the trained model weights in weights.hdf5 in the current working directory. In addition to printing out the validation results, the pixel-by-pixel class probabilities for each instance are stored in predictions.hdf5, which can be inspected to debug the model.
Running
feed_forward.py takes as input trained weights, an input image and an output directory to produce pixel-by-pixel class predictions.
python3 feed_forward.py <hdf5 weights> <img location> <output dir>
Specifically, given an input satellite image, the script outputs the number of pixels belonging to each of the land-use classes (including background), such that the sum of class pixels equals the total number of pixels in the image. In addition, the script will output class heatmap and class segment visualisations into the <output dir>.
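The per-class pixel counts described above can be reproduced from a per-pixel segment map in a couple of lines of numpy (a sketch, reusing the segments array from the earlier snippet; the class count is assumed from the configuration above):

import numpy as np

n_classes = 7                                    # 6 configured classes + background (assumed)
segment_map = segments[0]                        # one tile's per-pixel class ids
counts = np.bincount(segment_map.ravel(), minlength=n_classes)
assert counts.sum() == segment_map.size          # class pixel counts sum to the total pixel count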
The class heatmaps (1 image per class) show the model's confidence that a pixel belongs to a particular class (buildings and vegetation shown above). Given these confidences, the class with the maximum value at each pixel is used to determine the final pixel segment, shown on the right in the above image. Some more visualisations can be found in this notebook.
Further work/notes
- The model as-is is quite poor (trained to only 70% accuracy over the validation set).
- The model has only been trained once: fine-tuning and hyperparameter search has not yet been completed.
- The training data is very noisy: the segments are only partially labelled. As such, missing labels are assigned as "background".
Background research
Satellite themes of interest
In general, satellite image processing can be categorised into two main themes:
Earth Observation (EO)
The field of Earth observation is concerned with monitoring the status of the planet with various sensors, which includes, but is not limited to, the use of satellite data for monitoring large areas of the earth's surface at regular, frequent intervals.
EO is a broad area, which may cover water management, forestry, agriculture, urban fabric and land-use/cover in general.
A good example of EO is the use of the normalised difference vegetation index (NDVI) for monitoring nationwide vegetation levels.
This sub-field does not depend on very high resolution data since it is mainly concerned with quantifying some aspect of very large areas of land.
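As an illustration of the NDVI mentioned above, a minimal sketch assuming the red and near-infrared bands are floating-point arrays of the same shape:

import numpy as np

def ndvi(red, nir):
    # Normalised difference vegetation index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero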
Object detection
In contrast to EO, the field of object detection from satellite data is principally concerned with the localisation of specific objects as opposed to general land-use cover.
A good example of object detection from satellite data is counting cars in car parks, from which various indicators can be derived depending on the context. For example, it would be possible to derive some form of consumer/retail indicator by periodically counting cars in supermarket car parks.
This sub-field typically requires very high resolution data, depending on the application.
Computer vision themes of interest
In addition to the two main satellite image processing themes of interest (EO and object detection), there are four more general image processing sub-fields which may be applicable to a problem domain. From "easiest" to most difficult:
-
Classification. At the lowest level, the task is to identify which objects are present in a scene. If there are many objects, the output may be an ordered list sorted by amount or likelihood of the object being present in the scene. The classification may also extend beyond objects to abstract concepts such as aesthetic value.
-
Detection. The next level involves localisation of the entities/concepts in the scene. Typically this will include a bounding-box around identified objects, and/or object centroids.
-
Segmentation. This level extends classification and detection to include pixel-by-pixel class labeling. Each pixel in the scene must be labeled with a particular class, such that the entire scene can be described. Segmentation is particularly appropriate for land-use cover. In addition, segmentation may be extended to provide a form of augmented bounding-box: pixels outside of the bounding box area can be negatively weighted, pixels on the border weighted +1 and pixels inside the region weighted in [0, 1], inversely proportional to the distance from the bounding box perimeter (a minimal sketch of this weighting follows this list).
-
Instance segmentation. Perhaps the most challenging theme: In addition to pixel-by-pixel segmentation, provide a segmented object hierarchy, such that objects/areas belonging to the same class may be individually segmented. E.g., segments for cars and car models. Commercial area, office within a commercial area, roof-top belonging to a shop etc.
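A minimal sketch of the augmented bounding-box weighting described in item 3 (the exact scaling is an assumption; compare with the "Signed Distance Transform" labelling technique referenced later in this document):

import numpy as np
from scipy.ndimage import distance_transform_edt

def box_weights(height, width, y0, x0, y1, x1):
    # +1 on the box border, decaying towards 0 inside the box, -1 outside
    inside = np.zeros((height, width), dtype=bool)
    inside[y0:y1, x0:x1] = True
    dist = distance_transform_edt(inside)                 # distance to the box border
    weights = np.full((height, width), -1.0)
    weights[inside] = 1.0 - dist[inside] / max(dist.max(), 1.0)
    return weights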
The initial version of this project focuses on option 3: image segmentation in the domain of both Earth Observation and object detection.
Data sources
There are three types of data of interest for this project.
-
Raw image data. There are numerous sources of satellite image data, ranging from lower resolution (open) data most suited to EO applications, through to high resolution (mostly proprietary) datasets.
-
Pre-labeled image data. For training an image classification, object detection or image segmentation supervised learning model, it is necessary to obtain ample training instances along with associated ground truth labels. In the domain of general image classification, there exist plenty of datasets which are mainly used to benchmark various algorithms.
-
Image labels. It will later be required to create training datasets with arbitrary labeled instances. For this reason, a source of ground-truth and/or a set of tools to facilitate image labeling and manual image segmentation will be necessary.
Raw image data
This project (to date) focuses specifically on open data. The 2 main data sources for EO grade images come from the Sentinel-2 and Landsat-8 satellites. Both satellites host a payload of multi-spectrum imaging equipment.
Sentinel 2 (ESA)
The Sentinel-2 satellite is capable of sensing the following wavelengths:
Band | Wavelength (μm) | Resolution (m)
---|---|---
01 - Coastal aerosol | 0.443 | 60
02 - Blue | 0.490 | 10
03 - Green | 0.560 | 10
04 - Red | 0.665 | 10
05 - Vegetation Red Edge | 0.705 | 20
06 - Vegetation Red Edge | 0.740 | 20
07 - Vegetation Red Edge | 0.783 | 20
08 - NIR | 0.842 | 10
8A - Narrow NIR | 0.865 | 20
09 - Water vapour | 0.945 | 60
10 - SWIR - Cirrus | 1.375 | 60
11 - SWIR | 1.610 | 20
12 - SWIR | 2.190 | 20
The visible spectrum captured by Sentinel-2 is the highest (open) data resolution available: 10 metres per pixel. Observations are frequent: Every 5 days for the same viewing angle.
Landsat-8 (NASA)
The Landsat-8 satellite is limited to 30m resolution across all wavelengths, with the exception of its panchromatic sensor, which is capable of capturing 15m per pixel data. The revisit frequency for Landsat-8 is 16 days.
Band | Wavelength (μm) | Resolution (m)
---|---|---
01 - Coastal / Aerosol | 0.433 – 0.453 | 30
02 - Blue | 0.450 – 0.515 | 30
03 - Green | 0.525 – 0.600 | 30
04 - Red | 0.630 – 0.680 | 30
05 - Near Infrared | 0.845 – 0.885 | 30
06 - Short Wavelength Infrared | 1.560 – 1.660 | 30
07 - Short Wavelength Infrared | 2.100 – 2.300 | 30
08 - Panchromatic | 0.500 – 0.680 | 15
09 - Cirrus | 1.360 – 1.390 | 30
Links
-
sentinel-playground - This is a nice demo showing the available sentinel-2 bands.
-
Copernicus Open Access Hub - The Copernicus Open Access Hub (previously known as Sentinels Scientific Data Hub) provides complete, free and open access to Sentinel-1, Sentinel-2 and Sentinel-3.
-
Satellite Applications Catapult - Data discovery hub - Very nice. Lists pretty much everything.
-
Satellite Applications Catapult - Sentinel Data Access Service (SEDAS) - (API) portal enabling end-users to search and download data from Sentinel 1 & 2 satellites. It aims to lower barriers to entry and create the foundations for an integrated approach to satellite data access.
-
Google - 25k per day... :) - it would also be easy to build a testing tool from this. Note that this data is < 1m resolution, so perfectly suitable for object detection.
-
Earth on AWS - Lots of data sources (inc. sentinel and landsat) + platform for large-scale processing.
Papers
Other
-
Terrapattern - alpha version of Terrapattern: "similar-image search" for satellite photos. They use a ResNet. This is truly awesome.
-
TEP Urban platform - Thematic Apps - Lots of things. Urban density / GUF. Derived from sat data.
-
Github satellite-imagery view - Good starting point.
-
Awesome Sentinel - Github - Sat data tools/utils/visualisations etc.
Pre-labeled image data
There are numerous sources of pre-labeled image data available. Recently, there have been a number of satellite image related competitions hosted on Kaggle and TopCoder. This data may be useful to augment an existing dataset, to pre-train models or to train a model for later use in an ensemble.
-
Ships in Satellite Imagery - Kaggle - From Open California dataset.
-
2D Semantic Labeling - Vaihingen data - This is from a true orthophotographic survey (from a plane). The data is 9cm resolution!
-
SAT-4 and SAT-6 airborne datasets - Very high res (1m/px), whole of the US, 65 terabytes (DeepSat, 2015). SAT-4: barren land, trees, grassland, other. SAT-6: barren land, trees, grassland, roads, buildings, water bodies. (RGB + NIR)
Object and land-use labels
There exist a number of land-use and land-cover datasets. The main issue is dataset age: if the objective is to identify construction sites or urban sprawl, then a dataset more than a year old is next to useless, unless it can be matched to imagery from the same time period, which would then only be useful for the creation of a training dataset.
The most promising source (imo) is the OpenStreetMap project, since it is updated constantly and contains an extensive hierarchy of relational objects. There is also the possibility to contribute back to the OSM project should manual labeling be necessary.
-
UC Merced Land Use Dataset - 2100 256x256, 1m/px aerial RGB images over 21 land use classes. US specific. (2010)
-
Open street map landuse - OSM landuse visualised in this tool. Some studies have used this in combo. with google sat. images.
-
European Environment Agency - Urban Atlas - Land cover data for Large Urban Zones with more than 100,000 inhabitants.
Modeling
The proof of concept in this project makes use of concepts from deep learning. A review of the current state of the art, covering papers, articles, competitive data science platform results and open source projects, indicates that the most recent advances have been in the area of image segmentation - most likely fueled by research trends in autonomous driving.
From the image segmentation domain, the top performing, most recent developments tend to be some form of encoder/decoder neural network in which the outputs of a standard CNN topology (E.g., VGG16) are upsampled to form class probability matrices with the same dimensions as the original input image.
The following papers and resources provide a good overview of the field.
Modeling Papers
Satellite specific
2017
-
Urban environments from satellite imagery - Github - Comparing urban environments using satellite imagery and convolutional neural nets. This is a very nice paper - lots of references to datasets. The focus is on open data. They use google maps api + Urban Atlas for ground truth. Note: Urban Atlas last updated in 2010. They do not do pixel-wise classification, instead predict the classes inside the tile.
-
Deep Learning Based Large-Scale Automatic Satellite Crosswalk Classification - Zebra crossing identification using Google sat image data + Open street map zebra crossing locations. Associated code here.
2016
-
Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks - Per-pixel classification of vegetation, ground, roads, buildings and water. Per-pixel classification is done in a sliding window: the pixel to be classified is the centre pixel of the window. The window has a number of channels (typically r, g, b) corresponding to the wavelengths available from the satellite, e.g., it may have an extra channel for near infrared. A standard CNN is used: stacked conv layers, an fc layer and finally a softmax classifier. ReLUs are used in the conv layers. The fc layer used here is actually a 1000 hidden unit denoising auto-encoder. Has a post-processing step: pixel-by-pixel classifications are likely to involve noise, e.g., a single pixel classified as an industrial area in the middle of a field ("salt and pepper" misclassifications). SLIC and other averaging has been used here. Interestingly, the paper mentions using DBSCAN - a form of density clustering - to solve this.
-
Semantic Segmentation of Small Objects and Modeling of Uncertainty in Urban Remote Sensing Images Using Deep Convolutional Neural Networks - Uses 9cm ultra high res. data from ISPRS. They use the common F1 score to eval results.
-
First Experience with Sentinel-2 Data for Crop and Tree Species Classifications in Central Europe - Random Forests.
-
A Direct and Fast Methodology for Ship Recognition in Sentinel-2 Multispectral Imagery
-
Benchmarking Deep Learning Frameworks for the Classification of Very High Resolution Satellite Multispectral Data - uses the SAT-4 and SAT-6 airborne datasets.
-
Forecasting Vegetation Health at High Spatial Resolution - Github - Tool to produce short-term forecasts of vegetation health at high spatial resolution, using open source software and NASA satellite data that are global in coverage.
-
Automatic Building Extraction in Aerial Scenes Using Convolutional Networks - Describes a nice labelling technique: "Signed Distance Transform". +ve inside region, 0 at boundary and -ve outside. As described here.
2015
-
Marker-Controlled Watershed-Based Segmentation of Multiresolution Remote Sensing Images
-
Unsupervised Deep Feature Extraction for Remote Sensing Image Classification
2014
2013
-
Geographic Object-Based Image Analysis β Towards a new paradigm
-
Hyperspectral Remote Sensing Data Analysis and Future Challenges
Modeling specific
2017
-
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
-
Pyramid Scene Parsing Network. Bleeding edge.
2016
2015
-
Fully Convolutional Networks for Semantic Segmentation - ***
-
U-Net: Convolutional Networks for Biomedical Image Segmentation - Used for 2nd place in DSTL challenge.
Model implementations
There exist a number of implementations of recent satellite image processing techniques. The following Github repositories are a good research starting point:
-
Github - DeepOSM - *** Train a deep learning net with OpenStreetMap features and satellite imagery. Data is a combo of OSM classifications + NAIP data (National Agriculture Imagery Program), US specific. TensorFlow. Predicts if the center 9px of a 64px tile contains road.
-
Github - Deep networks for Earth Observation - Pretrained Caffe SegNet models trained on the ISPRS Vaihingen and ISPRS Potsdam datasets. Uses OSM tiles. SegNet = all pixels. Note, there is also a probabilistic extension to SegNet. SegNet has also been used with GSV imagery.
-
Github - Raster Vision - Deep learning for aerial/satellite imagery. ResNet50, Inception v3, DenseNet121, DenseNet169 models. Uses Keras and Tensorflow.
-
Github - Deep Learning Tutorial for Kaggle Ultrasound Nerve Segmentation competition, using Keras - This monster was used in the 1st place solution to the DSTL Kaggle satellite challenge. (U-Net)
-
Predicting Poverty and Developmental Statistics from Satellite Images using Multi-task Deep Learning - Github - Keras, Google sat. images, India census data.
-
Using convolutional neural networks to pre-classify images for the humanitarian openstreetmap team (HOT & mapgive) - Donated MapGive data + Openstreetmap. Caffe, SegNet.
-
OSMDeepOD - OSM and Deep Learning based Object Detection from Aerial Imagery - Object detection from aerial imagery using open data from OpenStreetMap. - Not sure where images come from. TensorFlow, pretrained Inception V3.
-
ssai-cnn - Semantic Segmentation for Aerial / Satellite Images with Convolutional Neural Networks - uses Massachusetts road & building dataset. Implementation of CNN, based on methods in this paper
-
SpaceNetChallenge models - Github - Winning solutions to spacenet building detection challenge.
-
random forests Java :)
-
Multi-scale context aggregation by dilated convolutions (Tensorflow)
-
Instance-aware semantic segmentation via multi- task network cascades (Caffe)
-
Fully Convolutional Network (FCN) + Gradient Boosting Decision Tree (GBDT) (Keras)
-
FCN (Keras)
Image segmentation model implementations
General
SegNet
Keras implementations:
(Note: these are all the basic version of the approach described in the paper.)
- segnet 95 stars.
- keras-segnet 65 stars.
- [SegNet-Basic](https://github.com/0bserver07/Keras-SegNet-Basic) 30 stars
- image-segmentation-keras 10 stars. Also some others (U-Net, FCN, etc.)
- keras-SegNet 1 star
U-Net
Keras implementations:
- ultrasound-nerve-segmentation 516 stars. From Kaggle.
- another ultrasound-nerve-segmentation 100 stars. Kaggle.
- u-net 90 stars.
- ZF_UNET_224_Pretrained_Model 61 stars. *** used for 2nd place in DSTL challenge.
- image-segmentation-keras 10 stars. Same as above.
- mobile-semantic-segmentation 6 stars.
DeepLab
Note, there seems to be no Keras implementation. Tensorflow implementations:
- tensorflow-deeplab-resnet 480 stars.
Dilated Convolutions
- segmentation_keras 153 stars.
PSPNet (Pyramid Scene Parsing Network)
Bleeding edge. More details here
- PSPNet-Keras-tensorflow 67 stars.
Competitive data science
Lots of interesting reading and projects.
DSTL - Kaggle
Understanding the Amazon from space - Kaggle
- 11 convolutional neural networks - This is pretty nice, details winners' end-to-end architecture.
Current competitions
- 13/11/17 - 28/02/18 SpaceNet challenge round 3 - TopCoder. Extract navigable road networks that represent roads from satellite images.
Tools/utilities
-
SpaceNetChallenge utilities - Github - Packages intended to assist in the preprocessing of the SpaceNet satellite imagery data corpus into a format that is consumable by machine learning algorithms.
-
Sentinelsat - Github - Utility to search and download Copernicus Sentinel satellite images.
Visualisations/existing tools
- Global Forest Watch: An online, global, near-real time forest monitoring tool - Github - Really nice site. Also has an option to overlay Sentinel 2 images from specific dates.
Projects using Sentinel data
- sen2agri - Preparing Sentinel-2 exploitation for agriculture monitoring. 2017 paper/demo: Production of a national crop type map over the Czech Republic using Sentinel-1/2 images
Blogs
-
The DownLinQ - Nice blog from CosmiQ.
-
2016: A review of deep learning models for semantic segmentation
-
2017: http://blog.qure.ai/notes/semantic-segmentation-deep-learning-review