Image Quality Assessment

This repository provides an implementation of an aesthetic and technical image quality model based on Google's research paper "NIMA: Neural Image Assessment". You can find a quick introduction on their Research Blog.

NIMA consists of two models that aim to predict the aesthetic and technical quality of images, respectively. The models are trained via transfer learning, where ImageNet pre-trained CNNs are used and fine-tuned for the classification task.
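
At its core, each model is an ImageNet-pretrained base CNN whose classification head is replaced by a 10-unit softmax over the rating buckets 1-10. A minimal Keras sketch of this structure (illustrative only, not the repo's exact code; the 0.75 dropout rate follows the NIMA paper):

    from tensorflow.keras.applications import MobileNet
    from tensorflow.keras.layers import Dense, Dropout
    from tensorflow.keras.models import Model

    # ImageNet-pretrained base CNN without its original classification head
    base = MobileNet(weights='imagenet', include_top=False, pooling='avg')
    x = Dropout(0.75)(base.output)            # dropout rate from the NIMA paper
    out = Dense(10, activation='softmax')(x)  # one unit per rating bucket 1-10
    model = Model(base.input, out)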

For more information on how we used NIMA for our specific problem, see our write-ups in two blog posts.

The provided code allows you to use any of the pre-trained models in Keras. We further provide Docker images for local CPU training and remote GPU training on AWS EC2, as well as pre-trained models on the AVA and TID2013 datasets.

Read the full documentation at: https://idealo.github.io/image-quality-assessment/.

Image Quality Assessment is compatible with Python 3.6 and is distributed under the Apache 2.0 license. We welcome all kinds of contributions, especially new model architectures and/or hyperparameter combinations that improve the performance of the currently published models (see Contribute).

Trained models

Predictions from aesthetic model
Predictions from technical model

We provide trained models, for both aesthetic and technical classifications, that use MobileNet as the base CNN. The models and their respective config files are stored under models/MobileNet. They achieve the following performance:

Model                Dataset  EMD    LCC    SRCC
MobileNet aesthetic  AVA      0.071  0.626  0.609
MobileNet technical  TID2013  0.107  0.652  0.675
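
Here EMD is the earth mover's distance between predicted and ground-truth score distributions (lower is better), while LCC and SRCC are the linear and Spearman rank correlation coefficients of the mean scores (higher is better). A sketch of how these metrics can be computed, using random stand-in data in place of real labels and predictions:

    import numpy as np
    from scipy import stats

    def earth_movers_distance(p, q, r=2):
        # CDF-based EMD over the 10 rating buckets; r=2 gives the squared-EMD
        # variant used as the training loss in the NIMA paper
        cdf_diff = np.cumsum(p) - np.cumsum(q)
        return np.mean(np.abs(cdf_diff) ** r) ** (1.0 / r)

    rng = np.random.default_rng(0)
    labels = rng.dirichlet(np.ones(10), size=100)       # stand-in ground-truth distributions
    predictions = rng.dirichlet(np.ones(10), size=100)  # stand-in model outputs

    scores = np.arange(1, 11)
    emd = np.mean([earth_movers_distance(p, q) for p, q in zip(labels, predictions)])
    lcc, _ = stats.pearsonr(labels @ scores, predictions @ scores)
    srcc, _ = stats.spearmanr(labels @ scores, predictions @ scores)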

Getting started

  1. Install jq

  2. Install Docker

  3. Build the Docker image

    docker build -t nima-cpu . -f Dockerfile.cpu

In order to train remotely on AWS EC2:

  1. Install Docker Machine

  2. Install AWS Command Line Interface

Predict

In order to run predictions on an image or batch of images, run the prediction script:

  1. Single image file

    ./predict  \
    --docker-image nima-cpu \
    --base-model-name MobileNet \
    --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
    --image-source $(pwd)/src/tests/test_images/42039.jpg
  2. All image files in a directory

    ./predict  \
    --docker-image nima-cpu \
    --base-model-name MobileNet \
    --weights-file $(pwd)/models/MobileNet/weights_mobilenet_technical_0.11.hdf5 \
    --image-source $(pwd)/src/tests/test_images

Train locally on CPU

  1. Download dataset (see instructions under Datasets)

  2. Run the local training script (e.g. for TID2013 dataset)

    ./train-local \
    --config-file $(pwd)/models/MobileNet/config_technical_cpu.json \
    --samples-file $(pwd)/data/TID2013/tid_labels_train.json \
    --image-dir /path/to/image/dir/local

This will start a training container from the Docker image nima-cpu and create a timestamped train job folder under train_jobs, where the trained model weights and logs will be stored. The --image-dir argument requires the path of the image directory on your local machine.

In order to stop the last launched container run

    CONTAINER_ID=$(docker ps -l -q)
    docker container stop $CONTAINER_ID

In order to stream logs from the last launched container run

    CONTAINER_ID=$(docker ps -l -q)
    docker logs $CONTAINER_ID --follow

Train remotely on AWS EC2

  1. Configure your AWS CLI. Ensure that your account has limits for GPU instances and read/write access to the S3 bucket specified in the config file

    aws configure
  2. Launch EC2 instance with Docker Machine. Choose an Ubuntu AMI based on your region (https://cloud-images.ubuntu.com/locator/ec2/). For example, to launch a p2.xlarge EC2 instance named ec2-p2 run (NB: change region, VPC ID and AMI ID as per your setup)

    docker-machine create --driver amazonec2 \
                          --amazonec2-region eu-west-1 \
                          --amazonec2-ami ami-58d7e821 \
                          --amazonec2-instance-type p2.xlarge \
                          --amazonec2-vpc-id vpc-abc \
                          ec2-p2
  3. ssh into EC2 instance

    docker-machine ssh ec2-p2
  4. Update NVIDIA drivers and install nvidia-docker (see this blog post for more details)

    # update NVIDIA drivers
    sudo add-apt-repository ppa:graphics-drivers/ppa -y
    sudo apt-get update
    sudo apt-get install -y nvidia-375 nvidia-settings nvidia-modprobe
    
    # install nvidia-docker
    wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
    sudo dpkg -i /tmp/nvidia-docker_1.0.1-1_amd64.deb && rm /tmp/nvidia-docker_1.0.1-1_amd64.deb
  5. Download dataset to EC2 instance (see instructions under Datasets). We recommend saving the AMI with the downloaded data for future use.

  6. Run the remote EC2 training script (e.g. for AVA dataset)

    ./train-ec2 \
    --docker-machine ec2-p2 \
    --config-file $(pwd)/models/MobileNet/config_aesthetic_gpu.json \
    --samples-file $(pwd)/data/AVA/ava_labels_train.json \
    --image-dir /path/to/image/dir/remote

The training progress will be streamed to your terminal. After the training has finished, the train outputs (logs and best model weights) will be stored on S3 in a timestamped folder. The S3 output bucket can be specified in the config file. The --image-dir argument requires the path of the image directory on your remote instance.

Contribute

We welcome all kinds of contributions and will publish the performances from new models in the performance table under Trained models.

For example, to train a new aesthetic NIMA model based on InceptionV3 ImageNet weights, you just have to change the base_model_name parameter in the config file models/MobileNet/config_aesthetic_gpu.json to "InceptionV3". You can also control all major hyperparameters in the config file, like learning rate, batch size, or dropout rate.
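
A hypothetical sketch of the relevant entries (only base_model_name is confirmed by this README; the other keys merely illustrate the kinds of hyperparameters mentioned and may be named differently in the actual config file):

    # hypothetical excerpt; check models/MobileNet/config_aesthetic_gpu.json for the real keys
    config = {
        "base_model_name": "InceptionV3",  # was "MobileNet"
        "learning_rate": 0.001,            # illustrative hyperparameter
        "batch_size": 96,                  # illustrative hyperparameter
        "dropout_rate": 0.75,              # illustrative hyperparameter
    }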

See the Contribution guide for more details.

Datasets

This project uses two datasets to train the NIMA model:

  1. AVA used for aesthetic ratings (data)
  2. TID2013 used for technical ratings

For training on AWS EC2 we recommend building a custom AMI with the AVA images stored on it. This has proven much more viable than copying the entire dataset from S3 to the instance for each training job.

Label files

The train script requires JSON label files in the following format:

[
  {
    "image_id": "231893",
    "label": [2,8,19,36,76,52,16,9,3,2]
  },
  {
    "image_id": "746672",
    "label": [1,2,7,20,38,52,20,11,1,3]
  },
  ...
]

The label for each image is the normalized or un-normalized frequency distribution of ratings 1-10.
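
For illustration, normalizing a raw label and computing the corresponding mean score (the expectation of the rating under the normalized distribution):

    import numpy as np

    label = np.array([2, 8, 19, 36, 76, 52, 16, 9, 3, 2])  # raw rating counts for one image
    p = label / label.sum()                                 # normalized frequency distribution
    mean_score = float(np.arange(1, 11) @ p)                # expected rating, between 1 and 10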

For the AVA dataset these frequency distributions are given in the raw data files. For the TID2013 dataset we inferred the normalized frequency distribution, i.e. probability distribution, by finding the maximum entropy distribution that satisfies the mean score. The code to generate the TID2013 labels can be found under data/TID2013/get_labels.py.
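
The repo's actual implementation lives in data/TID2013/get_labels.py; purely as an illustration of the idea, the maximum entropy distribution on the ratings 1-10 with a fixed mean has the form p_i proportional to exp(lam * i), where lam is chosen to match the mean score:

    import numpy as np
    from scipy.optimize import brentq

    def max_entropy_distribution(mean_score, scores=np.arange(1, 11)):
        # max-entropy pmf on {1..10} with a given mean: p_i proportional to exp(lam * i)
        def mean_gap(lam):
            w = np.exp(lam * scores)
            return (w * scores).sum() / w.sum() - mean_score
        lam = brentq(mean_gap, -10.0, 10.0)  # solve for the lam that matches the mean
        w = np.exp(lam * scores)
        return w / w.sum()

    p = max_entropy_distribution(5.2)  # e.g. for a TID2013 image with mean score 5.2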

For both datasets we provide train and test set label files stored under

data/AVA/ava_labels_train.json
data/AVA/ava_labels_test.json

and

data/TID2013/tid2013_labels_train.json
data/TID2013/tid2013_labels_test.json

For the AVA dataset we randomly assigned 90% of samples to the train set and 10% to the test set. Throughout training, a 5% validation set is split from the train set to evaluate the training performance after each epoch. For the TID2013 dataset we split the train/test sets by reference image, to ensure that no reference image, or any of its distortions, appears in both the train and test sets.
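
A minimal sketch of such a group-wise split (assuming, per TID2013's naming scheme, that the reference image is encoded as the prefix before the first underscore of the image id):

    import random

    # toy stand-in for the parsed label file entries
    samples = [{'image_id': f'i{r:02d}_{d:02d}_{lvl}'}
               for r in range(1, 6) for d in range(1, 4) for lvl in range(1, 3)]

    def reference_id(image_id):
        return image_id.split('_')[0]

    refs = sorted({reference_id(s['image_id']) for s in samples})
    random.Random(42).shuffle(refs)
    test_refs = set(refs[:len(refs) // 5])  # hold out 20% of reference images (illustrative)
    train = [s for s in samples if reference_id(s['image_id']) not in test_refs]
    test = [s for s in samples if reference_id(s['image_id']) in test_refs]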

Serving NIMA with TensorFlow Serving

TensorFlow versions of both the technical and aesthetic MobileNet models are provided, along with the script to generate them from the original Keras files, under the contrib/tf_serving directory.

There is also an already configured TFS Dockerfile that you can use.

To get predictions from the aesthetic or technical model:

  1. Build the NIMA TFS Docker image

    docker build -t tfs_nima contrib/tf_serving

  2. Run a NIMA TFS container

    docker run -d --name tfs_nima -p 8500:8500 tfs_nima

  3. Install Python dependencies to run the TF Serving sample client

    virtualenv -p python3 contrib/tf_serving/venv_tfs_nima
    source contrib/tf_serving/venv_tfs_nima/bin/activate
    pip install -r contrib/tf_serving/requirements.txt

  4. Get predictions from the aesthetic or technical model by running the sample client

    python -m contrib.tf_serving.tfs_sample_client --image-path src/tests/test_images/42039.jpg --model-name mobilenet_aesthetic
    python -m contrib.tf_serving.tfs_sample_client --image-path src/tests/test_images/42039.jpg --model-name mobilenet_technical
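
If you prefer to call the served model directly instead of using the sample client, here is a hedged sketch of a generic TensorFlow Serving gRPC request; the input tensor name, signature name, and preprocessing are assumptions, so check the exported SavedModel's signature (e.g. with saved_model_cli) before relying on them:

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'mobilenet_aesthetic'
    request.model_spec.signature_name = 'serving_default'       # assumed signature name
    image = np.random.rand(1, 224, 224, 3).astype(np.float32)   # stands in for a preprocessed image
    request.inputs['input_image'].CopyFrom(tf.make_tensor_proto(image))  # 'input_image' is hypothetical

    result = stub.Predict(request, 10.0)  # 10 second timeout
    print(result)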
    

Cite this work

Please cite Image Quality Assessment in your publications if this is useful for your research. Here is an example BibTeX entry:

@misc{idealods2018imagequalityassessment,
  title={Image Quality Assessment},
  author={Christopher Lennan and Hao Nguyen and Dat Tran},
  year={2018},
  howpublished={\url{https://github.com/idealo/image-quality-assessment}},
}

Maintainers

Copyright

See LICENSE for details.
