Fast and flexible image augmentation library. Paper about the library: https://www.mdpi.com/2078-2489/11/2/125

Albumentations


Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to increase the quality of trained models. The purpose of image augmentation is to create new training samples from the existing data.

Here is an example of how you can apply some pixel-level augmentations from Albumentations to create new images from the original one:

(figure: a parrot photo alongside several augmented variants)

Why Albumentations

  • Albumentations supports all common computer vision tasks such as classification, semantic segmentation, instance segmentation, object detection, and pose estimation.
  • The library provides a simple unified API to work with all data types: images (RGB images, grayscale images, multispectral images), segmentation masks, bounding boxes, and keypoints.
  • The library contains more than 70 different augmentations to generate new training samples from the existing data.
  • Albumentations is fast. We benchmark each new release to ensure that augmentations provide maximum speed.
  • It works with popular deep learning frameworks such as PyTorch and TensorFlow. Albumentations is also part of the PyTorch ecosystem.
  • Written by experts. The authors have experience both working on production computer vision systems and participating in competitive machine learning. Many core team members are Kaggle Masters and Grandmasters.
  • The library is widely used in industry, deep learning research, machine learning competitions, and open source projects.


Authors

Alexander Buslaev – Computer Vision Engineer at Mapbox | Kaggle Master

Alex Parinov | Kaggle Master

Vladimir I. Iglovikov – Staff Engineer at Lyft Level5 | Kaggle Grandmaster

Eugene Khvedchenya – Computer Vision Research Engineer at Piñata Farms | Kaggle Grandmaster

Mikhail Druzhinin | Kaggle Expert

Installation

Albumentations requires Python 3.7 or higher. To install the latest version from PyPI:

pip install -U albumentations

Other installation options are described in the documentation.

Documentation

The full documentation is available at https://albumentations.ai/docs/.

A simple example

import albumentations as A
import cv2

# Declare an augmentation pipeline
transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# Read an image with OpenCV and convert it to the RGB colorspace
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
transformed = transform(image=image)
transformed_image = transformed["image"]

Getting started

I am new to image augmentation

Please start with the introduction articles about why image augmentation is important and how it helps to build better models.

I want to use Albumentations for a specific task such as classification or segmentation

If you want to use Albumentations for a specific task such as classification, segmentation, or object detection, refer to the set of articles that give an in-depth description of each task. We also have a list of examples of applying Albumentations to different use cases.

I want to know how to use Albumentations with deep learning frameworks

We have examples of using Albumentations along with PyTorch and TensorFlow.

I want to explore augmentations and see Albumentations in action

Check the online demo of the library. With it, you can apply augmentations to different images and see the result. Also, we have a list of all available augmentations and their targets.

Who is using Albumentations

See also:

List of augmentations

Pixel-level transforms

Pixel-level transforms change only the input image and leave any additional targets such as masks, bounding boxes, and keypoints unchanged.

Spatial-level transforms

Spatial-level transforms change both the input image and any additional targets such as masks, bounding boxes, and keypoints. The following table shows which additional targets are supported by each transform.

Transform Image Masks BBoxes Keypoints
Affine ✓ ✓ ✓ ✓
BBoxSafeRandomCrop ✓ ✓ ✓
CenterCrop ✓ ✓ ✓ ✓
CoarseDropout ✓ ✓ ✓
Crop ✓ ✓ ✓ ✓
CropAndPad ✓ ✓ ✓ ✓
CropNonEmptyMaskIfExists ✓ ✓ ✓ ✓
ElasticTransform ✓ ✓ ✓
Flip ✓ ✓ ✓ ✓
GridDistortion ✓ ✓ ✓
GridDropout ✓ ✓
HorizontalFlip ✓ ✓ ✓ ✓
Lambda ✓ ✓ ✓ ✓
LongestMaxSize ✓ ✓ ✓ ✓
MaskDropout ✓ ✓
NoOp ✓ ✓ ✓ ✓
OpticalDistortion ✓ ✓ ✓
PadIfNeeded ✓ ✓ ✓ ✓
Perspective ✓ ✓ ✓ ✓
PiecewiseAffine ✓ ✓ ✓ ✓
PixelDropout ✓ ✓ ✓ ✓
RandomCrop ✓ ✓ ✓ ✓
RandomCropFromBorders ✓ ✓ ✓ ✓
RandomCropNearBBox ✓ ✓ ✓ ✓
RandomGridShuffle ✓ ✓ ✓
RandomResizedCrop ✓ ✓ ✓ ✓
RandomRotate90 ✓ ✓ ✓ ✓
RandomScale ✓ ✓ ✓ ✓
RandomSizedBBoxSafeCrop ✓ ✓ ✓
RandomSizedCrop ✓ ✓ ✓ ✓
Resize ✓ ✓ ✓ ✓
Rotate ✓ ✓ ✓ ✓
SafeRotate ✓ ✓ ✓ ✓
ShiftScaleRotate ✓ ✓ ✓ ✓
SmallestMaxSize ✓ ✓ ✓ ✓
Transpose ✓ ✓ ✓ ✓
VerticalFlip ✓ ✓ ✓ ✓

A few more examples of augmentations

Semantic segmentation on the Inria dataset


Medical imaging


Object detection and semantic segmentation on the Mapillary Vistas dataset


Keypoints augmentation

Benchmarking results

To run the benchmark yourself, follow the instructions in benchmark/README.md

Results for running the benchmark on the first 2000 images from the ImageNet validation set using an Intel(R) Xeon(R) Gold 6140 CPU. All outputs are converted to a contiguous NumPy array with the np.uint8 data type. The table shows how many images per second can be processed on a single core; higher is better.

| Transform | albumentations 1.1.0 | imgaug 0.4.0 | torchvision (Pillow-SIMD backend) 0.10.1 | keras 2.6.0 | augmentor 0.2.8 | solt 0.1.9 |
|---|---|---|---|---|---|---|
| HorizontalFlip | 10220 | 2702 | 2517 | 876 | 2528 | 6798 |
| VerticalFlip | 4438 | 2141 | 2151 | 4381 | 2155 | 3659 |
| Rotate | 389 | 283 | 165 | 28 | 60 | 367 |
| ShiftScaleRotate | 669 | 425 | 146 | 29 | - | - |
| Brightness | 2765 | 1124 | 411 | 229 | 408 | 2335 |
| Contrast | 2767 | 1137 | 349 | - | 346 | 2341 |
| BrightnessContrast | 2746 | 629 | 190 | - | 189 | 1196 |
| ShiftRGB | 2758 | 1093 | - | 360 | - | - |
| ShiftHSV | 598 | 259 | 59 | - | - | 144 |
| Gamma | 2849 | - | 388 | - | - | 933 |
| Grayscale | 5219 | 393 | 723 | - | 1082 | 1309 |
| RandomCrop64 | 163550 | 2562 | 50159 | - | 42842 | 22260 |
| PadToSize512 | 3609 | - | 602 | - | - | 3097 |
| Resize512 | 1049 | 611 | 1066 | - | 1041 | 1017 |
| RandomSizedCrop_64_512 | 3224 | 858 | 1660 | - | 1598 | 2675 |
| Posterize | 2789 | - | - | - | - | - |
| Solarize | 2761 | - | - | - | - | - |
| Equalize | 647 | 385 | - | - | 765 | - |
| Multiply | 2659 | 1129 | - | - | - | - |
| MultiplyElementwise | 111 | 200 | - | - | - | - |
| ColorJitter | 351 | 78 | 57 | - | - | - |

Python and library versions: Python 3.9.5 (default, Jun 23 2021, 15:01:51) [GCC 8.3.0], numpy 1.19.5, pillow-simd 7.0.0.post3, opencv-python 4.5.3.56, scikit-image 0.18.3, scipy 1.7.1.

Contributing

To create a pull request to the repository, follow the documentation at https://albumentations.ai/docs/contributing/

Comments

On some systems, in the multi-GPU regime, PyTorch may deadlock the DataLoader if OpenCV was compiled with OpenCL optimizations. Adding the following two lines before importing the library may help. For more details, see pytorch/pytorch#1355.

import cv2

# Disable OpenCV's internal threading and OpenCL to avoid DataLoader deadlocks
cv2.setNumThreads(0)
cv2.ocl.setUseOpenCL(False)

Citing

If you find this library useful for your research, please consider citing Albumentations: Fast and Flexible Image Augmentations:

@Article{info11020125,
    AUTHOR = {Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},
    TITLE = {Albumentations: Fast and Flexible Image Augmentations},
    JOURNAL = {Information},
    VOLUME = {11},
    YEAR = {2020},
    NUMBER = {2},
    ARTICLE-NUMBER = {125},
    URL = {https://www.mdpi.com/2078-2489/11/2/125},
    ISSN = {2078-2489},
    DOI = {10.3390/info11020125}
}