gradle-docker-compose-plugin

Simplifies usage of Docker Compose for local development and integration testing in a Gradle environment.

The composeUp task starts the application and waits until all containers become healthy and all exposed TCP ports are open (i.e. until the application is ready). It reads the assigned host and ports of the particular containers and stores them in the dockerCompose.servicesInfos property.

The composeDown task stops the application and removes the containers, but only if stopContainers is set to true (the default value).

The composeDownForced task stops the application and removes the containers regardless of the stopContainers setting.

The composePull task pulls and optionally builds the images required by the application. This is useful, for example, on a CI platform that caches Docker images to decrease build times.

The composeBuild task builds the services of the application.

The composePush task pushes the images of the services to their respective registry/repository.

The composeLogs task stores the logs from all containers to files in the containerLogToDir directory.
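
The isRequiredBy helper shown in the quick start below hooks these lifecycle tasks up for you (plus exposes connection info); if you prefer explicit wiring, here is a minimal sketch, assuming a custom integrationTest task already exists in your build:

tasks.named('integrationTest').configure {
    dependsOn 'composeUp'      // start the services and wait for them first
    finalizedBy 'composeDown'  // stop (and by default remove) them afterwards
}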

Quick start

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.avast.gradle:gradle-docker-compose-plugin:$versionHere"
    }
}

apply plugin: 'docker-compose'

// Or use the Gradle Plugin Portal (then you don't have to add the buildscript dependency above):
// plugins {
//  id 'com.avast.gradle.docker-compose' version "$versionHere"
// }

dockerCompose.isRequiredBy(test)
  • docker-compose up is executed in the project directory, so it uses the docker-compose.yml file there.
  • If the provided task (test in the example above) executes a new process, then environment variables and Java system properties are passed to it (see the example after this list).
    • The names of the environment variables are ${serviceName}_HOST and ${serviceName}_TCP_${exposedPort} (e.g. WEB_HOST and WEB_TCP_80).
    • The names of the Java system properties are ${serviceName}.host and ${serviceName}.tcp.${exposedPort} (e.g. web.host and web.tcp.80).
    • If the service is scaled then the serviceName gets a _1, _2... suffix (e.g. WEB_1_HOST and WEB_1_TCP_80, web_1.host and web_1.tcp.80).
      • Please note that in Docker Compose v2, the suffix contains - instead of _.
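
For illustration, the forked test process could read these values as follows; the service name web and container port 80 are assumptions made for this example:

// hypothetical test code executed in the forked test process;
// the service name 'web' and container port 80 are assumptions
def webHost = System.getenv('WEB_HOST')                // host where the 'web' service is reachable
def webPort = System.getenv('WEB_TCP_80') as Integer   // host port mapped to container port 80
def sameHost = System.getProperty('web.host')          // the same host, exposed as a Java system property
def baseUrl = "http://${webHost}:${webPort}"           // e.g. used to configure an HTTP client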

Why use Docker Compose?

  1. I want to be able to run my application on my computer, and it must work for my colleagues as well. Just execute docker-compose up and I'm done - e.g. the database is running.
  2. I want to be able to test my application on my computer - I don't want to wait until my application is deployed to a dev/testing environment and the acceptance/end-to-end tests are executed there. I want to run these tests on my computer - which means executing docker-compose up before the tests.

Why this plugin?

You could easily ensure that docker-compose up is called before your tests, but there are a few gotchas that this plugin solves:

  1. If you execute docker-compose up -d (detached) then this command returns immediately, and your application is most likely not yet able to serve requests. This plugin waits until all containers become healthy and all exposed TCP ports of all services are open.
    • If waiting for the healthy state or for open TCP ports times out (the default is 15 minutes), then the log of the related service is printed.
  2. It's recommended not to assign fixed values to exposed ports in docker-compose.yml (i.e. 8888:80) because this can cause port collisions on integration servers. If you don't assign a fixed value to an exposed port (use just 80) then the port is exposed as a random free port. This plugin reads the assigned ports (and even the IP addresses of the containers) and stores them in the dockerCompose.servicesInfos map, as sketched below.
  3. There are minor differences when using Linux containers on Linux, Windows and Mac, and when using Windows containers. This plugin handles these differences for you, so you have the same experience in all environments.
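
A minimal sketch of reading such a randomly assigned port, assuming a service named web that exposes container port 80 (the Usage section below shows the full picture):

test.doFirst {
    // a service named 'web' exposing container port 80 is assumed
    def web = dockerCompose.servicesInfos.web.firstContainer
    println "web is listening on ${web.host}:${web.ports[80]}"
}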

Usage

The plugin must be applied on a project that contains the docker-compose.yml file. It assumes that Docker Engine and Docker Compose are installed and available in PATH.

Starting from plugin version 0.10.0, Gradle 4.9 or newer is required (because it uses Task Configuration Avoidance API).

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.avast.gradle:gradle-docker-compose-plugin:$versionHere"
    }
}

apply plugin: 'docker-compose'

dockerCompose.isRequiredBy(test) // hooks 'dependsOn composeUp' and 'finalizedBy composeDown', and exposes environment variables and system properties (if possible)

dockerCompose {
    useComposeFiles = ['docker-compose.yml', 'docker-compose.prod.yml'] // like 'docker-compose -f <file>'; default is empty
    startedServices = ['web'] // list of services to execute when calling 'docker-compose up' or 'docker-compose pull' (when not specified, all services are executed)
    scale = [${serviceName1}: 5, ${serviceName2}: 2] // Pass docker compose --scale option like 'docker-compose up --scale serviceName1=5 --scale serviceName2=2'
    forceRecreate = false // pass '--force-recreate' and '--renew-anon-volumes' when calling 'docker-compose up' when set to 'true'
    noRecreate = false // pass '--no-recreate' when calling 'docker-compose up' when set to 'true'
    buildBeforeUp = true // performs 'docker-compose build' before calling the 'up' command; default is true
    buildBeforePull = true // performs 'docker-compose build' before calling the 'pull' command; default is true
    ignorePullFailure = false // when set to true, pass '--ignore-pull-failures' to 'docker-compose pull'
    ignorePushFailure = false // when set to true, pass '--ignore-push-failures' to 'docker-compose push'
    pushServices = [] // which services should be pushed; if not specified, the composePush task pushes all services defined in the compose file (default behaviour)
    buildAdditionalArgs = ['--force-rm']
    pullAdditionalArgs = ['--ignore-pull-failures']
    upAdditionalArgs = ['--no-deps']
    downAdditionalArgs = ['--some-switch']
    composeAdditionalArgs = ['--context', 'remote', '--verbose', "--log-level", "DEBUG"] // for adding more [options] in docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]

    waitForTcpPorts = true // turns on/off the waiting for exposed TCP ports opening; default is true
    waitForTcpPortsTimeout = java.time.Duration.ofMinutes(15) // how long to wait until all exposed TCP ports become open; default is 15 minutes
    waitAfterTcpProbeFailure = java.time.Duration.ofSeconds(1) // how long to sleep before the next attempt to check if a TCP port is open; default is 1 second
    tcpPortsToIgnoreWhenWaiting = [1234] // list of TCP ports that will be ignored when waiting for exposed TCP ports to open; default: empty list
    waitForHealthyStateTimeout = java.time.Duration.ofMinutes(15) // how long to wait until a container becomes healthy; default is 15 minutes
    waitAfterHealthyStateProbeFailure = java.time.Duration.ofSeconds(5) // how long to sleep before the next attempt to check the healthy status; default is 5 seconds
    checkContainersRunning = true // turns on/off checking whether the containers are running or restarting (while waiting for open TCP ports and the healthy state); default is true

    captureContainersOutput = false // if true, prints output of all containers to Gradle output - very useful for debugging; default is false
    captureContainersOutputToFile = project.file('/path/to/logFile') // sends output of all containers to a log file
    captureContainersOutputToFiles = project.file('/path/to/directory') // sends output of each service to a dedicated log file in the specified directory, e.g. 'web.log' for a service named 'web'
    composeLogToFile = project.file('build/my-logs.txt') // redirect output of composeUp and composeDown tasks to this file; default is null (output is not redirected)
    containerLogToDir = project.file('build/logs') // directory where composeLogs task stores output of the containers; default: build/containers-logs
    includeDependencies = false // calculates the service dependencies of startedServices and includes them when gathering logs or removing containers; default is false

    stopContainers = true // doesn't call `docker-compose down` if set to false - see the Reconnecting section below; default is true
    removeContainers = true // default is true
    retainContainersOnStartupFailure = false // if set to true, skips running ComposeDownForced task when ComposeUp fails - useful for troubleshooting; default is false
    removeImages = com.avast.gradle.dockercompose.RemoveImages.None // Other accepted values are All and Local
    removeVolumes = true // default is true
    removeOrphans = false // removes containers for services not defined in the Compose file; default is false
    
    projectName = 'my-project' // allows setting a custom docker-compose project name (defaults to a stable name derived from the absolute path of the project and the nested settings name); set to null to use the Docker Compose default (the directory name)
    projectNamePrefix = 'my_prefix_' // allows setting a custom prefix for the docker-compose project name; the final project name has the nested configuration name appended
    executable = '/path/to/docker-compose' // allows setting the path of the docker-compose executable (useful if it's not on PATH)
    dockerExecutable = '/path/to/docker' // allows setting the path of the docker executable (useful if it's not on PATH)
    dockerComposeWorkingDirectory = project.file('/path/where/docker-compose/is/invoked/from')
    dockerComposeStopTimeout = java.time.Duration.ofSeconds(20) // time before docker-compose sends SIGTERM to the running containers after the composeDown task has been started
    environment.put 'BACKEND_ADDRESS', '192.168.1.100' // environment variables to be used when calling 'docker-compose', e.g. for substitution in compose file
}

test.doFirst {
    // exposes "${serviceName}_HOST" and "${serviceName}_TCP_${exposedPort}" environment variables
    // for example exposes "WEB_HOST" and "WEB_TCP_80" environment variables for service named `web` with exposed port `80`
    // if the service is scaled using the scale option, environment variables will be exposed for each service instance, like "WEB_1_HOST", "WEB_1_TCP_80", "WEB_2_HOST", "WEB_2_TCP_80" and so on
    dockerCompose.exposeAsEnvironment(test)
    // exposes "${serviceName}.host" and "${serviceName}.tcp.${exposedPort}" system properties
    // for example exposes "web.host" and "web.tcp.80" system properties for service named `web` with exposed port `80`
    // if the service is scaled using the scale option, system properties will be exposed for each service instance, like "web_1.host", "web_1.tcp.80", "web_2.host", "web_2.tcp.80" and so on
    dockerCompose.exposeAsSystemProperties(test)
    // get information about container of service `web` (declared in docker-compose.yml)
    def webInfo = dockerCompose.servicesInfos.web.firstContainer
    // in case the scale option is used, dockerCompose.servicesInfos.web.containerInfos contains information about all running containers of the service;
    // a particular container can be retrieved by iterating the values of the containerInfos map (the key is the service instance name, e.g. 'web_1') or directly by that name:
    def webInfo1 = dockerCompose.servicesInfos.web.'web_1'
    // pass host and exposed TCP port 80 as custom-named Java System properties
    systemProperty 'myweb.host', webInfo.host
    systemProperty 'myweb.port', webInfo.ports[80]
    // it's possible to read information about exposed UDP ports using webInfo.udpPorts[1234]
}

Nested configurations

It is possible to create a new set of ComposeUp/ComposeBuild/ComposePull/ComposeDown/ComposeDownForced/ComposePush tasks using the following syntax:

Groovy
dockerCompose {
    // settings as usual
    myNested {
        useComposeFiles = ['docker-compose-for-integration-tests.yml']
        isRequiredBy(project.tasks.myTask)
    }
}
  • It creates myNestedComposeUp, myNestedComposeBuild, myNestedComposePull, myNestedComposeDown, myNestedComposeDownForced and myNestedComposePush tasks.
  • It's possible to use all the settings as in the main dockerCompose block.
  • Configuration of the nested settings defaults to the main dockerCompose settings (declared before the nested settings), except for the following properties: projectName, startedServices, useComposeFiles, scale, captureContainersOutputToFile, captureContainersOutputToFiles, composeLogToFile, containerLogToDir, pushServices (see the sketch below).
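
To illustrate the inheritance rule, a minimal sketch; the nested name integrationTests and the chosen settings are made up for this example:

dockerCompose {
    stopContainers = false                    // inherited by the nested configuration below
    useComposeFiles = ['docker-compose.yml']  // NOT inherited (listed in the exceptions above)
    integrationTests {
        // stopContainers is false here as well; useComposeFiles falls back to
        // its default (empty) unless it is set explicitly:
        useComposeFiles = ['docker-compose.integration.yml']
    }
}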

When exposing service info from the myNestedComposeUp task into your task, use the following syntax:

Groovy
test.doFirst {
    dockerCompose.myNested.exposeAsEnvironment(test)
}
Kotlin
test.doFirst {
    dockerCompose.nested("myNested").exposeAsEnvironment(project.tasks.named("test").get())
}

It's also possible to use this simplified syntax:

dockerCompose {
    isRequiredByMyTask 'docker-compose-for-integration-tests.yml'
}

Reconnecting

If you set stopContainers to false then the plugin automatically tries to reconnect to the containers from the previous run instead of calling docker-compose up again. Thanks to this, the startup can be very fast.

It's very handy in scenarios when you iterate quickly and e.g. don't want to wait for Postgres to start again and again.

Because you don't want to commit this change to your VCS, you can take advantage of this init.gradle initialization script (in short, copy this file to your USER_HOME/.gradle/ directory).
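
Another way to keep the switch out of your regular builds is a project property toggle; a minimal sketch, where the property name fastReconnect is made up for this example:

dockerCompose {
    // run the build with -PfastReconnect to keep the containers running between
    // builds and let the plugin reconnect to them; the property name is hypothetical
    stopContainers = !project.hasProperty('fastReconnect')
}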

Usage from Kotlin DSL

This plugin can also be used from the Kotlin DSL; see the example:

import com.avast.gradle.dockercompose.ComposeExtension
apply(plugin = "docker-compose")
configure<ComposeExtension> {
    includeDependencies.set(true)
    createNested("local").apply {
        setProjectName("foo")
        environment.putAll(mapOf("TAGS" to "feature-test,local"))
        startedServices.set(listOf("foo-api", "foo-integration"))
        upAdditionalArgs.set(listOf("--no-deps"))
    }
}

Tips

  • You can call dockerCompose.isRequiredBy(anyTask) for any task, for example for your custom integrationTest task (see the sketch after this list).
  • If some Dockerfile needs an artifact generated by Gradle then you can declare this dependency in a standard way, like composeUp.dependsOn project(':my-app').distTar
  • All properties in dockerCompose have meaningful default values, so you don't have to touch them. If you are interested, you can look at ComposeSettings.groovy for reference.
  • dockerCompose.servicesInfos contains information about the running containers, so you must access this property after the composeUp task has finished. The doFirst block of your test task is therefore a perfect place to access it.
  • The plugin honours a docker-compose.override.yml file, but only when no files are specified with useComposeFiles (conforming to the command-line behaviour).
  • Check ContainerInfo.groovy to see what you can know about running containers.
  • You can determine the Docker host in your Gradle build (i.e. docker-machine start) and set the DOCKER_HOST environment variable for compose to use: dockerCompose { environment.put 'DOCKER_HOST', '192.168.64.9' }
  • If the services executed by docker-compose are running on a specific host (different from the Docker host, as in CircleCI 2.0), then the SERVICES_HOST environment variable can be used. This value will be used as the hostname where the services are expected to be listening.
  • If you need to troubleshoot a failing ComposeUp task, set retainContainersOnStartupFailure to prevent the containers from being forcibly deleted. This does not override removeContainers, so an explicit composeDown run is not affected.
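
A minimal sketch of the first tip, assuming a custom integrationTest task of type Test (the task name is made up for this example):

def integrationTest = tasks.register('integrationTest', Test) {
    description = 'Runs tests against services started by Docker Compose.'
}
// hooks 'dependsOn composeUp' and 'finalizedBy composeDown' for the task
dockerCompose.isRequiredBy(integrationTest.get())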
