- S3Mock
- Changelog
- Supported S3 operations
- File System Structure
- Usage
- Build & Run
- Contributing
- Licensing
S3Mock
S3Mock is a lightweight server that implements parts of the Amazon S3 API.
It has been created to support local integration testing by reducing infrastructure dependencies.
The S3Mock server can be started as a standalone Docker container, using Testcontainers, JUnit4, JUnit5 and TestNG support, or programmatically.
Changelog
See GitHub releases.
See also the changelog for detailed information about changes in releases and changes planned for future releases.
Supported S3 operations
Of these operations of the Amazon S3 API, all marked as supported are implemented by S3Mock.
File System Structure
S3Mock stores Buckets, Objects, Parts and other data on disk.
This lets users inspect the stored data while the S3Mock is running.
If the config property retainFilesOnExit is set to true, this data will not be deleted when S3Mock is shut down.
While it may be possible to start S3Mock on a root folder from a previous run and have all data available through the S3 API, the structure and contents of the files are not considered Public API, and are subject to change in later releases.
Also, there are no automated test cases for this behaviour.
Root-Folder
S3Mock stores buckets and objects in a root-folder.
This folder is expected to be empty when S3Mock starts. See also FYI above.
/<root-folder>/
Buckets
Each bucket is stored as a folder directly below the root, using the bucket name as created through the S3 API:
/<root-folder>/<bucket-name>/
BucketMetadata is stored in a file in the bucket directory, serialized as JSON.
BucketMetadata contains, among other data, the "key" -> "uuid" dictionary for all objects uploaded to this bucket.
/<root-folder>/<bucket-name>/bucketMetadata.json
Objects
Objects are stored in folders below the bucket they were created in. A folder is created that uses the Object's UUID assigned in the BucketMetadata as a name.
/<root-folder>/<bucket-name>/<uuid>/
Object data is stored below that UUID folder.
Binary data is always stored in a file named binaryData.
/<root-folder>/<bucket-name>/<uuid>/binaryData
Object metadata is serialized as JSON and stored as objectMetadata.json
/<root-folder>/<bucket-name>/<uuid>/objectMetadata.json
Object ACL is serialized as XML and stored as objectAcl.xml
/<root-folder>/<bucket-name>/<uuid>/objectAcl.xml
Multipart Uploads
Multipart Uploads are created in a bucket using object keys and an uploadId.
The object is assigned a UUID within the bucket (stored in BucketMetadata).
The Multipart upload metadata is currently not stored on disk.
The parts folder is created below the object UUID folder and is named with the uploadId:
/<root-folder>/<bucket-name>/<uuid>/<uploadId>/
Each part is stored in the parts folder with the partNo as its name and .part as a suffix.
/<root-folder>/<bucket-name>/<uuid>/<uploadId>/<partNo>.part
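The layout described above can be sketched as small path-building helpers. This is a minimal illustration of the documented structure, not S3Mock's actual implementation; the bucket, UUID and uploadId values used are made up.

```java
import java.nio.file.Path;

// Path-building helpers mirroring the documented on-disk layout.
public class S3MockLayout {
    // /<root-folder>/<bucket-name>/<uuid>/binaryData
    static Path objectDataPath(Path root, String bucket, String uuid) {
        return root.resolve(bucket).resolve(uuid).resolve("binaryData");
    }

    // /<root-folder>/<bucket-name>/<uuid>/objectMetadata.json
    static Path objectMetadataPath(Path root, String bucket, String uuid) {
        return root.resolve(bucket).resolve(uuid).resolve("objectMetadata.json");
    }

    // /<root-folder>/<bucket-name>/<uuid>/<uploadId>/<partNo>.part
    static Path partPath(Path root, String bucket, String uuid, String uploadId, int partNo) {
        return root.resolve(bucket).resolve(uuid).resolve(uploadId).resolve(partNo + ".part");
    }

    public static void main(String[] args) {
        Path root = Path.of("/tmp/s3mock-root");
        // Made-up bucket name and UUID for illustration only.
        System.out.println(objectDataPath(root, "test-bucket", "a1b2c3"));
        System.out.println(partPath(root, "test-bucket", "a1b2c3", "upload-1", 1));
    }
}
```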
Usage
Configuration
The mock can be configured with the following environment variables:
- validKmsKeys: list of KMS Key-Refs that are to be treated as valid.
  - KMS keys must be configured as valid ARNs in the format "arn:aws:kms:region:acct-id:key/key-id", for example "arn:aws:kms:us-east-1:1234567890:key/valid-test-key-id".
  - The list must be comma-separated keys like "arn-1, arn-2".
  - When requesting with KMS encryption, the key ID is passed to the SDK / CLI; in the example above this would be "valid-test-key-id".
  - S3Mock does not implement KMS encryption. If a key ID is passed in a request, S3Mock only validates that the given key was configured during startup and rejects the request if it was not.
- initialBuckets: list of names for buckets that will be available initially.
  - The list must be comma-separated names like "bucketa, bucketb".
- root: the base directory in which to place the temporary files exposed by the mock.
- debug: set to true to enable Spring Boot's debug output.
- trace: set to true to enable Spring Boot's trace output.
- retainFilesOnExit: set to true to let S3Mock keep all files that were created during its lifetime. Default is false: all files are removed when S3Mock shuts down.
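The validKmsKeys behaviour described above can be sketched as follows. This is a minimal illustration only; keyIdOf and isValid are hypothetical helpers, not S3Mock's actual validation code.

```java
import java.util.Arrays;
import java.util.List;

public class KmsKeyCheck {
    // Extract "key-id" from "arn:aws:kms:region:acct-id:key/key-id".
    static String keyIdOf(String arn) {
        return arn.substring(arn.lastIndexOf('/') + 1);
    }

    // Mimics rejecting a request whose key ID was not configured at startup.
    static boolean isValid(List<String> validKmsKeys, String requestedKeyId) {
        return validKmsKeys.stream().anyMatch(arn -> keyIdOf(arn).equals(requestedKeyId));
    }

    public static void main(String[] args) {
        // Comma-separated list, as it would be passed via the validKmsKeys variable.
        List<String> valid = Arrays.asList(
            "arn:aws:kms:us-east-1:1234567890:key/valid-test-key-id".split("\\s*,\\s*"));
        System.out.println(isValid(valid, "valid-test-key-id"));   // true
        System.out.println(isValid(valid, "unknown-key-id"));      // false
    }
}
```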
S3Mock Docker
The S3Mock Docker container is the recommended way to use S3Mock.
It is released to Docker Hub.
The container is lightweight, built on top of the official Alpine Linux image.
If needed, configure memory and CPU limits for the S3Mock Docker container.
The JVM will automatically use half the available memory.
Start using the command-line
Starting on the command-line:
docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock
Port 9090 is for HTTP, port 9191 is for HTTPS.
Example with configuration via environment variables:
docker run -p 9090:9090 -p 9191:9191 -e initialBuckets=test -e debug=true -t adobe/s3mock
Start using the Fabric8 Docker-Maven-Plugin
Our integration tests use the Amazon S3 Client to verify the server functionality against the S3Mock. During the Maven build, the Docker image is started using the docker-maven-plugin and the corresponding ports are passed to the JUnit test through the maven-failsafe-plugin. See BucketV2IT as an example of how it's used in the code.
This way, one can easily switch between calling the S3Mock and the real S3 endpoint, and this doesn't add any additional Java dependencies to the project.
Start using Testcontainers
The S3MockContainer is a Testcontainers implementation that comes pre-configured, exposing the HTTP and HTTPS ports. Environment variables can be set on startup.
The example S3MockContainerJupiterTest demonstrates the usage with JUnit 5. The example S3MockContainerManualTest demonstrates the usage with plain Java.
Testcontainers provides integrations for JUnit 4, JUnit 5 and Spock.
For more information, visit the Testcontainers website.
To use the S3MockContainer, use the following Maven artifact in test scope:
<dependency>
<groupId>com.adobe.testing</groupId>
<artifactId>s3mock-testcontainers</artifactId>
<version>...</version>
<scope>test</scope>
</dependency>
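A sketch of starting the container and pointing an AWS SDK v2 client at it follows. It assumes the s3mock-testcontainers and AWS SDK v2 dependencies are on the test classpath; verify method names such as getHttpEndpoint() against the S3MockContainer version you use, and note that withEnv is plain Testcontainers API used here instead of any S3Mock-specific configuration methods.

```java
import java.net.URI;

import com.adobe.testing.s3mock.testcontainers.S3MockContainer;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class S3MockContainerSketch {
    public static void main(String[] args) {
        try (S3MockContainer s3Mock = new S3MockContainer("latest")
                .withEnv("initialBuckets", "test")) {  // same env vars as the Docker container
            s3Mock.start();

            S3Client s3 = S3Client.builder()
                .endpointOverride(URI.create(s3Mock.getHttpEndpoint()))
                .region(Region.US_EAST_1)                       // any region works for the mock
                .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create("foo", "bar")))  // dummy credentials
                .forcePathStyle(true)                           // avoid virtual-host-style DNS lookups
                .build();

            s3.listBuckets().buckets().forEach(b -> System.out.println(b.name()));
        }
    }
}
```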
S3Mock Java
S3Mock Java libraries are released and published to the Sonatype Maven Repository and subsequently published to the official Maven mirrors.
Using the Java libraries is discouraged; see the explanation below.
Using the Docker image is encouraged to insulate both S3Mock and your application at runtime.
S3Mock is built using Spring Boot. If projects use S3Mock by adding the dependency to their project and starting the S3Mock during a JUnit test, the classpaths of the tested application and of the S3Mock are merged, leading to unpredictable and undesired effects such as class conflicts or dependency version conflicts.
This is especially problematic if the tested application itself is a Spring (Boot) application, as both applications will load configurations based on the availability of certain classes in the classpath, leading to unpredictable runtime behaviour.
This is the opposite of what software engineers are trying to achieve when thoroughly testing code in continuous integration.
S3Mock dependencies are updated regularly; any update could break any number of projects.
See also issues labelled "dependency-problem".
See also the Java section below.
Start using the JUnit4 Rule
The example S3MockRuleTest demonstrates the usage of the S3MockRule, which can be configured through a builder.
To use the JUnit4 Rule, use the following Maven artifact in test scope:
<dependency>
<groupId>com.adobe.testing</groupId>
<artifactId>s3mock-junit4</artifactId>
<version>...</version>
<scope>test</scope>
</dependency>
Start using the JUnit5 Extension
The S3MockExtension can currently be used in two ways:
- Declaratively, using @ExtendWith(S3MockExtension.class) and by injecting a properly configured instance of the AmazonS3 client and/or the started S3MockApplication into the tests. See the examples S3MockExtensionDeclarativeTest (for SDKv1) or S3MockExtensionDeclarativeTest (for SDKv2).
- Programmatically, using @RegisterExtension and by creating and configuring the S3MockExtension using a builder. See the examples S3MockExtensionProgrammaticTest (for SDKv1) or S3MockExtensionProgrammaticTest (for SDKv2).
To use the JUnit5 Extension, use the following Maven artifact in test scope:
<dependency>
<groupId>com.adobe.testing</groupId>
<artifactId>s3mock-junit5</artifactId>
<version>...</version>
<scope>test</scope>
</dependency>
Start using the TestNG Listener
The example S3MockListenerXMLConfigurationTest demonstrates the usage of the S3MockListener, which can be configured as shown in testng.xml. The listener bootstraps the S3Mock application before TestNG execution starts and shuts it down just before the execution terminates. Please refer to IExecutionListener for details.
To use the TestNG Listener, use the following Maven artifact in test scope:
<dependency>
<groupId>com.adobe.testing</groupId>
<artifactId>s3mock-testng</artifactId>
<version>...</version>
<scope>test</scope>
</dependency>
Start programmatically
Include the following dependency and use one of the start
methods in com.adobe.testing.s3mock.S3MockApplication
:
<dependency>
<groupId>com.adobe.testing</groupId>
<artifactId>s3mock</artifactId>
<version>...</version>
</dependency>
Build & Run
To build this project, you need Docker, JDK 17 or higher, and Maven:
./mvnw clean install
If you want to skip the Docker build, pass the optional parameter "skipDocker":
./mvnw clean install -DskipDocker
You can run S3Mock from the sources by either of the following methods:
- Run or debug the class com.adobe.testing.s3mock.S3MockApplication in the IDE.
- Using Docker:
  ./mvnw clean package -pl server -am -DskipTests
  docker run -p 9090:9090 -p 9191:9191 -t adobe/s3mock:latest
- Using the Docker Maven plugin:
  ./mvnw clean package docker:start -pl server -am -DskipTests -Ddocker.follow -Dit.s3mock.port_http=9090 -Dit.s3mock.port_https=9191
  (stop with ctrl-c)
Once the application is started, you can execute the *IT tests from your IDE.
Java
This repo is built with Java 17; the output is currently bytecode-compatible with Java 17.
Kotlin
The Integration Tests are built in Kotlin.
Contributing
Contributions are welcome! Read the Contributing Guide for more information.
Licensing
This project is licensed under the Apache V2 License. See LICENSE for more information.