java-dataloader
This small and simple utility library is a pure Java 8 port of Facebook DataLoader.
It can serve as an integral part of your application's data layer to provide a consistent API over various back-ends and reduce message communication overhead through batching and caching.
An important use case for java-dataloader is improving the efficiency of GraphQL query execution. GraphQL fields are resolved independently and, with a true graph of objects, you may be fetching the same object many times. A naive implementation of GraphQL data fetchers can easily lead to the dreaded "n+1" fetch problem.
Most of the code is ported directly from Facebook's reference implementation, with one IMPORTANT adaptation to make it work for Java 8 (more on this below).
Before reading on, be sure to take a short dive into the original documentation provided by Lee Byron (@leebyron) and Nicholas Schrock (@schrockn) from Facebook, the creators of the original data loader.
Table of contents
- Features
- Getting started!
- Examples
- Other information sources
- Contributing
- Acknowledgements
- Licensing
Features
java-dataloader is a feature-complete port of the Facebook reference implementation with one major difference. These features are:
- Simple, intuitive API, using generics and fluent coding
- Define batch load function with lambda expression
- Schedule a load request in queue for batching
- Add load requests from anywhere in code
- Request returns a CompletableFuture<V> of the requested value
- Can create multiple requests at once
- Caches load requests, so data is only fetched once
- Can clear individual cache keys, so data is re-fetched on next batch queue dispatch
- Can prime the cache with key/values, to avoid data being fetched needlessly
- Can configure cache key function with lambda expression to extract cache key from complex data loader key types
- Individual batch futures complete / resolve as batch is processed
- Results are ordered according to insertion order of load requests
- Deals with partial errors when a batch future fails
- Can disable batching and/or caching in configuration
- Can supply your own CacheMap<K, V> implementations
- Can supply your own ValueCache<K, V> implementations
- Has very high test coverage
Getting started!
Installing
Gradle users configure the java-dataloader dependency in build.gradle:

repositories {
    jcenter()
}

dependencies {
    compile 'com.graphql-java:java-dataloader:3.1.0'
}
Building
To build from source use the Gradle wrapper:
./gradlew clean build
Examples
A DataLoader object requires a BatchLoader function that is responsible for loading a promise of values given a list of keys:
BatchLoader<Long, User> userBatchLoader = new BatchLoader<Long, User>() {
    @Override
    public CompletionStage<List<User>> load(List<Long> userIds) {
        return CompletableFuture.supplyAsync(() -> {
            return userManager.loadUsersById(userIds);
        });
    }
};
DataLoader<Long, User> userLoader = DataLoaderFactory.newDataLoader(userBatchLoader);
You can then use it to load values, which will be CompletableFuture promises to values:
CompletableFuture<User> load1 = userLoader.load(1L);
or you can use it to compose future computations as follows. The key requirement is that you call dataloader.dispatch() or its variant dataloader.dispatchAndJoin() at some point in order to make the underlying calls happen to the batch loader. In this version of data loader, this does not happen automatically. More on this in Manual dispatching.
userLoader.load(1L)
        .thenAccept(user -> {
            System.out.println("user = " + user);
            userLoader.load(user.getInvitedByID())
                    .thenAccept(invitedBy -> {
                        System.out.println("invitedBy = " + invitedBy);
                    });
        });

userLoader.load(2L)
        .thenAccept(user -> {
            System.out.println("user = " + user);
            userLoader.load(user.getInvitedByID())
                    .thenAccept(invitedBy -> {
                        System.out.println("invitedBy = " + invitedBy);
                    });
        });

userLoader.dispatchAndJoin();
As stated on the original Facebook project:
A naive application may have issued four round-trips to a backend for the required information, but with DataLoader this application will make at most two.
DataLoader allows you to decouple unrelated parts of your application without sacrificing the performance of batch data-loading. While the loader presents an API that loads individual values, all concurrent requests will be coalesced and presented to your batch loading function. This allows your application to safely distribute data fetching requirements throughout your application and maintain minimal outgoing data requests.
In the example above, the first call to dispatch will cause the batched user keys (1 and 2) to be fired at the BatchLoader function to load 2 users.

Since each thenAccept callback made more calls to userLoader to get the "user they have invited", another 2 user keys are given to the BatchLoader function for them.

In this case userLoader.dispatchAndJoin() is used to make a dispatch call, wait for it (aka join it), see if the data loader has more batched entries (which it does), and then repeat this until the data loader's internal queue of keys is empty. At this point we have made 2 batched calls instead of the naive 4 calls we might have made if we did not "batch" the calls to load data.
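As a side note, dispatch() itself returns a promise of the values in that batch, which you can observe directly; a minimal sketch:

CompletableFuture<List<User>> batchPromise = userLoader.dispatch();
batchPromise.thenAccept(users ->
        System.out.println("batch of " + users.size() + " users loaded"));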
Batching requires batched backing APIs
You will notice in our BatchLoader example that the backing service had the ability to get a list of users given a list of user ids in one call.
public CompletionStage<List<User>> load(List<Long> userIds) {
    return CompletableFuture.supplyAsync(() -> {
        return userManager.loadUsersById(userIds);
    });
}
This is an important consideration. By using dataloader you have batched up the requests for N keys into a list of keys that can be retrieved at one time.

If you don't have batched backing services, then you can't be as efficient as possible, as you will have to make N calls for each key.
BatchLoader<Long, User> lessEfficientUserBatchLoader = new BatchLoader<Long, User>() {
    @Override
    public CompletionStage<List<User>> load(List<Long> userIds) {
        return CompletableFuture.supplyAsync(() -> {
            //
            // notice how it makes N calls to load by single user id out of the batch of N keys
            //
            return userIds.stream()
                    .map(id -> userManager.loadUserById(id))
                    .collect(Collectors.toList());
        });
    }
};
That said, with key caching turned on (the default), it will still be more efficient using dataloader than without it.
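To illustrate the caching point, a minimal sketch using the userLoader from earlier: with caching on (the default), repeated loads of the same key share the same future, so the key is only fetched once.

CompletableFuture<User> firstLoad = userLoader.load(1L);
CompletableFuture<User> secondLoad = userLoader.load(1L);
// both loads share the same cached future, so the batch loader
// will only be given key 1 once on dispatch
userLoader.dispatch();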
Calling the batch loader function with call context environment
Often there is a need to call the batch loader function with some sort of call context environment, such as the calling user's security credentials or the database connection parameters.

You can do this by implementing a org.dataloader.BatchLoaderContextProvider and using one of the batch loading interfaces such as org.dataloader.BatchLoaderWithContext.

It will be given a org.dataloader.BatchLoaderEnvironment parameter and it can then ask it for the context object.
DataLoaderOptions options = DataLoaderOptions.newOptions()
        .setBatchLoaderContextProvider(() -> SecurityCtx.getCallingUserCtx());

BatchLoaderWithContext<String, String> batchLoader = new BatchLoaderWithContext<String, String>() {
    @Override
    public CompletionStage<List<String>> load(List<String> keys, BatchLoaderEnvironment environment) {
        SecurityCtx callCtx = environment.getContext();
        return callDatabaseForResults(callCtx, keys);
    }
};

DataLoader<String, String> loader = DataLoaderFactory.newDataLoader(batchLoader, options);
The batch loading code will now receive this environment object, and it can use the context to, for example, connect to other systems.

You can also pass in context objects per load call. These will be captured and passed to the batch loader function. You can gain access to them as a map by key or as the original list of context objects.
DataLoaderOptions options = DataLoaderOptions.newOptions()
        .setBatchLoaderContextProvider(() -> SecurityCtx.getCallingUserCtx());

BatchLoaderWithContext<String, String> batchLoader = new BatchLoaderWithContext<String, String>() {
    @Override
    public CompletionStage<List<String>> load(List<String> keys, BatchLoaderEnvironment environment) {
        SecurityCtx callCtx = environment.getContext();
        //
        // this is the load context objects in map form by key
        // in this case [ keyA : contextForA, keyB : contextForB ]
        //
        Map<Object, Object> keyContexts = environment.getKeyContexts();
        //
        // this is the load context in list form
        // in this case [ contextForA, contextForB ]
        //
        List<Object> keyContextsList = environment.getKeyContextsList();
        return callDatabaseForResults(callCtx, keys);
    }
};

DataLoader<String, String> loader = DataLoaderFactory.newDataLoader(batchLoader, options);
loader.load("keyA", "contextForA");
loader.load("keyB", "contextForB");
Returning a Map of results from your batch loader
Often there is not a 1:1 mapping of your batch loaded keys to the values returned. For example, let's assume you want to load users from a database; you could probably use a query that looks like this:

SELECT * FROM User WHERE id IN (keys)

Given, say, 10 user id keys you might only get 7 results back. This can be more naturally represented in a map than in an ordered list of values from the batch loader function.

You can use org.dataloader.MappedBatchLoader for this purpose.

When the map is processed by the DataLoader code, any keys that are missing in the map will be replaced with null values. The semantic that the number of DataLoader.load requests is matched with an equal number of values is kept.

The keys provided MUST be first class keys since they will be used to examine the returned map and create the list of results, with nulls filling in for missing values.
MappedBatchLoaderWithContext<Long, User> mapBatchLoader = new MappedBatchLoaderWithContext<Long, User>() {
    @Override
    public CompletionStage<Map<Long, User>> load(Set<Long> userIds, BatchLoaderEnvironment environment) {
        SecurityCtx callCtx = environment.getContext();
        return CompletableFuture.supplyAsync(() -> userManager.loadMapOfUsersByIds(callCtx, userIds));
    }
};

DataLoader<Long, User> userLoader = DataLoaderFactory.newMappedDataLoader(mapBatchLoader);
// ...
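A minimal sketch of the null-fill semantics, assuming key 999 is absent from the backing store:

userLoader.load(999L).thenAccept(user -> {
    // completes with null because key 999 was missing from the returned map
    System.out.println("user = " + user);
});
userLoader.dispatch();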
Error object is not a thing in a type safe Java world
In the reference JS implementation, if the batch loader returns an Error object for a value then the load() promise is rejected with that error. This allows fine-grained (per object in the list) error handling. If I ask for keys A, B and C, and B errors out, then the promise for B can contain a specific error.

This is not quite as loose in a Java implementation, as Java is a type safe language.

A batch loader function is defined as BatchLoader<K, V>, meaning for a key of type K it returns a value of type V.

It can't just return some Exception as an object of type V. Type safety matters.
However, you can use the Try
data type which can encapsulate a computation that succeeded or returned an exception.
Try<String> tryS = Try.tryCall(() -> {
    if (rollDice()) {
        return "OK";
    } else {
        throw new RuntimeException("Bang");
    }
});

if (tryS.isSuccess()) {
    System.out.println("It worked : " + tryS.get());
} else {
    System.out.println("It failed with exception : " + tryS.getThrowable());
}
DataLoader supports this type, and you can use this form to create a batch loader that returns a list of Try objects, some of which may have succeeded and some of which may have failed. From that, the data loader can infer the right behavior in terms of the load(x) promise.
DataLoader<String, User> dataLoader = DataLoaderFactory.newDataLoaderWithTry(new BatchLoader<String, Try<User>>() {
    @Override
    public CompletionStage<List<Try<User>>> load(List<String> keys) {
        return CompletableFuture.supplyAsync(() -> {
            List<Try<User>> users = new ArrayList<>();
            for (String key : keys) {
                Try<User> userTry = loadUser(key);
                users.add(userTry);
            }
            return users;
        });
    }
});
In the above example, if one of the Try objects represents a failure, then its load() promise will complete exceptionally and you can react to that in a type safe manner.
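For example, a minimal sketch of reacting to such a failure (assuming "badKey" is a key whose Try held an exception):

dataLoader.load("badKey").whenComplete((user, throwable) -> {
    if (throwable != null) {
        // the Try for this key was a failure, so the promise completed exceptionally
        System.out.println("load failed : " + throwable.getMessage());
    } else {
        System.out.println("user = " + user);
    }
});
dataLoader.dispatch();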
Caching
DataLoader has a two-tiered caching system in place.

The first cache is represented by the interface org.dataloader.CacheMap. It will cache CompletableFutures by key, and hence future load(key) calls will be given the same future and hence the same value.

This cache can only work local to the JVM, since it caches CompletableFutures which cannot be serialised across a network, say.

The second level cache is a value cache represented by the interface org.dataloader.ValueCache. By default, this is not enabled and is a no-op. The value cache uses an async API pattern to encapsulate the idea that the value cache could be in a remote place such as REDIS or Memcached.
Custom future caches
The default future cache behind DataLoader is an in-memory HashMap. There is no expiry on this, and it lives for as long as the data loader lives.

However, you can create your own custom future cache and supply it to the data loader on construction via the org.dataloader.CacheMap interface.
MyCustomCache customCache = new MyCustomCache();
DataLoaderOptions options = DataLoaderOptions.newOptions().setCacheMap(customCache);
DataLoaderFactory.newDataLoader(userBatchLoader, options);
You could choose to use one of the fancy cache implementations from Guava or Caffeine and wrap it in a CacheMap
wrapper ready
for data loader. They can do fancy things like time eviction and efficient LRU caching.
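For example, a rough sketch of a Caffeine-backed future cache follows, not a definitive implementation: it assumes the CacheMap interface exposes containsKey/get/set/delete/clear, so check the actual interface for any further methods you may need to implement.

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.dataloader.CacheMap;

import java.time.Duration;
import java.util.concurrent.CompletableFuture;

public class CaffeineCacheMap<K, V> implements CacheMap<K, V> {
    // cached futures are evicted 1 minute after being written
    private final Cache<K, CompletableFuture<V>> cache = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofMinutes(1))
            .build();

    @Override
    public boolean containsKey(K key) {
        return cache.getIfPresent(key) != null;
    }

    @Override
    public CompletableFuture<V> get(K key) {
        return cache.getIfPresent(key);
    }

    @Override
    public CacheMap<K, V> set(K key, CompletableFuture<V> value) {
        cache.put(key, value);
        return this;
    }

    @Override
    public CacheMap<K, V> delete(K key) {
        cache.invalidate(key);
        return this;
    }

    @Override
    public CacheMap<K, V> clear() {
        cache.invalidateAll();
        return this;
    }
}

Such a cache could then be supplied via DataLoaderOptions.newOptions().setCacheMap(new CaffeineCacheMap<>()) as shown above.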
As stated above, a custom org.dataloader.CacheMap is a local cache of CompletableFutures to values, not values per se.

If you want to externally cache values then you need to use the org.dataloader.ValueCache interface.
Custom value caches
The org.dataloader.ValueCache
allows you to use an external cache.
The API of ValueCache has been designed to be asynchronous because it is expected that the value cache could be outside your JVM. It uses CompletableFutures to get and set values into the cache, which may involve a network call and hence exceptional failures to get or set values.

The ValueCache API is batch oriented; if you have a backing cache that can do batch cache fetches (such as REDIS) then you can use the ValueCache.getValues() call directly. However, if you don't have such a backing cache, then the default implementation will break apart the batch of cache values into individual requests to ValueCache.getValue() for you.
This library does not ship with any implementations of ValueCache
because it does not want to have
production dependencies on external cache libraries, but you can easily write your own.
The tests have an example based on Caffeine.
Disabling caching
In certain uncommon cases, a DataLoader which does not cache may be desirable.
DataLoaderFactory.newDataLoader(userBatchLoader, DataLoaderOptions.newOptions().setCachingEnabled(false));
Calling the above will ensure that every call to .load()
will produce a new promise, and requested keys will not be saved in memory.
However, when the memoization cache is disabled, your batch function will receive a list of keys which may contain duplicates! Each key will be associated with each call to .load(). Your batch loader MUST provide a value for each instance of the requested key, as per the contract:
userDataLoader.load("A");
userDataLoader.load("B");
userDataLoader.load("A");
userDataLoader.dispatch();
// will result in keys to the batch loader with [ "A", "B", "A" ]
More complex cache behavior can be achieved by calling .clear()
or .clearAll()
rather than disabling the cache completely.
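For instance, a minimal sketch of selectively re-fetching one key while keeping the cache on:

// evict the cached future for "A" so the next load enqueues it again
userDataLoader.clear("A");
userDataLoader.load("A");
userDataLoader.dispatch(); // the batch loader is called with [ "A" ] once more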
Caching errors
If a batch load fails (that is, a batch function returns a rejected CompletionStage), then the requested values will not be cached.
However, if a batch function returns a Try
or Throwable
instance for an individual value, then that will be cached to avoid frequently loading
the same problem object.
In some circumstances you may wish to clear the cache for these individual problems:
userDataLoader.load("r2d2").whenComplete((user, throwable) -> {
if (throwable != null) {
userDataLoader.clear("r2dr");
throwable.printStackTrace();
} else {
processUser(user);
}
});
Statistics on what is happening
DataLoader keeps statistics on what is happening. It can tell you the number of objects asked for, the cache hit count, the number of objects asked for via batching and so on.

Knowing how your data loading behaves is important for understanding how efficiently you are serving the data via this pattern.
Statistics statistics = userDataLoader.getStatistics();

System.out.println(format("load : %d", statistics.getLoadCount()));
System.out.println(format("batch load: %d", statistics.getBatchLoadCount()));
System.out.println(format("cache hit: %d", statistics.getCacheHitCount()));
System.out.println(format("cache hit ratio: %f", statistics.getCacheHitRatio()));
DataLoaderRegistry
can also roll up the statistics for all data loaders inside it.
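A minimal sketch of rolling that up, assuming loaders are registered by name:

DataLoaderRegistry registry = new DataLoaderRegistry();
registry.register("users", userDataLoader);
// combined statistics across all registered data loaders
Statistics combined = registry.getStatistics();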
You can configure the statistics collector used when you build the data loader
DataLoaderOptions options = DataLoaderOptions.newOptions()
        .setStatisticsCollector(() -> new ThreadLocalStatisticsCollector());
DataLoader<String, User> userDataLoader = DataLoaderFactory.newDataLoader(userBatchLoader, options);
Which collector you use is up to you. The library ships with the following: SimpleStatisticsCollector, ThreadLocalStatisticsCollector, DelegatingStatisticsCollector and NoOpStatisticsCollector.
The scope of a data loader is important
If you are serving web requests then the data can be specific to the user requesting it. If you have user specific data then you will not want to cache data meant for user A and then later give it to user B in a subsequent request.

The scope of your DataLoader instances is important. You will want to create them per web request to ensure data is only cached within that web request and no more.

If your data can be shared across web requests then use a custom org.dataloader.ValueCache to keep values in a common place.

Data loaders are stateful components that contain promises (with context) that likely share the same affinity as the request.
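A minimal sketch of this scoping, assuming a hypothetical per-request handler method and WebRequest type:

void handleWebRequest(WebRequest request) {
    // a fresh loader per request means cached user data never leaks across requests
    DataLoader<Long, User> userLoader = DataLoaderFactory.newDataLoader(userBatchLoader);
    // ... use userLoader for the life of this request only
}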
Manual dispatching
The original Facebook DataLoader was written in Javascript for NodeJS.
NodeJS is single-threaded in nature, but simulates asynchronous logic by invoking functions on separate threads in an event loop, as explained in this post on StackOverflow.
NodeJS generates so-called 'ticks' in which queued functions are dispatched for execution, and Facebook DataLoader uses the nextTick() function in NodeJS to automatically dequeue load requests and send them to the batch execution function for processing.
Here there is an IMPORTANT DIFFERENCE compared to how java-dataloader operates!!

In NodeJS the batch preparation will not affect the asynchronous processing behaviour in any way. It will just prepare batches in 'spare time' as it were.

This is different in Java, as you will actually delay the execution of your load requests until the moment you make a call to dataLoader.dispatch().
Does this make Java DataLoader
any less useful than the reference implementation? We would argue this is not the case,
and there are also gains to this different mode of operation:
- In contrast to the NodeJS implementation you as developer are in full control of when batches are dispatched
- You can attach any logic that determines when a dispatch takes place
- You still retain all other features, full caching support and batching (e.g. to optimize message bus traffic, GraphQL query execution time, etc.)
However, with batch execution control comes responsibility! If you forget to make the call to dispatch()
then the futures
in the load request queue will never be batched, and thus will never complete! So be careful when crafting your loader designs.
Scheduled Dispatching
ScheduledDataLoaderRegistry is a registry that allows for dispatching to be done on a schedule. It contains a predicate that is evaluated (per data loader contained within) when dispatchAll is invoked.

If that predicate is true, it will make a dispatch call on the data loader, otherwise it will schedule a task to perform that check again. Once the predicate evaluates to true, it will not reschedule, and another call to dispatchAll must be made.

This allows you to do things like "dispatch ONLY if the queue depth is > 10 deep or more than 200 millis have passed since it was last dispatched".
DispatchPredicate depthOrTimePredicate = DispatchPredicate
        .dispatchIfDepthGreaterThan(10)
        .or(DispatchPredicate.dispatchIfLongerThan(Duration.ofMillis(200)));

ScheduledDataLoaderRegistry registry = ScheduledDataLoaderRegistry.newScheduledRegistry()
        .dispatchPredicate(depthOrTimePredicate)
        .schedule(Duration.ofMillis(10))
        .register("users", userDataLoader)
        .build();
The above acts as a kind of minimum batch depth with a time-based overload: it won't dispatch while the loader depth is less than or equal to 10, but once 200ms have passed since the last dispatch it will dispatch regardless.
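You then call dispatchAll() on the registry as usual to kick things off; a minimal sketch:

// loaders whose predicate is not yet true will be re-checked on the 10ms schedule
registry.dispatchAll();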
Other information sources
- Facebook DataLoader Github repo
- Facebook DataLoader code walkthrough on YouTube
- Using DataLoader and GraphQL to batch requests
Contributing
All your feedback and help to improve this project is very welcome. Please create issues for your bugs, ideas and enhancement requests, or better yet, contribute directly by creating a PR.
When reporting an issue, please add a detailed instruction, and if possible a code snippet or test that can be used as a reproducer of your problem.
When creating a pull request, please adhere to the current coding style where possible, and create tests with your code so it keeps providing an excellent test coverage level. PRs without tests may not be accepted unless they only deal with minor changes.
Acknowledgements
This library was originally written for use within a VertX world and it used the vertx-core Future classes to implement itself. All the heavy lifting has been done by this project: vertx-dataloader, including the extensive testing (which itself came from Facebook).
This particular port was done to reduce the dependency on Vertx and to write a pure Java 8 implementation with no dependencies and also to use the more normative Java CompletableFuture.
vertx-core is not a lightweight library by any means so having a pure Java 8 implementation is very desirable.
This library is entirely inspired by the great works of Lee Byron and Nicholas Schrock from Facebook whom we would like to thank, and especially @leebyron for taking the time and effort to provide 100% coverage on the codebase. The original set of tests were also ported.
Licensing
This project is licensed under the Apache License v2.0.

Copyright © 2016 Arnold Schrijver, 2017 Brad Baker and other contributors