What's so cool about an asynchronous API?
Inspired by some thoughts from a recent conversation with JBoss Messaging’s Tim Fox, I’ve decided to go ahead and implement a new, asynchronous API for Infinispan.
To sum things up, this new API - a set of additional methods on Cache - allows for asynchronous versions of put(), putIfAbsent(), putAll(), remove(), replace(), clear() and their various overloaded forms. Unimaginatively named putAsync(), putIfAbsentAsync(), etc., these new methods return a Future rather than the expected return type. E.g.,
V put(K key, V value);
Future<V> putAsync(K key, V value);
boolean remove(K key, V value);
Future<Boolean> removeAsync(K key, V value);
void clear();
Future<Void> clearAsync();
// ... etc ...
You guessed it, these methods do not block. They return immediately, and how cool is that! If you care about return values - or indeed simply want to wait until the operation completes - you do a Future.get(), which will block until the call completes. Why is this useful? Mainly because, in the case of clustered caches, it allows you to get the best of both worlds when it comes to synchronous and asynchronous mode transports.
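For example, here is a minimal sketch of actually using that return value - assuming, as in the snippet further down, a getCache() helper that hands back a started Cache<String, String>:
Cache<String, String> cache = getCache();
// fire off the write; the calling thread is not blocked
Future<String> f = cache.putAsync("key", "newValue");
// ... do something else useful while the call propagates ...
// block only when the result is actually needed; as with a plain put(),
// get() returns whatever value was previously mapped to the key
String previousValue = f.get();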
Synchronous transports are normally recommended because of the guarantees they offer - the caller always knows that a call has properly propagated across the network, and is aware of any potential exceptions. However, asynchronous transports give you greater parallelism. You can start on the next operation even before the first one has made it across the network. But this is at a cost: losing out on the knowledge that a call has safely completed.
With this powerful new API though, you can have your cake and eat it too. Consider:
Cache<String, String> cache = getCache();
Future<String> f1 = cache.putAsync("k1", "v1");
Future<String> f2 = cache.putAsync("k2", "v2");
Future<String> f3 = cache.putAsync("k3", "v3");
f1.get();
f2.get();
f3.get();
The network calls - possibly the most expensive part of a clustered write - for the 3 puts can now happen in parallel. This is even more useful if the cache is distributed and k1, k2 and k3 map to different nodes in the cluster: the processing required to handle each put can happen simultaneously, on different remote nodes. All the same, when we call Future.get(), we block until each call has completed successfully, and we see any exceptions thrown along the way. With this approach, the elapsed time taken to process all 3 puts should - theoretically, anyway - only be as slow as the single slowest put().
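And since get() here is just the standard java.util.concurrent.Future contract (as the signatures above suggest), you can also bound how long you wait and react to failures explicitly. A rough sketch - nothing in it is Infinispan-specific:
try {
   // wait at most 500 ms for the first put to complete
   f1.get(500, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
   // the call is still in flight - keep waiting, or move on
} catch (ExecutionException e) {
   // the put failed; e.getCause() carries the underlying exception
} catch (InterruptedException e) {
   Thread.currentThread().interrupt();
}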
This new API is now in Infinispan’s trunk and yours to enjoy. It will be one of the main features of my next release, which should be out in a few days. Please do give it a spin - I’d love to hear your thoughts and experiences.
Get it, Use it, Ask us!
We’re hard at work on new features, improvements and fixes, so watch this space for more announcements! Please download and test the latest release.
The source code is hosted on GitHub. If you need to report a bug or request a new feature, look for a similar one on our JIRA issues tracker. If you don’t find any, create a new issue.
If you have questions, are experiencing a bug or want advice on using Infinispan, you can use GitHub discussions. We will do our best to answer you as soon as we can.
The Infinispan community uses Zulip for real-time communications. Join us using either a web browser or a dedicated application on the Infinispan chat.