Friday, 18 February 2011
5.0.0.ALPHA3 is here with more goodies!
For those who can’t wait to get hold of the newest Infinispan features, we’ve just released the third alpha of the 5.0 series, 5.0.0.ALPHA3.
Apart from including the fixes in 4.2.1.CR2, it lets users prefetch data in parallel thanks to the new getAsync() method. Why is this useful? Imagine the following scenario: a cache is configured with distribution, and an app needs the values associated with k1, k2 and k3 in order to do its job. In the worst case, assume that all of these keys are located on remote nodes. Previously, before 5.0.0.ALPHA3, you’d have written something like this:
Value v1 = cache.get("k1");
Value v2 = cache.get("k2");
Value v3 = cache.get("k3");
The problem with this code is that the get() calls execute sequentially: the second get() must wait for the first to retrieve its data from the remote node before it can run. This is clearly suboptimal. From 5.0.0.ALPHA3 onwards, you can do this instead:
NotifyingFuture<Value> f1 = cache.getAsync("k1");
NotifyingFuture<Value> f2 = cache.getAsync("k2");
NotifyingFuture<Value> f3 = cache.getAsync("k3");
...
Value v1 = f1.get(); // blocks until value has been returned
Value v2 = f2.get();
Value v3 = f3.get();
The clear advantage here is that, via getAsync(), the get requests are fired off in parallel and do not wait for each other. Later, when a value is actually needed, simply call get() on the corresponding future.
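If you would rather not block at all, a NotifyingFuture can also notify you when the value arrives. The following is a minimal sketch assuming the FutureListener and attachListener() API in org.infinispan.util.concurrent (plus java.util.concurrent.Future); check the 5.0 javadocs for the exact signatures:
NotifyingFuture<Value> f1 = cache.getAsync("k1");
f1.attachListener(new FutureListener<Value>() {
   public void futureDone(Future<Value> future) {
      try {
         Value v1 = future.get(); // does not block here: the future is already done
         // ... use v1 ...
      } catch (Exception e) {
         // the remote get failed or was interrupted
      }
   }
});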
There are other API enhancements too, such as RemoteCache now implementing size() and isEmpty(), and the ability to delete AtomicMap instances via AtomicMapLookup. On top of that, it’s worth highlighting that we’ve taken your feedback on board with regards to programmatic configuration: from 5.0.0.ALPHA3 onwards, Infinispan provides a more fluent way of being configured. For example:
GlobalConfiguration gc = new GlobalConfiguration();
GlobalJmxStatisticsConfig jmxStatistics = gc.configureGlobalJmxStatistics();
jmxStatistics.allowDuplicateDomains(true).enabled(true);
TransportConfig transport = gc.configureTransport();
transport.clusterName("blah").machineId("id").rackId("rack").strictPeerToPeer(true);
You can find more examples of this new configuration here. Note that this fluent API is likely to evolve further before 5.0.0 goes final, as shown in this forum discussion, so please keep your feedback coming! Finally, details of what’s fixed are in the release notes. As always, please use the user forums to report back, grab the release here, and enjoy.
Cheers, Galder
Tags: asynchronous alpha configuration
Wednesday, 12 August 2009
Coalesced Asynchronous Cache Store
As we prepare for Infinispan’s beta release, let me introduce one of the recently implemented enhancements, which improves the way the current asynchronous (or write-behind) cache store works.
Until now, the asynchronous cache store simply queued modifications while a set of threads applied them. However, if the queue contained N put operations on the same key, those threads would apply each and every modification one after the other, which is not very efficient.
Thanks to the excellent feedback from the Infinispan community, we’ve now improved the asynchronous cache store so that it coalesces changes and only applies the latest modification on a key. So, if N put operations on the same key are queued, only the last modification will be applied to the cache store.
Internally, the concurrent queueing mechanism performs in O(1) by keeping a map with the latest value for each key. This map effectively acts as the queue, but there’s no need for an actual queue: we only care about making sure the latest values are stored, so ordering is not important.
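Conceptually, the coalescing can be pictured as a concurrent map that keeps only the latest pending value per key, which the store threads then drain. This is only an illustrative sketch (the names CoalescingQueue, enqueue, drainTo and Store are made up for the example), not the actual Infinispan code:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch only -- not the real Infinispan implementation.
public class CoalescingQueue {

   // One pending value per key; a newer put simply overwrites the older one in O(1).
   private final ConcurrentMap<Object, Object> pending =
         new ConcurrentHashMap<Object, Object>();

   // Called for every write going through the asynchronous store.
   public void enqueue(Object key, Object value) {
      pending.put(key, value); // coalesces: only the latest value per key survives
   }

   // Called by the asynchronous store threads; Store stands in for the real cache store.
   public void drainTo(Store store) {
      for (Object key : pending.keySet()) {
         Object value = pending.remove(key);
         if (value != null)
            store.write(key, value); // only the latest modification gets persisted
      }
   }

   public interface Store {
      void write(Object key, Object value);
   }
}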
Note that the threads applying these modifications start working as soon as any changes are available, so to actually see changes coalesced, the system needs to be relatively busy, or a lot of changes to the same key need to happen within a short period of time. We could have made these threads work periodically, i.e. every X seconds, but that would let modifications pile up and increase the delay between a cache operation and the corresponding cache store update, raising the chance of the cache store being outdated.
Finally, no configuration changes are required to make the asynchronous cache store work in this coalesced way; it behaves like this out of the box. Example:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:infinispan:config:4.0">
   <namedCache name="persistentCache">
      <loaders passivation="false" shared="false" preload="true">
         <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
            <properties>
               <property name="location" value="/tmp"/>
            </properties>
            <async enabled="true" threadPoolSize="10"/>
         </loader>
      </loaders>
   </namedCache>
</infinispan>
Tags: cache stores asynchronous
Thursday, 14 May 2009
Alpha3 ready to rumble!
So I’ve just tagged and cut Infinispan 4.0.0.ALPHA3. (Why are we starting with release 4.0.0? Read our FAQs!)
As I mentioned recently, I’ve implemented an uber-cool new asynchronous API for the cache and am dying to show it off/get some feedback on it. Yes, Alpha3 contains the async APIs. Why is this so important? Because it allows you to get the best of both worlds when it comes to synchronous and asynchronous network communications, and harnesses the parallelism and scalability you’d naturally expect from a halfway-decent data grid. And, as far as I know, we’re the first distributed cache - open or closed source - to offer such an API.
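To give a flavour of what this looks like in code, here is a rough sketch (putAsync() is one of the new async methods; see the 4.0.0.ALPHA3 javadocs for the full set and exact return types):
Cache<String, String> cache = cacheManager.getCache();

// Returns immediately; replication to other nodes happens in the background.
Future<String> f = cache.putAsync("key", "value");
// ... do other useful work while the put is in flight ...
f.get(); // block only when you need to be sure the put has completed across the cluster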
The release also contains other fixes, performance and stability improvements, and better javadocs throughout. One step closer to a full release.
Enjoy the release - available on our download page - and please do post feedback on the Infinispan User Forums.
Cheers, Manik
Tags: release asynchronous