Tuesday, 09 June 2009

Blogger and syntax highlighting of code

If you write a lot of tech articles on Blogger and frequently use code examples, it can be frustrating that Blogger has no built-in support for syntax highlighting. After some unsatisfying initial experiments with my own styles for code snippets, I found this excellent article:

http://abhisanoujam.blogspot.com/2008/12/blogger-syntax-highlighting.html

In no time flat, I converted all of my previous blog posts, and they are now properly highlighted, complete with line numbers. :-) Lovely stuff.

Cheers Manik

Tuesday, 02 June 2009

Pimp your desktop with an Infinispan wallpaper!

The boys and girls on JBoss.org’s creative team have come up with a kick-ass desktop and iPhone wallpaper for Infinispan. Check these out, pimp your desktop today!

Cheers Manik

Tuesday, 02 June 2009

Another alpha for Infinispan

Yes, Infinispan 4.0.0.ALPHA4 is ready for a sound thrashing.

What’s new? Galder Zamarreño has ripped out the marshalling framework Infinispan "inherited" from JBoss Cache and replaced it with JBoss Marshalling. This has made the marshalling code much leaner, more modular and more testable, and comes with a nifty performance boost too. He has also overcome issues with object stream caching (see my blog on the subject) by using JBoss Marshalling streams, which can be reset. This provides a very handy performance boost for short-lived streams. (See ISPN-42, ISPN-84)

Mircea Markus has put together a bunch of migration scripts to migrate your JBoss Cache 3.x configuration to Infinispan. More migration scripts are on their way. (See ISPN-53, ISPN-54)

Vladimir Blagojevic has contributed the new lock() API - which allows for explicit, eager cluster-wide locks. (See ISPN-48)

Heiko Rupp has contributed the early days of a JOPR plugin, allowing Infinispan instances to be managed by JBoss AS 5.1.0’s embedded console as well as other environments. Read his guide to managing Infinispan with JOPR for more details.

And I’ve implemented some enhancements to the Future API. Rather than returning plain Futures, the xxxAsync() methods now return a NotifyingFuture. NotifyingFuture extends Future, adding the ability to attach a listener that is called when the Future completes. Note that completion could mean successful completion, an exception or cancellation, so the listener should check the state of the Future - typically by calling get() - on notification. For example:

NotifyingFuture<Void> f = cache.clearAsync().attachListener(new FutureListener<Void>() {
   public void futureDone(Future<Void> f) {
     try {
       f.get(); // throws if the operation failed or was cancelled
       System.out.println("clear operation succeeded");
     } catch (Exception e) {
       System.out.println("clear operation did not complete: " + e);
     }
   }
});

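For readers who want to play with the notification pattern outside Infinispan, it can be sketched with plain JDK classes. The NotifyingTask class below is a hypothetical stand-in for NotifyingFuture, not Infinispan code:

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

// Minimal sketch of a future that notifies a listener on completion.
// NotifyingTask is a hypothetical stand-in, not Infinispan's NotifyingFuture.
public class NotifyingTask<V> extends FutureTask<V> {
    private volatile Consumer<Future<V>> listener;

    public NotifyingTask(Callable<V> c) { super(c); }

    // Register a listener before the task is submitted. (A real
    // implementation would also handle a listener attached after completion.)
    public NotifyingTask<V> attachListener(Consumer<Future<V>> l) {
        this.listener = l;
        return this;
    }

    @Override
    protected void done() {
        // done() fires on success, failure and cancellation alike,
        // so the listener must inspect the Future itself via get().
        Consumer<Future<V>> l = listener;
        if (l != null) l.accept(this);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        NotifyingTask<String> t = new NotifyingTask<>(() -> "cleared")
            .attachListener(f -> {
                try {
                    System.out.println("operation returned: " + f.get());
                } catch (Exception e) {
                    System.out.println("operation failed: " + e);
                }
            });
        pool.execute(t);
        pool.shutdown();
    }
}
```

The key point the sketch illustrates is that the completion hook fires for every outcome, which is why the listener checks the Future's state via get() rather than assuming success.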
The full change log for this release is available on JIRA. Download this release, and provide feedback on the Infinispan user forums.

Onward to Beta1!

Enjoy, Manik

Thursday, 14 May 2009

Alpha3 ready to rumble!

So I’ve just tagged and cut Infinispan 4.0.0.ALPHA3. (Why are we starting with release 4.0.0? Read our FAQs!)

As I mentioned recently, I’ve implemented an uber-cool new asynchronous API for the cache and am dying to show it off/get some feedback on it. Yes, Alpha3 contains the async APIs. Why is this so important? Because it allows you to get the best of both worlds when it comes to synchronous and asynchronous network communications, and harnesses the parallelism and scalability you’d naturally expect from a halfway-decent data grid. And, as far as I know, we’re the first distributed cache - open or closed source - to offer such an API.

The release also contains other fixes, performance and stability improvements, and better javadocs throughout. One step closer to a full release.

Enjoy the release - available on our download page - and please do post feedback on the Infinispan User Forums.

Cheers Manik

Wednesday, 13 May 2009

What's so cool about an asynchronous API?

Inspired by some thoughts from a recent conversation with JBoss Messaging’s Tim Fox, I’ve decided to go ahead and implement a new, asynchronous API for Infinispan.

To sum things up, this new API - additional methods on Cache - allows for asynchronous versions of put(), putIfAbsent(), putAll(), remove(), replace(), clear() and their various overloaded forms. Unimaginatively called putAsync(), putIfAbsentAsync(), etc., these new methods return a Future rather than the expected return type. E.g.,

V put(K key, V value);
Future<V> putAsync(K key, V value);

boolean remove(K key, V value);
Future<Boolean> removeAsync(K key, V value);

void clear();
Future<Void> clearAsync();

// ... etc ...

You guessed it, these methods do not block. They return immediately, and how cool is that! If you care about return values - or indeed simply want to wait until the operation completes - you call Future.get(), which will block until the call completes. Why is this useful? Mainly because, in the case of clustered caches, it allows you to get the best of both worlds when it comes to synchronous and asynchronous mode transports.

Synchronous transports are normally recommended because of the guarantees they offer - the caller always knows that a call has properly propagated across the network, and is aware of any potential exceptions. However, asynchronous transports give you greater parallelism. You can start on the next operation even before the first one has made it across the network. But this is at a cost: losing out on the knowledge that a call has safely completed.

With this powerful new API though, you can have your cake and eat it too. Consider:

Cache<String, String> cache = getCache();
Future<String> f1 = cache.putAsync(k1, v1);
Future<String> f2 = cache.putAsync(k2, v2);
Future<String> f3 = cache.putAsync(k3, v3);

The network calls involved in the 3 put operations - possibly the most expensive part of a clustered write - can now happen in parallel. This is even more useful if the cache is distributed and k1, k2 and k3 map to different nodes in the cluster: the processing required to handle the put operations on the remote nodes can happen simultaneously, on different nodes. And all the same, when calling Future.get(), we block until the calls have completed successfully, and we are aware of any exceptions thrown. With this approach, the elapsed time taken to process all 3 puts should - theoretically, anyway - only be as slow as the single, slowest put().
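To see why the elapsed time tracks the slowest put rather than the sum, here is a self-contained sketch using plain JDK executors. The putAsync helper below is a hypothetical stand-in for a clustered put, modelling the network hop with a sleep - it is not the Infinispan method:

```java
import java.util.concurrent.*;

public class ParallelPuts {
    // Hypothetical stand-in for a clustered put: the sleep models the network hop.
    static Future<String> putAsync(ExecutorService pool, String value, long latencyMs) {
        return pool.submit(() -> {
            Thread.sleep(latencyMs);
            return value; // a real cache would return the previous value
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        long start = System.nanoTime();

        // Three "puts" fire without blocking; the network hops overlap.
        Future<String> f1 = putAsync(pool, "v1", 100);
        Future<String> f2 = putAsync(pool, "v2", 150);
        Future<String> f3 = putAsync(pool, "v3", 120);

        // Blocking here restores the synchronous guarantees: we see
        // results and any exceptions once all three calls are done.
        f1.get(); f2.get(); f3.get();

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Elapsed time tracks the slowest put (~150 ms), not the sum (~370 ms).
        System.out.println("all puts done in ~" + elapsedMs + " ms");
        pool.shutdown();
    }
}
```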

This new API is now in Infinispan’s trunk and yours to enjoy. It will be a main feature of my next release, which should be out in a few days. Please do give it a spin - I’d love to hear your thoughts and experiences.
