Step 5: Clustering the application

In this step we will unleash the power of Infinispan's clustering, showing how it can be used to share data among multiple nodes. These nodes can run on the same physical host, in dedicated VMs, or on separate machines. For the purposes of this tutorial, running all nodes on the same host is easiest, since it doesn't require messing around with networks and firewalls. To achieve this, just launch the application from separate terminal instances. For best results use a terminal that supports vertical splitting (such as Terminator on Linux, iTerm2 on OSX or ConEmu on Windows).

To turn a CacheManager into a clustered one, we need to supply it with a special "global" configuration that enables the transport:

GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
global.transport().clusterName("WeatherApp");
cacheManager = new DefaultCacheManager(global.build());
We went the easy route by using the convenience defaultClusteredBuilder() method, but we can achieve the same result by constructing a plain GlobalConfigurationBuilder and tweaking the necessary parameters (a sketch of that approach follows the cache configuration below). Making the CacheManager clustered is not enough: we also want a clustered cache. For this example we will use a distributed synchronous cache with two owners (the default), which we get by changing the default cache configuration:
config.clustering().cacheMode(CacheMode.DIST_SYNC);
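For reference, here is a minimal sketch of how the same setup could be written with a plain GlobalConfigurationBuilder instead of the convenience method. The builder calls (defaultTransport(), clusterName(), cacheMode(), numOwners()) are part of Infinispan's configuration API, but the surrounding class and the choice of constructor are illustrative only and may need adjusting for your Infinispan version:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ClusteredSetupSketch {

    public static DefaultCacheManager createClusteredCacheManager() {
        // Plain builder: enable the default JGroups transport explicitly,
        // which is what defaultClusteredBuilder() does for us.
        GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
        global.transport().defaultTransport().clusterName("WeatherApp");

        // Distributed synchronous cache; numOwners(2) is the default and is
        // spelled out here only for clarity.
        ConfigurationBuilder config = new ConfigurationBuilder();
        config.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(2);

        // The two-argument constructor uses the second argument as the
        // default cache configuration.
        return new DefaultCacheManager(global.build(), config.build());
    }
}

Whichever form you prefer, the result is the same: a clustered CacheManager whose default cache is distributed.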
Since the entries we insert will need to be transferred between nodes over the network, we need to make sure that both key and value classes implement the java.io.Serializable interface (a minimal sketch of such a value type follows the commands below). The entries will be distributed among the cluster members according to the hash of their keys, and since we asked for two owners, each entry will have a primary owner and a backup owner. It is now time to fire up your multiple terminals and try this for yourself:
$ git checkout -f step-5
$ mvn clean package exec:exec # from terminal 1
$ mvn exec:exec # from terminal 2
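As a side note on the serialization requirement mentioned above, here is a minimal sketch of what a value class might look like; the class name and fields are hypothetical and not taken from the tutorial's code:

import java.io.Serializable;

// Hypothetical value type: implementing Serializable lets Infinispan
// ship copies of each entry to its owners on other nodes.
public class WeatherSnapshot implements Serializable {

    private static final long serialVersionUID = 1L;

    final float temperature;
    final String conditions;

    public WeatherSnapshot(float temperature, String conditions) {
        this.temperature = temperature;
        this.conditions = conditions;
    }

    @Override
    public String toString() {
        return temperature + " °C, " + conditions;
    }
}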
Make sure you start the second instance soon after the first one has started, because we have not told either instance to "wait" for the other. While the end result will be similar to the previous run (obviously multiplied by two), you will see some extra logging related to node discovery. We want to control this behaviour, so we'll be looking at listeners in the next step.
