Client/Server architectures strike back, Infinispan 4.1.0.Beta1 is out!
I’m delighted to announce the release of Infinispan 4.1.0.BETA1. For this, our first beta release of the 4.1 series, we’ve finished the Hot Rod and Memcached-based server implementations, and a Java-based Hot Rod client has been developed as a reference implementation. Also starting with 4.1.0.BETA1, thanks to the help of Tom Fenelly, Infinispan caches can be exposed over a WebSocket.
A detailed change log is available and the release is downloadable from the usual place.
For the rest of this blog post, we’d like to share some of the objectives of Infinispan 4.1 with the community. Here at ‘chez Infinispan’ we’ve been repeating the same story over and over again: http://www.parleys.com/sl=1&st=5&id=1589[‘Memory is the new Disk, Disk is the new Tape’] and this release is yet another step in educating the community on this fact. Client/Server architectures based around Infinispan data grids are key to enabling this reality, but you might be wondering: why would someone use Infinispan in client/server mode rather than in peer-to-peer (p2p) mode? How does the client/server architecture enable memory to become the new disk?
Broadly speaking, there are three areas where an Infinispan client/server architecture might be chosen over a p2p one:
1. Access to Infinispan from a non-JVM environment
Infinispan’s roots can be traced back to JBoss Cache, a caching library developed to provide J2EE application servers with data replication. As such, the primary way of accessing Infinispan or JBoss Cache has always been via direct calls coming from the same JVM. However, as we have said before, Infinispan’s goal is to provide much more than that: it aims to provide data grid access to any software application that you can think of, and this obviously requires Infinispan to enable access from non-Java environments.
Infinispan comes with a series of server modules that enable precisely that. All you have to do is decide which API suits your environment best. Do you want direct access to Infinispan via HTTP? Just use our REST or WebSocket modules. Or are you looking to expand the capabilities of your Memcached-based applications? Start an Infinispan-backed Memcached server and your existing Memcached clients will be able to talk to it immediately. Or maybe you’re interested in accessing Infinispan via Hot Rod? Then give us a hand developing non-Java clients that can speak the Hot Rod protocol! :)
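If you’re curious what talking to an Infinispan server over Hot Rod could look like from Java, here’s a minimal sketch. It assumes the reference client exposes a RemoteCacheManager/RemoteCache style API under org.infinispan.client.hotrod and that a Hot Rod server is already running; the host, port, key and value below are placeholders, not something prescribed by the release.

[source,java]
----
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodExample {
   public static void main(String[] args) {
      // Connect to a Hot Rod server; host and port are placeholders.
      RemoteCacheManager cacheManager = new RemoteCacheManager("localhost", 11222);
      try {
         // Obtain the default cache exposed by the server.
         RemoteCache<String, String> cache = cacheManager.getCache();

         // Familiar Map-like operations travel over the Hot Rod protocol.
         cache.put("greeting", "Hello from Hot Rod!");
         System.out.println(cache.get("greeting"));
      } finally {
         cacheManager.stop();
      }
   }
}
----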
2. Infinispan as a dedicated data tier
Quite often, applications running in a p2p environment have caching requirements larger than the available heap, in which case it makes a lot of sense to move caching into a separate, dedicated tier.
It’s also very common to find businesses with workloads that vary over time, where there’s a need to start business processing servers to deal with increased load, or stop them when the workload is reduced to lower power consumption. When Infinispan data grid instances are deployed alongside business processing servers, starting/stopping these can be a slow process due to state transfer, or rehashing, particularly when large data sets are used. Separating Infinispan into a dedicated tier provides faster and more predictable server start/stop procedures – ideal for modern cloud-based deployments where elasticity in your application tier is important.
It’s common knowledge that optimizations for memory-intensive systems are very different from optimizations for CPU-intensive systems. If you mix your data grid and your business logic under the same roof, finding a balanced set of optimizations that keeps both sides happy is difficult. Once again, separating the data and business tiers can alleviate this problem.
You might be wondering whether moving Infinispan to a separate tier will hurt performance, since access to data now requires a network call. However, separating the tiers gives you a much more scalable architecture, and your data is never more than one network call away. Even if the dedicated Infinispan data grid is configured with distribution, a Hot Rod smart-client implementation - such as the Java reference implementation shipped with Infinispan 4.1.0.BETA1 - can determine where a particular key is located and hit a server that contains it directly.
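As a rough illustration of that idea, a smart client only needs a bootstrap list of servers; from there it can learn the grid topology and route each request to a node owning the key. The hostnames and the semicolon-separated address format below are assumptions made for the sake of the sketch:

[source,java]
----
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

// Bootstrap with a few known servers (hypothetical hostnames); a smart
// client can discover the rest of the grid's topology from them.
RemoteCacheManager cacheManager =
      new RemoteCacheManager("grid-node1:11222;grid-node2:11222;grid-node3:11222");

RemoteCache<String, String> cache = cacheManager.getCache();

// With a distributed cache, a topology-aware client can send this put
// straight to a node that owns the key, avoiding an extra network hop.
cache.put("user:42", "some user data");
----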
3. Data-as-a-Service (DaaS)
Increasingly, we see scenarios where environments host a multitude of applications that share the need for data storage, for example in Platform-as-a-Service cloud-style environments (whether public or internal). In such configurations, you don’t want to be launching a data grid per application, since it’d be a nightmare to maintain – not to mention resource-wasteful. Instead, you want deployments or applications to start processing as soon as possible. In these cases, it makes a lot of sense to keep a pool of Infinispan data grid nodes acting as a shared storage tier. Isolated cache access can easily be achieved by making sure each application uses a different cache name (i.e. the application name could be used as the cache name). This is easy to achieve with protocols such as Hot Rod, where each operation requires a cache name to be provided, as sketched below.
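For example, a shared Infinispan tier could hand out one named cache per application, so each deployment works against its own isolated key space. The server address and application names here are purely illustrative, and the API shape is the same assumed RemoteCacheManager/RemoteCache sketch as above:

[source,java]
----
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

// One shared data tier (placeholder address), many isolated caches.
RemoteCacheManager cacheManager = new RemoteCacheManager("datagrid.internal", 11222);

// Each application uses its own name as the cache name over Hot Rod,
// so its data never collides with other tenants of the shared tier.
RemoteCache<String, String> inventoryCache = cacheManager.getCache("inventory-app");
RemoteCache<String, String> billingCache   = cacheManager.getCache("billing-app");

inventoryCache.put("sku-1001", "12 units in stock");
billingCache.put("invoice-77", "pending");
----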
Regardless of the scenarios explained above, there are some common benefits to separating an Infinispan data grid from the business logic that accesses it. In fact, these are very similar to the benefits achieved when application servers and databases don’t run under the same roof. By separating the layers, you can manage each layer independently, which means that adding/removing nodes, maintenance, upgrades, etc. can be handled independently. In other words, if you want to upgrade your application server or servlet container, you don’t need to bring down your data layer.
All of this is available to you now, but the story does not end here. Bearing in mind that these client/server modules are built around reliable TCP/IP using Netty, they could also form the basis of new functionality in the future. For example, client/server modules could be linked together to connect geographically separated Infinispan data grids and enable different disaster recovery strategies.
So, download Infinispan 4.1.0.BETA1 right away, give these new modules a try, and let us know your thoughts.
Finally, don’t forget that we’ll be talking about Hot Rod in Boston at the end of June for the first ever JUDCon. Don’t miss out!
Cheers,
Galder
Tags: hotrod websocket memcached rest cloud storage