Friday, 12 August 2016
Infinispan Cloud Cachestore 8.0.1.Final
After bringing the MongoDB cache store up-to-date a few days ago, this time it’s the turn of the Cloud Cache Store, our JClouds-based store which allows you to use any of the JClouds BlobStore providers to persist your cache data. This includes AWS S3, Google Cloud Storage, Azure Blob Storage and Rackspace Cloud Files. In a perfect world this would have been 8.0.0.Final, but Sod’s law rules, so I give you 8.0.1.Final instead :) So head on over to our store download page and try it out.
The actual configuration of the cachestore depends on the provider, so refer to the JClouds documentation. The following is a programmatic example using the "transient" provider:
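A minimal sketch along these lines, assuming the store exposes a CloudStoreConfigurationBuilder with provider, location, identity, credential and container properties (double-check the exact names against the cachestore documentation):

```java
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.persistence.cloud.configuration.CloudStoreConfigurationBuilder;

public class CloudStoreDemo {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.persistence()
             .addStore(CloudStoreConfigurationBuilder.class) // assumed builder class name
                .provider("transient")         // any JClouds BlobStore provider id
                .location("test-location")
                .identity("me")                // provider credentials
                .credential("s3cr3t")
                .container("test-container");  // blob container backing the cache

      // Entries written to the cache are persisted through the JClouds BlobStore.
      DefaultCacheManager cacheManager = new DefaultCacheManager(builder.build());
      cacheManager.getCache().put("key", "value");
      cacheManager.stop();
   }
}
```

Swapping "transient" for a real provider id such as "aws-s3" and supplying real credentials is all it should take to move from an in-memory test to actual cloud storage.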
And this is how you’d configure it declaratively:
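A rough XML equivalent might look like the following; the cloud-store element, its schema namespace and the attribute names are assumptions to verify against the store’s 8.0 schema:

```xml
<cache-container default-cache="default">
   <local-cache name="default">
      <persistence>
         <!-- Element and attribute names assume the cloud store's 8.0 schema;
              verify them against the cachestore documentation. -->
         <cloud-store xmlns="urn:infinispan:config:store:cloud:8.0"
                      provider="transient"
                      location="test-location"
                      identity="me"
                      credential="s3cr3t"
                      container="test-container"/>
      </persistence>
   </local-cache>
</cache-container>
```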
This will work with any Infinispan 8.x release.
Enjoy!
Tags: release jclouds cloud storage cache store
Thursday, 13 May 2010
Client/Server architectures strike back, Infinispan 4.1.0.Beta1 is out!
I’m delighted to announce the release of Infinispan 4.1.0.BETA1. For this, our first beta release of the 4.1 series, we’ve finished the Hot Rod and Memcached-based server implementations, and a Java-based Hot Rod client has been developed as a reference implementation. Starting with 4.1.0.BETA1 as well, thanks to the help of Tom Fenelly, Infinispan caches can be exposed over a WebSocket.
A detailed change log is available and the release is downloadable from the usual place.
For the rest of the blog post, we’d like to share some of the objectives of Infinispan 4.1 with the community. Here at ‘chez Infinispan’ we’ve been repeating the same story over and over again: ‘Memory is the new Disk, Disk is the new Tape’ (http://www.parleys.com/sl=1&st=5&id=1589), and this release is yet another step to educate the community on this fact. Client/server architectures based around Infinispan data grids are key to enabling this reality, but in case you’re wondering: why would someone use Infinispan in client/server mode rather than in peer-to-peer (p2p) mode? How does the client/server architecture enable memory to become the new disk?
Broadly speaking, there are three areas where an Infinispan client/server architecture might be chosen over a p2p one:
1. Access to Infinispan from a non-JVM environment
Infinispan’s roots can be traced back to JBoss Cache, a caching library developed to provide J2EE application servers with data replication. As such, the primary way of accessing Infinispan or JBoss Cache has always been via direct calls coming from the same JVM. However, as we have said before, Infinispan’s goal is to provide much more than that: it aims to provide data grid access to any software application that you can think of, and this obviously requires Infinispan to enable access from non-Java environments.
Infinispan comes with a series of server modules that enable precisely that. All you have to do is decide which API suits your environment best. Do you want direct access to Infinispan over HTTP? Just use our REST or WebSocket modules. Or are you looking to expand the capabilities of your Memcached-based applications? Start an Infinispan-backed Memcached server and your existing Memcached clients will be able to talk to it immediately. Or maybe you’re even interested in accessing Infinispan via Hot Rod? Then give us a hand developing non-Java clients that can speak the Hot Rod protocol! :)
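To make the HTTP option concrete, here is a rough sketch of talking to the REST module from plain Java. The /rest/{cacheName}/{key} URL layout, the host, port and cache name are all assumptions about your particular deployment, so adjust them to wherever the REST module is running:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClientDemo {
   public static void main(String[] args) throws Exception {
      // Assumed endpoint layout: /rest/{cacheName}/{key}
      URL url = new URL("http://localhost:8080/rest/myCache/greeting");

      // Store a value with HTTP PUT...
      HttpURLConnection put = (HttpURLConnection) url.openConnection();
      put.setRequestMethod("PUT");
      put.setDoOutput(true);
      put.setRequestProperty("Content-Type", "text/plain");
      try (OutputStream out = put.getOutputStream()) {
         out.write("Hello from any HTTP client".getBytes("UTF-8"));
      }
      System.out.println("PUT status: " + put.getResponseCode());

      // ...and read it back with HTTP GET.
      HttpURLConnection get = (HttpURLConnection) url.openConnection();
      try (BufferedReader reader =
               new BufferedReader(new InputStreamReader(get.getInputStream(), "UTF-8"))) {
         System.out.println("GET body: " + reader.readLine());
      }
   }
}
```

Since this is plain HTTP, the same calls can of course be made from Ruby, Python, curl or anything else that can speak the protocol.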
2. Infinispan as a dedicated data tier
Quite often, applications running in a p2p environment have caching requirements larger than the available heap size, in which case it makes a lot of sense to separate caching into a dedicated tier.
It’s also very common to find businesses with workloads that vary over time, where business processing servers need to be started to deal with increased load, or stopped to lower power consumption when load drops. When Infinispan data grid instances are deployed alongside business processing servers, starting and stopping these can be a slow process due to state transfer, or rehashing, particularly when large data sets are used. Separating Infinispan into a dedicated tier provides faster and more predictable server start/stop procedures – ideal for modern cloud-based deployments where elasticity in your application tier is important.
It’s common knowledge that the optimizations suited to memory-intensive systems are very different from those suited to CPU-intensive systems. If you keep both your data grid and business logic under the same roof, finding a balanced set of optimizations that keeps both sides happy is difficult. Once again, separating the data and business tiers can alleviate this problem.
You might be wondering whether moving Infinispan to a separate tier hurts performance, since every data access now requires a network call. However, separating tiers gives you a much more scalable architecture, and your data is never more than one network call away. Even if the dedicated Infinispan data grid is configured with distribution, a Hot Rod smart client - such as the Java reference implementation shipped with Infinispan 4.1.0.BETA1 - can determine where a particular key is located and hit the server that contains it directly.
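As a sketch of what using that client looks like, assuming a server on the address shown (the host, port, constructor form and key are assumptions to check against the 4.1 client API):

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class HotRodSmartClientDemo {
   public static void main(String[] args) {
      // Point the client at any node of the grid; it can learn the
      // cluster topology from the servers themselves.
      RemoteCacheManager manager = new RemoteCacheManager("127.0.0.1", 11311);
      RemoteCache<String, String> cache = manager.getCache();

      // A smart client hashes the key, works out which server owns it,
      // and sends the request straight to that node: one network hop.
      cache.put("user:42", "Galder");
      System.out.println(cache.get("user:42"));

      manager.stop();
   }
}
```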
3. Data-as-a-Service (DaaS)
Increasingly, we see scenarios where environments host a multitude of applications that share the need for data storage, for example in Platform-as-a-Service cloud-style environments (whether public or internal). In such configurations, you don’t want to launch a data grid per application, since it would be a nightmare to maintain – not to mention resource-wasteful. Instead, you want deployments or applications to start processing as soon as possible. In these cases, it makes a lot of sense to keep a pool of Infinispan data grid nodes acting as a shared storage tier. Isolated cache access can easily be achieved by making sure each application uses a different cache name (i.e. the application name could be used as the cache name). This is straightforward with protocols such as Hot Rod, where each operation requires a cache name to be provided.
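A small sketch of that idea over Hot Rod; the grid address and the per-application cache names here are hypothetical, and the named caches would need to be defined on the server side:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class SharedDataTierDemo {
   public static void main(String[] args) {
      // Hypothetical address of the shared data grid tier.
      RemoteCacheManager manager = new RemoteCacheManager("datagrid.internal", 11311);

      // Hypothetical application names doubling as cache names; each
      // application gets its own isolated namespace on the shared grid.
      RemoteCache<String, String> inventory = manager.getCache("inventory-app");
      RemoteCache<String, String> billing = manager.getCache("billing-app");

      inventory.put("sku-1001", "widget");
      billing.put("invoice-7", "pending");
      // The entries live in separate caches, so neither application
      // can read the other's data.

      manager.stop();
   }
}
```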
Regardless of the scenarios explained above, there are some common benefits to separating an Infinispan data grid from the business logic that accesses it. In fact, these are very similar to the benefits achieved when application servers and databases don’t run under the same roof. By separating the layers, you can manage each layer independently, which means that adding/removing nodes, maintenance and upgrades can all be handled independently. In other words, if you want to upgrade your application server or servlet container, you don’t need to bring down your data layer.
All of this is available to you now, but the story does not end here. Bearing in mind that these client/server modules are based around reliable TCP/IP, using Netty, they could also form the basis of new functionality in the future. For example, client/server modules could be linked together to connect geographically separated Infinispan data grids and enable different disaster recovery strategies.
So, download Infinispan 4.1.0.BETA1 right away, give these new modules a try and let us know your thoughts.
Finally, don’t forget that we’ll be talking about Hot Rod in Boston at the end of June for the first ever JUDCon. Don’t miss out!
Cheers,
Galder
Tags: hotrod websocket memcached rest cloud storage
Thursday, 04 February 2010
Infinispan and storage in the cloud
I will be presenting on Infinispan and its role in cloud storage, at Red Hat’s Cloud Computing Forum on the 10th of February 2010.
This is a virtual event, where you get to attend from the comfort of your desk. And although it is free, you do need to register beforehand, so I recommend doing so.
Cheers
Manik
Tags: presentations cloud storage