Monday, 25 November 2019
Dear Infinispan community,
We know you are happy with the shiny new Infinispan 10.0.0 release, but if you are among those who are missing a new operator version for safely running your Infinispan Chupacabra in the cloud, this post is for you!
This is our first blog post about the 1.0.x operator series (yeah, sorry 1.0.0, we forgot about you) and, as you may notice, there’s no Alpha, Beta or CR label at the end of the release tag. This is because OperatorHub and the Openshift Catalog only allow numeric versions in the form Major.Minor.Micro, so instead of labels we now use the channel to indicate the stability of a release. We have two live channels at the moment for the Infinispan operator:
stable is currently 0.3.2, which works with 9.x Infinispan clusters;
dev-preview is 1.0.1, which works with 10.x clusters.
New Infinispan image configuration: we cleaned up the image configuration process. Instead of relying on a large set of environment variables, the operator now configures the Infinispan image via a single .yaml file.
Container configurability: the CR .yaml file lets you configure the memory and CPU (and also extra Java options) assigned to the container;
Encryption: TLS can be set up by providing TLS certificates or by using a platform service such as the Openshift serving certificates service (TLS will be on by default in the next release);
We now have some good docs: https://infinispan.org/infinispan-operator/master/operator.html;
The project README has also been improved: https://github.com/infinispan/infinispan-operator/blob/1.0.1/README.md;
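To give an idea of what the container configurability above looks like, here is a minimal sketch of an Infinispan CR. The exact field names (e.g. container.memory, container.extraJvmOpts) are assumptions and may differ between operator versions, so check the docs linked above for the authoritative spec:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  # number of cluster members
  replicas: 2
  container:
    # resources assigned to each container
    memory: 1Gi
    cpu: 500m
    # extra Java options passed to the server JVM
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"
```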
The Infinispan Operator 1.0.1 works on Kind/Kubernetes 1.16 (CI) and Openshift 3.11 and 4.x (which we develop on). You can install it:
manually, follow the README;
with OLM on Kubernetes: https://operatorhub.io/operator/infinispan/dev-preview/infinispan-operator.v1.0.0
with OLM from the Openshift Operator Catalog
And remember: it’s a dev-preview release, you can have a lot of fun with it!
As usual, the source code is open at https://github.com/infinispan/infinispan-operator. You can see what’s going on, comment on the code or on new pull requests, ask for new features and also develop them!
Thanks for following us, Infinispan
Tags: dev-preview release
Monday, 18 November 2019
Dear Infinispan community,
Hot on the heels of Infinispan 10.0 here comes the first Beta of 10.1.
This release closes the gap between the legacy server and the new server we introduced in 10.0. In particular:
The reworked console (which will be described in detail in an upcoming series of blog posts)
Kerberos authentication for both Hot Rod (GSSAPI, GS2) and HTTP/Rest (SPNEGO)
Query and indexing operations/stats are now exposed over the RESTful API
Tasks and Scripting support
More work has landed on the quest to completely remove blocking calls from our internals. The following have been made non-blocking:
the size operation
cache stream ops with primitive types
Additionally caches now have a reactive Publisher which is intended as a fully non-blocking approach to distributed operations.
Over 40 bug fixes. See the full list of changes and fixes
Tags: beta release
Monday, 11 November 2019
One of the biggest changes in Infinispan 10 is the new server, which replaces the WildFly-based server we had been using up until 9.x.
This is the first of a series of blog posts which will describe the new server, how to use it, how to configure it and how to deploy it in your environment. More specifically, this post will focus mostly on the reasons behind the change, while the next ones will be of a more practical nature.
Infinispan has had a server implementing the Hot Rod protocol since 4.1. Originally it was just a main class which bootstrapped the server protocol. It was configured via the same configuration file used by the embedded library, it had no security and only handled Hot Rod.
Over time both a RESTful HTTP and a Memcached protocol were added and could be bootstrapped in the same way.
While the server bootstrap code was trivial, it was not going to scale to support all the things we needed (security, management, provisioning, etc). We therefore decided to build our next server on top of the very robust foundation provided by WildFly (aka, the application server previously known as JBoss AS 7), which made its first appearance in 5.3.
Integration with WildFly’s management model was not trivial but it gave us all of the things we were looking for and more, such as deployments, data sources, CLI, console, etc. It also came with a way to provision multiple nodes and manage them from a central controller, i.e. domain mode. All of these facilities however came at the cost of a lot of extra integration code to maintain as well as a larger footprint, both in terms of memory and storage use, caused by a number of subsystems which we had to carry along, even though we didn’t use them directly.
Fast-forward several versions, and the computing landscape has changed considerably: services are containerized, they are provisioned and managed via advanced orchestration tools like Kubernetes or via configuration management tools like Ansible and the model we were using was overlapping (if not altogether clashing) with the container model, where global configuration is immutable and managed externally.
With the above in mind, we have therefore decided to reboot our server implementation. During planning and development it has been known affectionately as ServerNG, but nowadays it is just the Infinispan Server. The WildFly-based server is now the legacy server.
The new server separates global configuration (clustering, endpoints, security) from the configuration of dynamic resources like caches, counters, etc. This means that global configuration can be made immutable while the mutable configuration is stored separately in the global persistence location. In a containerized environment you will place the persistence location onto a volume that will survive restarts.
Starting a two-node cluster using the latest version of the server image is easy:
$ docker run --name ispn1 --hostname ispn1 -e USER=admin -e PASS=admin -p 11222:11222 infinispan/server
$ docker run --name ispn2 --hostname ispn2 -e USER=admin -e PASS=admin -p 11322:11222 infinispan/server
The two nodes will discover each other, as can be seen from the logs:
15:58:21,201 INFO  [org.infinispan.CLUSTER] (jgroups-5,ispn-1-42736) ISPN000094: Received new cluster view for channel infinispan: [ispn-1-42736|1] (2) [ispn-1-42736, ispn-2-51789]
15:58:21,206 INFO  [org.infinispan.CLUSTER] (jgroups-5,ispn-1-42736) ISPN100000: Node ispn-2-51789 joined the cluster
Next we will connect to the cluster using the CLI:
$ docker run -it --rm infinispan/server /opt/infinispan/bin/cli.sh
[disconnected]> connect http://172.17.0.2:11222
Username: admin
Password: *****
[ispn-1-42736@infinispan//containers/DefaultCacheManager]>
Next we will create a distributed cache and select it for future operations:
[ispn-1-42736@infinispan//containers/DefaultCacheManager]> create cache --template=org.infinispan.DIST_SYNC distcache
[ispn-1-42736@infinispan//containers/DefaultCacheManager]> cache distcache
[ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]>
Let’s insert some data now:
[ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> put k1 v1
[ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> put k2 v2
[ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> ls
k2
k1
[ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> get k1
v1
Now let’s use the RESTful API to fetch one of the entries:
$ curl --digest -u admin:admin http://localhost:11222/rest/v2/caches/distcache/k2
v2
Since we didn’t map persistent volumes to our containers, both the cache and its contents will be lost when we terminate the containers.
In the next blog post we will look at configuration and persistence in more depth.
Friday, 01 November 2019
Tags: final release
Monday, 28 October 2019
Dear Infinispan community,
We are very pleased to announce the release of Infinispan 10.0 codenamed “Chupacabra”! We have been busy making many changes over the last months.
Infinispan 10 features a brand new server, replacing the WildFly-based server we’ve had since 5.3 with a smaller, leaner implementation. Here are the highlights:
Reduced disk (50MB vs 170MB) and memory footprint (18MB vs 40MB at boot)
Simpler to configure, since it shares the configuration schema with the embedded library, extended with server-specific elements
Single-port design: the Hot Rod, REST and management endpoints are now served through a single port (11222) with automatic protocol detection between HTTP/1.1, HTTP/2 and Hot Rod. The Memcached endpoint is handled separately, since we don’t implement its binary protocol yet.
New CLI with data manipulation operations
New REST-based API for administration
Security implemented using WildFly Elytron:
Hot Rod authentication support for PLAIN, DIGEST-MD5, SCRAM, EXTERNAL, OAUTHBEARER
HTTP authentication support for BASIC, DIGEST, CLIENT_CERT and TOKEN
Properties, Certificate Store and LDAP realms
Integration with KeyCloak
Caches/counters are created and managed dynamically through Hot Rod / REST
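As an illustration of the single-port design above, the endpoints section of the new server configuration declares all connectors on one socket binding. This is a hedged sketch based on the infinispan.xml shipped with the server; attribute names may vary slightly between versions:

```xml
<!-- sketch of the endpoints block in the server's infinispan.xml -->
<endpoints socket-binding="default" security-realm="default">
  <!-- Hot Rod and REST are multiplexed on port 11222 via protocol detection -->
  <hotrod-connector name="hotrod"/>
  <rest-connector name="rest"/>
  <!-- Memcached still needs its own socket binding -->
  <memcached-connector socket-binding="memcached"/>
</endpoints>
```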
Because of the amount of restructuring, the web-based Console is not yet available in this release. We are working on it and it will be included in 10.1.
A new REST API (v2) was introduced and users are encouraged to migrate their applications from the old API.
The v2 API offers a completely redesigned endpoint, including dozens of new operations. Besides allowing you to manage caches, it also covers cache containers, counters, cross-site replication, servers and clusters.
Apart from the new API, the REST server is now fully non-blocking and also has better performance than 9.4.x. It also fully supports authorization.
The internal marshalling capabilities of Infinispan have undergone a significant refactoring in 10.0. The marshalling of internal Infinispan objects and user objects is now truly isolated. This means that it’s now possible to configure Marshaller implementations in embedded mode or on the server without having to handle the marshalling of Infinispan internal classes. Consequently, it’s possible to easily change the marshaller implementation used for user types, in a similar manner to what users of the Hot Rod client are already accustomed to.
As a consequence of the above changes, the default marshaller used for marshalling user types is no longer based upon JBoss Marshalling. Instead we now utilise the ProtoStream library to store user types in the language agnostic Protocol Buffers format. The ProtoStream library provides several advantages over jboss-marshalling, most notably it does not make use of reflection and so is more suitable for use in AOT environments such as Quarkus.
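For example, configuring a custom marshaller for user types is now a single setting on the cache container. A hedged sketch follows: the JavaSerializationMarshaller class shown here exists in infinispan-commons, but double-check the element names against the 10.0 configuration schema:

```xml
<cache-container>
  <!-- marshaller used only for user types; internal types are handled separately -->
  <serialization marshaller="org.infinispan.commons.marshall.JavaSerializationMarshaller">
    <!-- classes allowed for Java deserialization -->
    <white-list>
      <regex>org.example.*</regex>
    </white-list>
  </serialization>
</cache-container>
```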
The persistence SPI has had some much needed TLC, with several deprecations and additions. The aim of this work was to ensure that internal Infinispan classes were no longer leaking into the SPI, in order to ensure that custom store implementations only have to be concerned with their data, not internal Infinispan objects.
Stores are now segmented by default when the segmented attribute is not set. A segmented store allows for greater iteration performance and lower memory usage. This is useful for operations that require an entire view of the cache, such as state transfer, iteration, size, the mass indexer and distributed streams. All of our provided stores now support segmentation; these include the file store, soft-index file store, RocksDB, JDBC and remote stores.
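A store's segmentation can also be controlled explicitly via the segmented attribute mentioned above. A minimal sketch of a cache with a segmented file store:

```xml
<local-cache name="example">
  <persistence>
    <!-- segmented="true" is now the default when the attribute is omitted -->
    <file-store path="example-data" segmented="true"/>
  </persistence>
</local-cache>
```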
To accommodate our brand new server, Infinispan 10.0 also introduces a completely new container image which is much smaller than the old one (366MB vs 684MB) and supports the following features:
Based on Red Hat’s minimal Universal Base Image
Simple yaml configuration
Authentication (Enabled by default)
The new image can be pulled from any of the following repositories:
Infinispan has adopted the MicroProfile Metrics ver. 2.0.2 specification and uses the SmallRye Metrics implementation. MicroProfile Metrics allows applications to gather various metrics and statistics that provide insights into what is happening inside an Infinispan cluster.
The current offering includes both cache container and cache level Gauge type metrics. Histograms and Timers will arrive in the next release of the 10.x stream.
The metrics can be read remotely at the well-known /metrics REST endpoint and use JSON format or optionally the OpenMetrics format, so that they can be processed, stored, visualized and analyzed by compatible tools such as Prometheus.
But rest assured, the existing JMX support for metrics has not been superseded by REST. JMX is still alive and kicking and we plan to continue developing it and have it available on all runtimes that support it (Quarkus being the notable exception).
Logging categories for the major subsystems have been introduced (CLUSTER, CONTAINER, PERSISTENCE, SERVER, etc.) so that it is easier to understand what log messages refer to. The server also comes with a JSON logger for easy integration with tools such as Fluentd or the ELK stack.
Infinispan is an official extension in Quarkus! If you wish to find out more about Quarkus you can find it at https://quarkus.io/.
We have a very featureful client extension allowing your Quarkus apps to connect to a remote server with many of the features you are used to: querying, authentication, encryption, counters, dependency injection and others. We recently added support for ProtoStream-based annotation marshalling. If you are curious you can find the code at https://github.com/quarkusio/quarkus/tree/master/extensions/infinispan-client.
The Infinispan embedded extension was also just added and has limited functionality due to its infancy, although it will already allow you to run an embedded clustered cache in a native executable. If you are curious you can find the code at https://github.com/quarkusio/quarkus/tree/master/extensions/infinispan-embedded.
The Infinispan team has also started adding a standalone project to have a Quarkus based Infinispan Server using Infinispan 10 and newer. This is still a work in progress, but the new repository can be found at https://github.com/infinispan/infinispan-quarkus-server.
Quarkus has a different release cycle than Infinispan, so watch out for more improvements over the following weeks!
Async mode cross-site replication received three major improvements:
Concurrent requests (e.g. writes to different keys) are now handled simultaneously instead of sequentially.
Asynchronous mode is now able to detect disconnections between sites and take a site offline based on the <take-offline> configuration (ISPN-10180).
Some metrics for asynchronous requests are now tracked and exposed (ISPN-9457).
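The <take-offline> element mentioned above is part of the backup configuration for a site. A hedged sketch of how a cache might declare an asynchronous backup that is taken offline after repeated failures (the site name and thresholds are placeholders):

```xml
<distributed-cache name="xsite-cache">
  <backups>
    <backup site="NYC" strategy="ASYNC">
      <!-- take the site offline after 5 consecutive failures,
           waiting at least 60 seconds between status changes -->
      <take-offline after-failures="5" min-wait="60000"/>
    </backup>
  </backups>
</distributed-cache>
```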
Infinispan’s internal dependency-injection has been completely rewritten so that factories, components and dependencies are discovered and resolved at compile time instead of using runtime reflection. This, together with the marshalling changes and recent JGroups changes, paves the way for usage and native compilation with Quarkus.
Several internal subsystems have been rewritten to be non-blocking, meaning that they will not hold-on to threads while waiting for I/O:
Non-blocking Hot Rod authentication (ISPN-9841)
Non-blocking REST endpoint (ISPN-10210)
Update internal remote listener code to support non blocking (ISPN-9716)
Update internal embedded listeners to be non blocking (ISPN-9715)
Passivation throughput has also increased, since these operations are now performed asynchronously.
In addition, cache stores have been made non-blocking for loading an entry, storing into the data container and write skew checks. With this we should be at a point where we can start consolidating thread pools, so keep a look-out in upcoming releases.
Distributed streams that use a terminal operator returning a single value now use non-blocking communication (ISPN-9813)
Off-heap storage received a few improvements to increase performance and reduce memory usage:
Iteration improvements (ISPN-10574)
Removes the need for the address count configuration option
Dynamically resizes the underlying buckets
Reordered bucket iteration to be more CPU friendly, with fewer lock acquisitions
StampedLock instead of ReadWriteLock (ISPN-10681)
Cluster expiration has been improved to only expire entries on the primary node, reducing the amount of concurrent expirations from multiple nodes in the cluster. Handling of concurrent expirations on a single node has also been improved.
Additionally, expirations are no longer replicated across sites, to reduce chattiness on the cross-site link. Note that lifespan expiration works fine without these messages, and max-idle expiration does not work properly with cross-site replication anyway, so the messages were providing no benefit.
We now have a proper sizeAsync method on the Cache interface, in both the remote and embedded APIs. This method should be preferred over the existing size method, as it does not block the invoking thread and is able to return the size as a long instead of an int.
It is now possible to configure JGroups stacks directly from the Infinispan configuration file. We use this ability to also make it easy to create multiple stacks (for example for cross-site configurations). The distribution comes with several pre-built JGroups stacks for cloud environments which you can quickly adapt to your configuration. Additionally, you can extend existing JGroups configurations, replacing individual protocols. This makes it easy, for example, to use a different discovery protocol without worrying about all the other protocols.
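As a sketch of the stack inheritance described above, the following extends the bundled tcp stack and swaps its discovery protocol; the dns_query value is a placeholder for your own DNS service name:

```xml
<infinispan>
  <jgroups>
    <!-- inherit everything from the bundled "tcp" stack -->
    <stack name="my-tcp" extends="tcp">
      <!-- replace MPING discovery with DNS-based discovery -->
      <dns.DNS_PING dns_query="infinispan.example.com"
                    stack.combine="REPLACE" stack.position="MPING"/>
    </stack>
  </jgroups>
  <cache-container>
    <transport stack="my-tcp"/>
  </cache-container>
</infinispan>
```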
Infinispan community documentation has been going through some big changes over the past year. The Infinispan 10 release marks the first major step towards adopting a modular structure that supports flexible content for specific use cases. On top of that we’ve also been putting lots of effort into transforming our documentation set to adhere to the principles of minimalism that put focus on user goals and delivering leaner, more concise content.
Our 10.0 release also incorporates work to organize content into three main types: task, concept, and reference. Mapping content to information types makes it easier to write and maintain content by removing worries about style, scope, and other complexities. Writers can separate documentation into logical units of information that can stand alone and then assemble topics into tutorials, how-to articles, explanations, and reference material.
You might also notice some changes to the documentation section of our site and updates to the index page for Infinispan 10 docs. Hopefully the new layout makes it easier to navigate and find the information you’re looking for.
We hope you find the improvements to the documentation helpful. As always, we’re keen to get your feedback and would appreciate it. And if you feel like getting involved, see the Contributor’s Guide and start writing Infinispan docs today!
First steps towards a new Reactive API. This is still a work in progress and the API will see major changes. We plan on making this API final and the default in Infinispan 11. The new API includes a new API module and a new KeyValueStore Hot Rod client that provides search, continuous search and key/value store methods.
A new major release is also an opportunity to do some house-cleaning.
Deprecated GridFileSystem and the org.infinispan.io stream implementations (ISPN-10298)
Deprecated Total Order transaction mode (ISPN-10259)
Deprecated Externalizer, AdvancedExternalizer and @SerializeWith (ISPN-10280)
AtomicMap implementations (ISPN-10230)
Deprecated org.infinispan.io classes (ISPN-10297)
Compatibility mode (ISPN-10370)
C3P0 and Hikari Connection Pools (ISPN-8087)
Delta and DeltaAware interfaces (ISPN-8071)
HotRod 1.x support (ISPN-9169)
Tree module (ISPN-10054)
Distributed Executor (ISPN-9784)
Tags: final release