Monday, 09 December 2019

Infinispan 10.1.0.CR1

Dear Infinispan community,

as we are closing in on 10.1, we have been doing a lot of polishing and bug fixing.

Server

  • The new console has received a lot of improvements

  • A new welcome page

  • A command-line switch to specify an alternate logging configuration file
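
For example, a logging configuration override might be specified like this when launching the server (the exact option name shown here is an assumption; run bin/server.sh --help to see the switches available in your build):

./bin/server.sh -l /path/to/custom-logging.xml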

Query

The query components have been reorganized so that they are more modular.

Monitoring

  • The introduction of histogram and timer metrics.

Stores

  • The REST cache store has been updated to use the v2 RESTful API.

Removals and deprecations

  • The old RESTful API (v1) has been removed.

  • The Infinispan Lucene Directory has been deprecated.

  • The memcached protocol server has been deprecated. If you were relying on this, come and talk to us about working on a binary protocol implementation.

Bug fixes, clean-ups and documentation

Over 40 issues fixed, including a lot of documentation updates. See the full list of changes and fixes.

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Infinispan 10.1.0.Final is scheduled for December the 20th.

Posted by Tristan Tarrant on 2019-12-09
Tags: release candidate release

Monday, 02 December 2019

Infinispan's new image

Infinispan 10 introduced a new server, which does not utilise the same launch commands and configuration as the legacy 9.4 WildFly-based server. Therefore, we decided that this was an excellent opportunity to rewrite our container image from scratch to better suit the capabilities of the new server and to provide all the functionality required by the Infinispan Operator.

This post focuses on the server image’s architecture. Future blog posts will focus on more advanced configurations, as well as example usage and deployment scenarios such as deploying a cluster using Kubernetes StatefulSets.

Show me the code!

The source code for the Infinispan image has a new home. The image can be found at https://github.com/infinispan/infinispan-images. Currently this repository only contains the server image; however, our intention is for it to also be the home of all future Infinispan-related images.

Where’s the Dockerfile?

The most noticeable change when looking at the repository is that there is no Dockerfile in the source tree. This is because we decided to utilise the open-source tool CEKit to build our images. CEKit is an image creation tool that allows container images to be created using multiple build engines (e.g. Docker, Buildah, Podman) with a single configuration. Installation instructions can be found here, but the basic command to create a Docker-based image is as follows.

cekit build docker

CEKit leverages .yaml files for all configuration, as opposed to a Dockerfile, as this allows for build-time overriding of image properties. For example, with CEKit it’s possible to override the server artifact version without modifying any files; instead, the following is passed as a build parameter.

cekit build --overrides '{"artifacts": [{"name": "server.zip", "path": "infinispan-server-10.0.0-SNAPSHOT.zip"}]}' docker

More detailed instructions about how to build the server image from source can be found in the image’s documentation.

Ok so where can I get a pre-built image?

Previously, the Infinispan images were deployed exclusively under the jboss namespace at jboss/infinispan-server; however, this repository has now been deprecated and will eventually be removed.

Instead, all Infinispan images will now be released under the infinispan namespace and are hosted at both Quay.io and Docker Hub, as quay.io/infinispan/server and infinispan/server.

Getting Started

To get started with Infinispan server on your local machine, simply execute:

docker run -p 11222:11222 infinispan/server

By default, the image has authentication enabled on all exposed endpoints. When executing the above command, the image automatically generates a username/password combination, prints the values to stdout and then starts the Infinispan server with the authenticated Hot Rod and REST endpoints exposed on port 11222. Therefore, it’s necessary to utilise the printed credentials when attempting to access the exposed endpoints via clients.

It’s also possible to provide a username/password combination via environment variables like so:

docker run -p 11222:11222 -e USER="Titus Bramble" -e PASS="Shambles" infinispan/server

Connecting via Hotrod

Using the credentials passed in the command above, it is now possible to connect via the Hot Rod client using the following hotrod-client.properties file. Note that the following SASL properties must be configured on your client, with the username and password properties changed as required; otherwise the connection will fail:

infinispan.remote.auth-realm=default
  infinispan.remote.auth-server-name=infinispan
  infinispan.remote.auth-username=Titus Bramble
  infinispan.remote.auth-password=Shambles
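
For reference, here is a minimal sketch of connecting programmatically with the Java Hot Rod client, using the same credentials (the cache name used below is hypothetical and must already exist on the server):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodConnect {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      // Point the client at the server started by the docker command above
      builder.addServer().host("127.0.0.1").port(11222);
      // The same SASL settings as in hotrod-client.properties
      builder.security().authentication().enable()
             .realm("default")
             .serverName("infinispan")
             .username("Titus Bramble")
             .password("Shambles");
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // "mycache" is a hypothetical cache name; the cache must already exist on the server
         RemoteCache<String, String> cache = cacheManager.getCache("mycache");
         cache.put("hello", "world");
         System.out.println(cache.get("hello"));
      }
   }
}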

Connecting via REST

The REST endpoint is configured to utilise DIGEST authentication; therefore, it’s necessary for the HTTP client to authenticate requests. For example, the names of all caches can be retrieved via the following curl command:

curl --digest -u "Titus Bramble:Shambles" http://localhost:11222/rest/v2/caches

Further Reading

For more detailed information about how to use the image, please consult the official documentation.

In the next blog post we will look at how the server can be configured for more advanced use-cases by supplying configuration and identity .yaml files.

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Ryan Emerson on 2019-12-02
Tags: docker

Monday, 25 November 2019

Infinispan Operator 1.0.1

Dear Infinispan community,

we know you are happy with the shiny new 10.0.0 Infinispan release, but if you are among those who are missing a new operator version for safely running your Infinispan Chupacabra in the cloud, this post is for you!

Versioning and channels

This is our first blog post about the 1.0.x operator series (yeah, sorry 1.0.0, we forgot about you) and, as you may notice, there’s no Alpha, Beta or CR label at the end of the release tag. This is because OperatorHub and the OpenShift Catalog only allow numerical versions like Major.Minor.Micro, so instead of labels we now use the channel to indicate the stability of a release. We have two live channels at the moment for the Infinispan operator: stable and dev-preview. The current stable is 0.3.2, which is for 9.x Infinispan clusters, and the current dev-preview is 1.0.1, which works with 10.x clusters.

New features

  • New Infinispan image configuration: we cleaned up the image configuration process; instead of relying on a large set of environment variables, the operator now configures the Infinispan image via a single .yaml file.

  • Container configurability: the CR .yaml file lets you configure the memory, CPU and extra Java options assigned to the container (see the sketch after this list);

  • Encryption: TLS can be set up by providing certificates or by using a platform service such as the OpenShift serving certificates service (TLS will be on by default in the next release);

  • We now have some good docs: https://infinispan.org/infinispan-operator/master/operator.html;

  • The project README has also been improved: https://github.com/infinispan/infinispan-operator/blob/1.0.1/README.md;
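
As a rough sketch of the container configurability mentioned above, an Infinispan CR .yaml could look something like the following (the field names are illustrative and may not exactly match this release’s CRD; see the docs linked above for the authoritative schema):

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  container:
    cpu: "1000m"
    memory: 1Gi
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"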

Get it

The Infinispan Operator 1.0.1 works on Kind/Kubernetes 1.16 (used by our CI) and OpenShift 3.11 and 4.x (which we develop on). You can install it:

And remember: it’s a dev-preview release, so you can have a lot of fun with it!

Contribute

As usual, the source code is open at https://github.com/infinispan/infinispan-operator. You can see what’s going on, comment on the code or on new pull requests, ask for new features and also develop them!

Thanks for following us,
The Infinispan team

Posted by Vittorio Rigamonti on 2019-11-25
Tags: dev-preview release

Monday, 18 November 2019

Infinispan 10.1.0.Beta1

Dear Infinispan community,

Quick on the heels of Infinispan 10.0, here comes the first Beta of 10.1.

Server

This release closes the gap between the legacy server and the new server we introduced in 10.0. In particular:

  • The reworked console (which will be described in detail in an upcoming series of blog posts)

  • Kerberos authentication for both Hot Rod (GSSAPI, GS2) and HTTP/REST (SPNEGO)

  • Query and indexing operations/stats are now exposed over the RESTful API

  • Tasks and Scripting support

Non-blocking

More work has landed on the quest to completely remove blocking calls from our internals. The following have been made non-blocking:

  • the size operation

  • cache stream ops with primitive types

Additionally, caches now have a reactive Publisher, which is intended as a fully non-blocking approach to distributed operations.

Component upgrades

  • RxJava 2.2.12

  • SmallRye Metrics 2.3.0

  • MicroProfile Metrics 2.2

Bug fixes, clean-ups and documentation

Over 40 bug fixes. See the full list of changes and fixes.

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Infinispan 10.1.0.CR1 is scheduled for December the 7th.

Posted by Tristan Tarrant on 2019-11-18
Tags: beta release

Monday, 11 November 2019

Infinispan's new server

One of the biggest changes in Infinispan 10 is the new server, which replaces the WildFly-based server we had been using up until 9.x.

This is the first of a series of blog posts which will describe the new server, how to use it, how to configure it and how to deploy it in your environment. More specifically, this post will focus mostly on the reasons behind the change, while the next ones will be of a more practical nature.

A history of servers

Infinispan has had a server implementing the Hot Rod protocol since 4.1. Originally it was just a main class which bootstrapped the server protocol. It was configured via the same configuration file used by the embedded library, it had no security and only handled Hot Rod.

Over time, both a RESTful HTTP endpoint and a Memcached protocol were added, and they could be bootstrapped in the same way.

While the server bootstrap code was trivial, it was not going to scale to support all the things we needed (security, management, provisioning, etc.). We therefore decided to build our next server on top of the very robust foundation provided by WildFly (a.k.a. the application server previously known as JBoss AS 7). This server made its first appearance in Infinispan 5.3.

Integration with WildFly’s management model was not trivial, but it gave us all of the things we were looking for and more, such as deployments, data sources, CLI, console, etc. It also came with a way to provision multiple nodes and manage them from a central controller, i.e. domain mode. All of these facilities, however, came at the cost of a lot of extra integration code to maintain, as well as a larger footprint, both in terms of memory and storage use, caused by a number of subsystems which we had to carry along even though we didn’t use them directly.

A different server

Fast-forward several versions, and the computing landscape has changed considerably: services are containerized, and they are provisioned and managed via advanced orchestration tools like Kubernetes or via configuration management tools like Ansible. The model we were using overlapped (if not altogether clashed) with the container model, where global configuration is immutable and managed externally.

With the above in mind, we have therefore decided to reboot our server implementation. During planning and development it has been known affectionately as ServerNG, but nowadays it is just the Infinispan Server. The WildFly-based server is now the legacy server.

Configuration

The new server separates global configuration (clustering, endpoints, security) from the configuration of dynamic resources like caches, counters, etc. This means that global configuration can be made immutable while the mutable configuration is stored separately in the global persistence location. In a containerized environment you will place the persistence location onto a volume that will survive restarts.
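
As a rough illustration, assuming the default server directory layout, the split looks something like this:

server/conf - immutable global configuration (clustering, endpoints, security)
server/data - global persistence location, where mutable resources such as runtime-created caches and counters are stored
server/lib  - additional libraries
server/log  - server log files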

A quick two-node cluster with Docker

Starting a two-node cluster using the latest version of the server image is easy:

$ docker run --name ispn1 --hostname ispn1 -e USER=admin -e PASS=admin -p 11222:11222 infinispan/server
  $ docker run --name ispn2 --hostname ispn2 -e USER=admin -e PASS=admin -p 11322:11222 infinispan/server

The two nodes will discover each other, as can be seen from the logs:

15:58:21,201 INFO  [org.infinispan.CLUSTER] (jgroups-5,ispn-1-42736) ISPN000094: Received new cluster view for channel infinispan: [ispn-1-42736|1] (2) [ispn-1-42736, ispn-2-51789]
  15:58:21,206 INFO  [org.infinispan.CLUSTER] (jgroups-5,ispn-1-42736) ISPN100000: Node ispn-2-51789 joined the cluster

Next we will connect to the cluster using the CLI:

$ docker run -it --rm infinispan/server /opt/infinispan/bin/cli.sh
  [disconnected]> connect http://172.17.0.2:11222
  Username: admin
  Password: *****
  [ispn-1-42736@infinispan//containers/DefaultCacheManager]>

Next we will create a distributed cache and select it for future operations:

[ispn-1-42736@infinispan//containers/DefaultCacheManager]> create cache --template=org.infinispan.DIST_SYNC distcache
  [ispn-1-42736@infinispan//containers/DefaultCacheManager]> cache distcache
  [ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]>

Let’s insert some data now:

[ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> put k1 v1
  [ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> put k2 v2
  [ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> ls
  k2
  k1
  [ispn-1-42736@infinispan//containers/DefaultCacheManager/caches/distcache]> get k1
  v1

Now let’s use the RESTful API to fetch one of the entries:

$ curl --digest -u admin:admin http://localhost:11222/rest/v2/caches/distcache/k2
  v2
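
Writes work over REST as well. Here is a sketch, assuming the standard v2 cache resource paths, of creating a new entry:

$ curl --digest -u admin:admin -X POST -H "Content-Type: text/plain" -d 'v3' http://localhost:11222/rest/v2/caches/distcache/k3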

Since we didn’t map persistent volumes to our containers, both the cache and its contents will be lost when we terminate the containers.

In the next blog post we will look at configuration and persistence in more depth.

Posted by Tristan Tarrant on 2019-11-11
Tags: server