Monday, 24 February 2020

Infinispan Operator 1.1.1 is out!

We’re pleased to announce version 1.1.1 of the Infinispan Operator for Kubernetes and OpenShift.

This release has focused on fixing bugs and improving robustness, mainly related to the following:

  • improving reconcile flow stability

  • reducing Operator CPU load

  • cleaning up logs

Our community documentation at https://infinispan.org/documentation has also been updated and improved.

Automatic Upgrades

If you installed the Infinispan Operator on Red Hat OpenShift with the Automatic Approval upgrade policy, your cluster should already be running the latest versions (Infinispan Operator 1.1.1 with Infinispan 10.1.2.Final).

We would like to hear your opinions about the automated upgrade process, so get in touch if you have any issues or want to share feedback.

Get it, Use it, Ask us!

Try the simple tutorial for the Operator, which has been updated for this version.

You can report bugs, chat with us, or ask questions on StackOverflow.

Finally, a detailed list of issues and features for this version can be found here.

Posted by Vittorio Rigamonti on 2020-02-24
Tags: release operator

Thursday, 20 February 2020

Infinispan Server configuration

The new Infinispan Server introduced in version 10.0 is quite different from the WildFly-based one we had up to 9.x. One of the big differences is that the new server’s configuration is just an extension of the embedded configuration.

The XML snippet below shows the configuration used by the server "out-of-the-box":

<infinispan
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:config:10.1 https://infinispan.org/schemas/infinispan-config-10.1.xsd
                              urn:infinispan:server:10.1 https://infinispan.org/schemas/infinispan-server-10.1.xsd"
          xmlns="urn:infinispan:config:10.1"
          xmlns:server="urn:infinispan:server:10.1">
  
     <cache-container name="default" statistics="true"> (1)
        <transport cluster="${infinispan.cluster.name}" stack="${infinispan.cluster.stack:tcp}" node-name="${infinispan.node.name:}"/>
     </cache-container>
  
     <server xmlns="urn:infinispan:server:10.1"> (2)
        <interfaces>
           <interface name="public"> (3)
              <inet-address value="${infinispan.bind.address:127.0.0.1}"/>
           </interface>
        </interfaces>
  
        <socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}"> (4)
           <socket-binding name="default" port="${infinispan.bind.port:11222}"/>
           <socket-binding name="memcached" port="11221"/>
        </socket-bindings>
  
        <security> (5)
           <security-realms>
              <security-realm name="default">
                 <!-- Uncomment to enable TLS on the realm -->
                 <!-- server-identities>
                    <ssl>
                       <keystore path="application.keystore" relative-to="infinispan.server.config.path"
                                 keystore-password="password" alias="server" key-password="password"
                                 generate-self-signed-certificate-host="localhost"/>
                    </ssl>
                 </server-identities-->
                 <properties-realm groups-attribute="Roles">
                    <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/>
                    <group-properties path="groups.properties" relative-to="infinispan.server.config.path" />
                 </properties-realm>
              </security-realm>
           </security-realms>
        </security>
  
        <endpoints socket-binding="default" security-realm="default"> (6)
           <hotrod-connector name="hotrod"/>
           <rest-connector name="rest"/>
        </endpoints>
     </server>
  </infinispan>

Let’s have a look at the various elements, describing their purposes:

1 The cache-container element is a standard Infinispan cache manager configuration like you’d use in embedded deployments. You can leave it empty and create caches at runtime using the CLI, Console, Hot Rod or RESTful APIs (see the sketch after this list), or statically predefine your caches here.
2 The server element holds the server-specific configuration, which includes network, security and protocols.
3 The interface element declares named interfaces and associates them with specific addresses. The default public interface uses the loopback address 127.0.0.1 unless overridden with the -b switch or the infinispan.bind.address system property. Refer to the server interfaces documentation for a detailed list of all the ways an address can be selected.
4 The socket-bindings element associates addresses and ports with unique names that you can reference later when configuring the protocol endpoints. For convenience, a port offset can be applied to all port numbers to make it easier to start multiple servers on the same host. Use the -o switch or the infinispan.socket.binding.port-offset system property to change the offset.
5 The security element configures the server’s realms and identities. We will skip this for now, as it deserves its own dedicated blog post in the near future.
6 The endpoints element configures the various protocol servers. Unless overridden, all sub-protocols are aggregated into a single-port endpoint which, as its name suggests, listens on a single port and automatically detects the incoming protocol, delegating to the appropriate handler.
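
As an illustration of the runtime option in (1), here is a minimal Hot Rod client sketch that creates a cache through the administration API. The cache name, template and credentials below are placeholders, so adjust them to your own deployment:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CreateCacheAtRuntime {
   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      // Placeholder address and credentials: point these at your server and realm users
      builder.addServer().host("127.0.0.1").port(11222);
      builder.security().authentication().username("admin").password("password");
      try (RemoteCacheManager manager = new RemoteCacheManager(builder.build())) {
         // Create the cache from a built-in template if it does not exist yet
         RemoteCache<String, String> cache =
               manager.administration().getOrCreateCache("mycache", "org.infinispan.DIST_SYNC");
         cache.put("hello", "world");
      }
   }
}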

The rest-connector has a special role in the new server, since it now also handles administrative tasks. It is therefore required if you want to use the CLI or the Console. You may wish to have the protocols listen on different ports, as outlined in the configuration below:

<infinispan
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="urn:infinispan:config:10.1 https://infinispan.org/schemas/infinispan-config-10.1.xsd
                              urn:infinispan:server:10.1 https://infinispan.org/schemas/infinispan-server-10.1.xsd"
          xmlns="urn:infinispan:config:10.1"
          xmlns:server="urn:infinispan:server:10.1">
  
     <cache-container name="default" statistics="true">
        <transport cluster="${infinispan.cluster.name}" stack="${infinispan.cluster.stack:tcp}" node-name="${infinispan.node.name:}"/>
     </cache-container>
  
     <server xmlns="urn:infinispan:server:10.1">
        <interfaces>
           <interface name="public">
              <match-interface value="eth0"/>
           </interface>
           <interface name="admin">
              <loopback/>
           </interface>
        </interfaces>
  
        <socket-bindings default-interface="public" port-offset="${infinispan.socket.binding.port-offset:0}">
           <socket-binding name="public" port="${infinispan.bind.port:11222}"/>
           <socket-binding name="admin" interface="admin" port="${infinispan.bind.port:11222}"/>
        </socket-bindings>
  
        <security>
           <security-realms>
              <security-realm name="default">
                 <properties-realm groups-attribute="Roles">
                    <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/>
                    <group-properties path="groups.properties" relative-to="infinispan.server.config.path" />
                 </properties-realm>
              </security-realm>
           </security-realms>
        </security>
  
        <endpoints socket-binding="admin" security-realm="default">
           <hotrod-connector name="hotrod" socket-binding="public"/>
           <rest-connector name="rest"/>
        </endpoints>
     </server>
  </infinispan>

This creates two socket bindings, one named public bound to the eth0 interface and one named admin bound to the loopback interface. The server will therefore listen for Hot Rod traffic only on the public network and for HTTP/REST traffic on the admin network.

For more details on how to configure Infinispan Server, refer to our documentation.

In the next blog post we will have an in-depth look at security.

Posted by Tristan Tarrant on 2020-02-20
Tags: server

Monday, 10 February 2020

Infinispan Spring Boot Starter released with Spring Boot 2.2.4.RELEASE

Dear Infinispan and Spring Boot users,

We are pleased to announce the release of Infinispan Spring Boot Starter 2.1.8.Final and 2.2.0.Final.

  • 2.1.8.Final uses Infinispan 9.4.17.Final and Spring Boot 2.2.2.RELEASE

  • 2.2.0.Final uses Infinispan 10.1.1.Final and Spring Boot 2.2.2.RELEASE

Configuring Marshalling with Infinispan 10.x

Infinispan 10.x servers have some significant changes to marshalling that impact Spring Boot users.

The default marshaller for Infinispan 10.x is ProtoStream, which uses Protocol Buffers to provide extensible, language- and platform-neutral serialization.

Unfortunately ProtoStream does not currently work with Infinispan Spring Cache and Session support. As a result, Spring users in Remote Client/Server Mode must use the Java Serialization Marshaller and add classes to a Java serialization whitelist.

Add the following configuration properties:

infinispan.remote.marshaller=org.infinispan.commons.marshall.JavaSerializationMarshaller
infinispan.remote.java-serial-whitelist=org.infinispan.tutorial.simple.spring.remote.*

The infinispan.remote.java-serial-whitelist property specifies the classes, or packages, that Java serialization can marshall. Separate multiple class names with a comma (,).
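
If you build the Hot Rod client configuration programmatically rather than through properties, the equivalent setup looks roughly like the sketch below; the server address is a placeholder, so adapt it to your environment:

import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.commons.marshall.JavaSerializationMarshaller;

public class RemoteMarshallingConfig {
   public static Configuration remoteConfiguration() {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      // Placeholder server address
      builder.addServer().host("127.0.0.1").port(11222);
      // Switch from the default ProtoStream marshaller to Java serialization
      builder.marshaller(new JavaSerializationMarshaller());
      // Allow Java serialization to deserialize classes from these packages
      builder.addJavaSerialWhiteList("org.infinispan.tutorial.simple.spring.remote.*");
      return builder.build();
   }
}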

Note that JBoss Marshalling was the default marshaller in previous Infinispan versions. Spring users can still use JBoss Marshalling, but it is deprecated as of Infinispan 10.x.

Get it, Use it, Ask us!

You can find these releases in the Maven Central repository.

Please report any issues in our issue tracker and join the conversation in our Zulip Chat to help shape our next release.

Enjoy,

The Infinispan Team

Posted by Katia Aresti on 2020-02-10
Tags: release spring boot spring

Friday, 24 January 2020

Infinispan Operator 1.1.0 is out!

We’re pleased to announce version 1.1.0 of the Infinispan Operator for Kubernetes and OpenShift.

This release includes a bunch of very exciting features! Let’s dig into them:

Full Lifecycle

Infinispan Operator 1.1.0 is rated at the Full Lifecycle capability level, which means the Operator now provides advanced cluster management capabilities and functionality to handle demanding workloads.

One of the key new features in this release is graceful shutdown, which lets you bring clusters down safely to avoid data loss.

During cluster shutdown, caches can passivate in-memory entries to persistent storage along with the internal Infinispan state that maps which nodes own which entries. When you bring Infinispan clusters back, all your data is restored to memory.

Check out the Graceful Shutdown docs for more information.

Graceful shutdown also enables the Infinispan Operator to perform reliable upgrades.

When a new version of the Infinispan Operator starts, it checks for running Infinispan clusters that were created by an older Operator version.

If the Operator detects a cluster that requires upgrade, it invokes a graceful shutdown on the cluster and then brings it back with the new Infinispan version.

You can perform upgrades manually or automatically with the Operator Lifecycle Manager on OpenShift.

Note that Operators installed via the OperatorHub on OpenShift Container Platform are managed by the Operator Lifecycle Manager.

Cache vs DataGrid

This version of the Infinispan Operator delivers Cache and DataGrid services.

By default the Operator starts Infinispan clusters as Cache services, which provide a quick way to set up in-memory caching that stores data off-heap and keeps a single copy of each entry in the cluster.

DataGrid services, on the other hand, are suited to more advanced use cases where you control and define the configuration that you need.

Cross-Site Replication

The Infinispan Operator simplifies cross-site replication setup with DataGrid services so you can back up data between separate Kubernetes or OpenShift clusters.

All you need to do is specify which type of external Kubernetes service to expose, the list of all backup locations, access secrets, and the local site name.

Find out more at: Cross-Site Replication

Automatic TLS configuration

If you’re running on OpenShift and have a service that serves certificates, the Operator automatically asks for certificates and sets up TLS for your endpoint connections. Encrypted by default with zero effort!

Get it, Use it, Ask us!

Try the simple tutorial for the Operator, which has been updated for this version. The tutorial shows how to install the Operator manually, but it can also be installed via the Operator Hub on OpenShift.

You can report bugs, chat with us, or ask questions on StackOverflow.

Finally, a detailed list of issues and features for this version can be found here.

Posted by Galder Zamarreño on 2020-01-24
Tags: release operator

Monday, 23 December 2019

Infinispan 10.1.0.Final

Hi there,

we finish 2019 in style with the Final release of Infinispan 10.1, codenamed "Turia".

Server console

The highlight of this release is the new server console, which is now based on PatternFly 4 and React.js. We will soon have a blog post detailing the work that has been done and our future plans. In the meantime, here are a few screenshots:

Welcome Page
Console Caches
Console Cache Stats

Security

Many changes related to security have happened since 10.0:

  • Native SSL/TLS provided by WildFly OpenSSL. The server only ships with native libraries for Linux x86_64, but you can download natives for other platforms and architectures

  • Improved usability of the Hot Rod client configuration with better defaults

  • Full support for authorization for admin operations via the RESTful endpoint

  • Console authentication support

  • Kerberos authentication for both Hot Rod (GSSAPI, GS2) and HTTP/REST (SPNEGO)

  • Improved LDAP realm configuration with connection tuning and attribute references

  • Rewritten client/server security documentation, including examples of how to create certificate chains, connect to various LDAP directories and Keycloak, etc.

Server

  • A command-line switch to specify an alternate logging configuration file

  • Query and indexing operations/stats are now exposed over the RESTful API

  • Tasks and Scripting support

  • Support for binding the endpoints to 0.0.0.0 / ::0 (aka INADDR_ANY)

Non-blocking

More work has landed on the quest to completely remove blocking calls from our internals. The following have been made non-blocking:

  • State transfer

  • The size operation

  • Cache stream ops with primitive types

Additionally, caches now expose a reactive Publisher, which is intended as a fully non-blocking approach to distributed operations.

Query

  • The query components have been reorganized so that they are more modular.

Monitoring

  • The introduction of histogram and timer metrics.

  • The /metrics endpoint now includes base and vendor MicroProfile metrics

Stores

  • The REST cache store has been updated to use the v2 RESTful API.

Removals and deprecations

  • The old RESTful API (v1) has been partially reinstated until 11.0. Bulk ops are disabled.

  • The Infinispan Lucene Directory has been deprecated.

  • The memcached protocol server has been deprecated. If you were relying on this, come and talk to us about working on a binary protocol implementation.

Bug fixes, clean-ups and documentation

Over 160 issues fixed, including a lot of documentation updates. See the full list of changes and fixes.

Get it, Use it, Ask us!

Please download it, report bugs, chat with us, or ask questions on StackOverflow.

Posted by Tristan Tarrant on 2019-12-23
Tags: release