Friday, 05 June 2020

Cross Site Replication improvements

Infinispan introduced Cross Site Replication in version 5.2, and Infinispan 7 extended it to support state transfer. As the feature’s popularity has grown, Infinispan 11 brings two major improvements to Cross Site Replication. Let’s take a look at them.

Support for multiple site masters

Infinispan uses JGroups' RELAY2 protocol to enable inter-site communication. Each site has one or more Site Masters: nodes with a special role that are responsible for the communication between sites.

RELAY2 can use more than one Site Master per site, allowing inter-site requests to be load balanced. The new algorithm is able to take advantage of multiple Site Masters.

The attribute max_site_masters configures the number of Site Masters, and it defaults to 1. To take advantage of the new algorithm, increase the number of Site Masters in the RELAY2 configuration by setting max_site_masters to a value higher than 1. A value greater than or equal to the number of nodes enables the Site Master role on all nodes.

<relay.RELAY2 site="<LOCAL_SITE_NAME>" max_site_masters="<PUT_VALUE_HERE>"/>

More information about RELAY2 is available in JGroups' Manual.
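To see why multiple Site Masters help with load balancing, here is a minimal sketch of one plausible selection scheme, with hypothetical node names; this is illustrative only and not necessarily JGroups' actual algorithm. With max_site_masters=1 every inter-site request funnels through a single node, whereas picking a Site Master deterministically per originating node spreads the load while keeping per-sender ordering:

```java
import java.util.List;

// Illustrative sketch only: not JGroups' actual selection algorithm.
class SiteMasterPicker {
    // Deterministically pick a Site Master for a given originating node,
    // so requests from different senders spread across all masters.
    static String pick(List<String> siteMasters, String senderNode) {
        int idx = Math.floorMod(senderNode.hashCode(), siteMasters.size());
        return siteMasters.get(idx);
    }

    public static void main(String[] args) {
        // Hypothetical Site Masters of the LON site.
        List<String> masters = List.of("lon-1", "lon-2", "lon-3");
        // Each sender consistently maps to one of the three masters.
        System.out.println(pick(masters, "node-A"));
        System.out.println(pick(masters, "node-B"));
    }
}
```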

Conflict detection and resolution for Asynchronous Cross-Site Replication

Infinispan is able to detect conflicts in asynchronous mode by taking advantage of vector clocks. A conflict happens when 2 or more sites update the same key at the same time. Let’s look at an example between 2 sites (LON and NYC):

            LON       NYC

k1=(n/a)    0,0       0,0

k1=2        1,0  -->  1,0   k1=2

k1=3        1,1  <--  1,1   k1=3

k1=5        2,1       1,2   k1=8

                 -->  2,1 (conflict)
(conflict)  1,2  <--

k1=5        2,1  <->  2,1   k1=5

  • LON puts k1=2, with vector clock 1,0, and replicates it to NYC.

  • NYC puts k1=3, with vector clock 1,1, and replicates it to LON.

  • However, if LON puts k1=5 (with vector clock 2,1) and NYC puts k1=8 (with vector clock 1,2) at the same time, Infinispan detects a conflict, since neither vector clock is greater than the other.

Infinispan resolves conflicts by comparing the site names in lexicographical order: the site whose name sorts lower takes priority. In the example above, both LON and NYC end up with k1=5, since LON < NYC.

You can control this priority by prepending a number to the site name. For example, if you want updates from NYC to take priority over updates from LON, rename the sites to something like 1NYC and 2LON.
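The detection and resolution rules above can be sketched in a few lines of Java. This is an illustrative model, not Infinispan's implementation: a clock dominates another when it is greater or equal in every component and strictly greater in at least one; when neither dominates, the lexicographically lower site name wins.

```java
import java.util.Arrays;

// Illustrative model of the conflict rules; not Infinispan's actual code.
class VectorClockDemo {
    // True if clock a dominates b: a >= b in every component, a != b.
    static boolean dominates(int[] a, int[] b) {
        boolean strictlyGreater = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < b[i]) return false;
            if (a[i] > b[i]) strictlyGreater = true;
        }
        return strictlyGreater;
    }

    // A dominant clock wins outright; on conflict, the site whose name
    // sorts lower lexicographically takes priority.
    static String resolve(String siteA, int[] clockA, String valueA,
                          String siteB, int[] clockB, String valueB) {
        if (dominates(clockA, clockB)) return valueA;
        if (dominates(clockB, clockA)) return valueB;
        // Conflict: neither clock dominates the other.
        return siteA.compareTo(siteB) < 0 ? valueA : valueB;
    }

    public static void main(String[] args) {
        // LON wrote k1=5 at (2,1); NYC wrote k1=8 at (1,2): a conflict.
        String winner = resolve("LON", new int[]{2, 1}, "5",
                                "NYC", new int[]{1, 2}, "8");
        System.out.println(winner); // k1=5 wins since LON < NYC

        // Prefixing the names flips the priority: "1NYC" < "2LON".
        String prioritized = resolve("2LON", new int[]{2, 1}, "5",
                                     "1NYC", new int[]{1, 2}, "8");
        System.out.println(prioritized); // now NYC's k1=8 wins
    }
}
```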

For more information check the Infinispan Documentation.

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Pedro Ruivo on 2020-06-05
Tags: xsite cross site replication

Thursday, 04 June 2020

Secure server by default

The Infinispan server we introduced in 10.0 exposes a single port through which both Hot Rod and HTTP clients can connect.

While Infinispan has had very extensive security support since 7.0, the out-of-the-box default configuration did not enable authentication.

The default configuration of the Infinispan 11.0 server, instead, requires authentication. We have made several improvements to how authentication is configured and to the tooling we provide, to make the experience as smooth as possible.

Automatic authentication mechanism selection

Previously, when enabling authentication, you had to explicitly define which mechanisms were enabled per protocol, with all of the peculiarities specific to each one (SASL mechanisms for Hot Rod, HTTP authentication mechanisms for REST). Here is an example Infinispan 10.1 configuration that enables DIGEST authentication:

<endpoints socket-binding="default" security-realm="default">
   <hotrod-connector name="hotrod">
      <authentication>
         <sasl mechanisms="DIGEST-MD5" server-name="infinispan"/>
      </authentication>
   </hotrod-connector>
   <rest-connector name="rest">
      <authentication mechanisms="DIGEST"/>
   </rest-connector>
</endpoints>

In Infinispan 11.0, the mechanisms are automatically selected based on the capabilities of the security realm. Using the following configuration:

<endpoints socket-binding="default" security-realm="default">
   <hotrod-connector name="hotrod" />
   <rest-connector name="rest"/>
</endpoints>

together with a properties security realm enables DIGEST for HTTP and SCRAM-*, DIGEST-* and CRAM-MD5 for Hot Rod. BASIC/PLAIN is only implicitly enabled when the security realm also has a TLS/SSL identity.

The following tables summarize the mapping between realm type and implicitly enabled mechanisms.

Table 1. SASL Authentication Mechanisms (Hot Rod)

Security Realm                     SASL Authentication Mechanism
Property Realms and LDAP Realms    SCRAM-*, DIGEST-*, CRAM-MD5
Token Realms                       OAUTHBEARER
Trust Realms                       EXTERNAL
Kerberos Identities                GSSAPI, GS2-KRB5
SSL/TLS Identities                 PLAIN

Table 2. HTTP Authentication Mechanisms (REST)

Security Realm                     HTTP Authentication Mechanism
Property Realms and LDAP Realms    DIGEST
Token Realms                       BEARER_TOKEN
Trust Realms                       CLIENT_CERT
Kerberos Identities                SPNEGO
SSL/TLS Identities                 BASIC
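The two tables can be read as a simple lookup from realm type to the implicitly enabled mechanisms. The sketch below just mirrors the tables for quick reference; the realm-type keys are made-up shorthand, and the real selection logic lives inside the Infinispan server:

```java
import java.util.List;
import java.util.Map;

// A lookup mirroring Tables 1 and 2; purely illustrative shorthand keys.
class MechanismSelection {
    // Realm type -> implicitly enabled SASL mechanisms (Hot Rod).
    static final Map<String, List<String>> SASL = Map.of(
        "properties", List.of("SCRAM-*", "DIGEST-*", "CRAM-MD5"),
        "ldap",       List.of("SCRAM-*", "DIGEST-*", "CRAM-MD5"),
        "token",      List.of("OAUTHBEARER"),
        "trust",      List.of("EXTERNAL"),
        "kerberos",   List.of("GSSAPI", "GS2-KRB5"),
        "ssl",        List.of("PLAIN"));

    // Realm type -> implicitly enabled HTTP mechanism (REST).
    static final Map<String, String> HTTP = Map.of(
        "properties", "DIGEST",
        "ldap",       "DIGEST",
        "token",      "BEARER_TOKEN",
        "trust",      "CLIENT_CERT",
        "kerberos",   "SPNEGO",
        "ssl",        "BASIC");

    public static void main(String[] args) {
        // The default configuration ships a properties realm:
        System.out.println(SASL.get("properties")); // Hot Rod mechanisms
        System.out.println(HTTP.get("properties")); // HTTP mechanism
    }
}
```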

Automatic encryption

If the security realm has a TLS/SSL identity, the endpoint will automatically enable TLS for all protocols.

Encrypted properties security realm

The properties realm that is part of the default configuration has been greatly improved in Infinispan 11. The passwords are now stored in multiple encrypted formats in order to support the various DIGEST, SCRAM and PLAIN/BASIC mechanisms.

The user functionality that is now built into the CLI allows easy creation and manipulation of users, passwords and groups:

[disconnected]> user create --password=secret --groups=admin admin
[disconnected]> connect --username=admin --password=secret
[ispn-29934@cluster//containers/default]> user ls
[ "admin" ]
[ispn-29934@cluster//containers/default]> user describe admin
{ username: "admin", realm: "default", groups = [admin] }
[ispn-29934@cluster//containers/default]> user password admin
Set a password for the user: ******
Confirm the password for the user: ******
[ispn-29934@cluster//containers/default]>

Authorization: simplified

Authorization is another security aspect of Infinispan. In previous versions, setting up authorization was complicated by the need to list all of the required roles on each cache:

<infinispan>
   <cache-container name="default">
      <security>
         <authorization>
            <identity-role-mapper/>
            <role name="AdminRole" permissions="ALL"/>
            <role name="ReaderRole" permissions="READ"/>
            <role name="WriterRole" permissions="WRITE"/>
            <role name="SupervisorRole" permissions="READ WRITE EXEC BULK_READ"/>
         </authorization>
      </security>
      <distributed-cache name="secured">
         <security>
            <authorization roles="AdminRole ReaderRole WriterRole SupervisorRole"/>
         </security>
      </distributed-cache>
   </cache-container>
   ...
</infinispan>

With Infinispan 11 you can avoid specifying all the roles at the cache level: just enable authorization and all roles will implicitly apply. As you can see, the cache definition is much more concise:

<infinispan>
   <cache-container name="default">
      ...
      <distributed-cache name="secured">
         <security>
            <authorization/>
         </security>
      </distributed-cache>
   </cache-container>
   ...
</infinispan>

Conclusions

We hope that the changes we’ve made to improve security will make your servers more secure and easier to configure. For more information read the server security documentation.
Posted by Tristan Tarrant on 2020-06-04
Tags: server security

Saturday, 30 May 2020

Hot Rod per-cache configuration

Aside from being able to configure a Java Hot Rod client through a compact URI representation, Infinispan 11 brings some additional changes to remote cache configuration.

While remote caches did have some client-side configuration, it was never implemented cleanly, relying on multiple overloaded variations of the getCache() method, for example to obtain a transactional cache or to enable near caching.

Infinispan 11 now allows specifying per-cache configuration both through the API and through the declarative properties file.

Let’s look at a few examples.

ConfigurationBuilder builder = new ConfigurationBuilder()
    .uri("hotrod://127.0.0.1");
builder.remoteCache("closecache")
    .nearCacheMode(NearCacheMode.INVALIDATED)
    .nearCacheMaxEntries(10000);
builder.remoteCache("txcache")
    .transactionMode(TransactionMode.NON_XA);
RemoteCacheManager manager = new RemoteCacheManager(builder.build());

In the above code snippet, we enable near-caching for the cache closecache and we enable NON_XA transactions on the cache txcache.

The equivalent hotrod-client.properties file:

infinispan.client.hotrod.uri=hotrod://127.0.0.1
infinispan.client.hotrod.cache.closecache.near_cache.mode=INVALIDATED
infinispan.client.hotrod.cache.closecache.near_cache.max_entries=10000
infinispan.client.hotrod.cache.txcache.transaction.transaction_mode=NON_XA

Automatic cache creation

A neat feature implemented as part of per-cache configuration is the ability to automatically create a cache on the server on first use, if it is missing, by supplying either an existing template or a full-blown configuration.

ConfigurationBuilder builder = new ConfigurationBuilder()
    .uri("hotrod://127.0.0.1");
builder.remoteCache("mydistcache")
    .templateName("org.infinispan.DIST_SYNC");
RemoteCacheManager manager = new RemoteCacheManager(builder.build());
RemoteCache<String, String> cache = manager.getCache("mydistcache");
...

The above example using a properties file would look like:

infinispan.client.hotrod.uri=hotrod://127.0.0.1
infinispan.client.hotrod.cache.mydistcache.template=org.infinispan.DIST_SYNC
Posted by Tristan Tarrant on 2020-05-30
Tags: hot rod configuration

Thursday, 28 May 2020

CLI enhancements

One of the key aspects of our new server architecture is the management API exposed through the single port.

While I’m sure there will be those of you who like to write scripts with plenty of curl/wget magic, and those who prefer the comfort of our new web console, the Infinispan CLI offers a powerful tool which combines the power of the former with the usability of the latter.

During the Infinispan 11 development cycle, the CLI has received numerous enhancements. Let’s look at some of them !

User management

When using the built-in properties-based security realm, you had to use the user-tool script to manage users, passwords and groups. That functionality has now been built into the CLI:

[disconnected]> user create --password=secret --groups=admin john
[disconnected]> connect --username=john --password=secret
[infinispan-29934@cluster//containers/default]>

Remote logging configuration

You can now modify the server logging configuration from the CLI. For example, to enable TRACE logging for the org.jgroups category, use the following:

[infinispan-29934@cluster//containers/default]> logging set --level=TRACE org.jgroups

Note: logging configuration changes are volatile; they will be lost when the node restarts.

Server report

To help with debugging issues, the server now implements an aggregate log which includes information such as a thread dump, memory configuration, open sockets/files, etc.

[bespin-29934@cluster//containers/default]> server report
Downloaded report 'infinispan-bespin-29934-20200522114559-report.tar.gz'

Note: this feature currently only works on Linux/Unix systems.

Real CLI mode

It is now possible to invoke all CLI commands directly from the command-line, without having to resort to interactive mode or a batch. For example:

cli.sh user create --password=secret --groups=admin john

Native CLI

The CLI can now be built as a native executable, courtesy of GraalVM's native-image tool. We will soon be shipping binaries/images of this, so look out for an announcement.

Posted by Tristan Tarrant on 2020-05-28
Tags: cli server management administration logging

Tuesday, 26 May 2020

Hot Rod URI

Traditionally, the Java Hot Rod client has always been configured either via a properties file or through a programmatic builder API.

While both approaches offer a great amount of flexibility, they always felt a bit too complex for straightforward scenarios.

Starting with Infinispan 11 you will be able to specify the connection to an Infinispan Server via a URI, just like you’d connect to a database via a JDBC driver URL.

The Hot Rod URI allows you to specify the addresses of the server cluster, authentication parameters and any other property in a simple compact String format.

The URI specification is:

hotrod[s]://[username:password@]host[:port][,host[:port]...][?property=value[&property=value...]]

  • the protocol can be either hotrod (plain, unencrypted) or hotrods (TLS/SSL, encrypted)

  • if username and password are specified, they will be used to authenticate with the server

  • one or more addresses. If a port is not specified, the default 11222 will be used

  • zero or more properties, without the infinispan.client.hotrod prefix, through which you can configure all other aspects such as connection pooling, authentication mechanisms, near caching, etc.

Here are some examples:

hotrod://localhost

simple connection to a server running on localhost using the default port

hotrod://joe:secret@infinispan-host-1:11222,infinispan-host-2:11222

authenticated connection to infinispan-host-1 and infinispan-host-2 with explicit port

hotrods://infinispan-host-1?socket_timeout=1000&connect_timeout=2000

TLS/SSL connection to infinispan-host-1 using the default port and with custom connection and socket timeouts
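The decomposition described above can be sketched in plain Java. This is a rough illustration, not the client's actual parser: the scheme selects TLS, the user-info part supplies credentials, each host falls back to the default port 11222, and query properties map onto infinispan.client.hotrod.* settings.

```java
import java.util.ArrayList;
import java.util.List;

// Rough sketch of Hot Rod URI decomposition; not the client's real parser.
class HotRodUriSketch {
    // The hotrods scheme means TLS/SSL encryption.
    static boolean tls(String uri) {
        return uri.startsWith("hotrods://");
    }

    // Returns host:port pairs, applying the default port 11222.
    static List<String> servers(String uri) {
        String rest = uri.substring(uri.indexOf("://") + 3);
        int q = rest.indexOf('?');
        if (q >= 0) rest = rest.substring(0, q);      // drop ?property=value...
        int at = rest.indexOf('@');
        if (at >= 0) rest = rest.substring(at + 1);   // drop username:password@
        List<String> out = new ArrayList<>();
        for (String hostPort : rest.split(",")) {
            out.add(hostPort.contains(":") ? hostPort : hostPort + ":11222");
        }
        return out;
    }

    public static void main(String[] args) {
        String uri = "hotrods://joe:secret@infinispan-host-1,infinispan-host-2:11322?socket_timeout=1000";
        System.out.println(tls(uri));     // true
        System.out.println(servers(uri)); // [infinispan-host-1:11222, infinispan-host-2:11322]
    }
}
```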

The URI format can also be used as a starting point in your usual properties file or API configuration and further enriched using the traditional methods:

infinispan.client.hotrod.uri=hotrod://joe:secret@infinispan-host-1:11222,infinispan-host-2:11222
infinispan.client.hotrod.connect_timeout=100
infinispan.client.hotrod.socket_timeout=100
infinispan.client.hotrod.tcp_keep_alive=true

ConfigurationBuilder builder = new ConfigurationBuilder()
    .uri("hotrod://joe:secret@infinispan-host-1:11222,infinispan-host-2:11222")
    .socketTimeout(100)
    .connectionTimeout(100)
    .tcpKeepAlive(true);

We hope this makes configuration simpler.

Happy coding!

Posted by Tristan Tarrant on 2020-05-26
Tags: documentation
