## 1. Infinispan Caches

Infinispan caches provide flexible, in-memory data stores that you can configure to suit use cases such as:

• boosting application performance with high-speed local caches.

• optimizing databases by decreasing the volume of write operations.

• providing resiliency and durability for consistent data across clusters.

### 1.1. Cache Interface

`Cache<K,​V>` is the central interface for Infinispan and extends `java.util.concurrent.ConcurrentMap`.

Cache entries are highly concurrent data structures in `key:value` format that support a wide and configurable range of data types, from simple strings to much more complex objects.
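Because `Cache` extends `ConcurrentMap`, basic usage reads like ordinary map code. The following is a minimal sketch; the cache name `example` and the local configuration are illustrative only:

``````import org.infinispan.Cache;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// Illustrative names only: "example" is just a locally defined cache.
DefaultCacheManager cacheManager = new DefaultCacheManager();
cacheManager.defineConfiguration("example", new ConfigurationBuilder().build());
Cache<String, String> cache = cacheManager.getCache("example");

// Cache extends ConcurrentMap, so the usual map operations apply.
cache.put("user:1", "Alice");
String name = cache.get("user:1");      // "Alice"
cache.putIfAbsent("user:1", "Bob");     // no effect, the key is already present
cache.remove("user:1");

cacheManager.stop();``````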

### 1.2. Cache Managers

Infinispan provides a `CacheManager` interface that lets you create, modify, and manage local or clustered caches. Cache Managers are the starting point for using Infinispan caches.

There are two `CacheManager` implementations:

`EmbeddedCacheManager`

Entry point for caches when running Infinispan inside the same Java Virtual Machine (JVM) as the client application, which is also known as Library Mode.

`RemoteCacheManager`

Entry point for caches when running Infinispan as a remote server in its own JVM. When it starts, `RemoteCacheManager` establishes a persistent TCP connection to a Hot Rod endpoint on an Infinispan server.

 Both embedded and remote `CacheManager` implementations share some methods and properties. However, semantic differences do exist between `EmbeddedCacheManager` and `RemoteCacheManager`.
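As a rough sketch of the difference, the following shows how each implementation is typically obtained; the file name `infinispan.xml`, the cache name `myCache`, and the server address are assumptions for illustration:

``````import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Embedded (Library Mode): the caches live in the same JVM as the application.
EmbeddedCacheManager embeddedManager = new DefaultCacheManager("infinispan.xml");
Cache<String, String> embeddedCache = embeddedManager.getCache("myCache");

// Remote: the client connects to a Hot Rod endpoint on an Infinispan server.
RemoteCacheManager remoteManager = new RemoteCacheManager(
      new ConfigurationBuilder().addServer().host("127.0.0.1").port(11222).build());
RemoteCache<String, String> remoteCache = remoteManager.getCache("myCache");``````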

### 1.3. Cache Containers

Cache containers declare one or more local or clustered caches that a Cache Manager controls.

Cache container declaration
``````<cache-container name="clustered" default-cache="default">
...
</cache-container>``````

### 1.4. Cache Modes

Infinispan Cache Managers can create and control multiple caches that use different modes. For example, you can use the same Cache Manager for local caches, distributed caches, and caches with invalidation mode (see the sketch after the mode descriptions below).
Local Caches

Infinispan runs as a single node and never replicates read or write operations on cache entries.

Clustered Caches

Infinispan instances running on the same network can automatically discover each other and form clusters to handle cache operations.

Invalidation Mode

Rather than replicating cache entries across the cluster, Infinispan evicts stale data from all nodes whenever operations modify entries in the cache. Infinispan performs local read operations only.

Replicated Caches

Infinispan replicates each cache entry on all nodes and performs local read operations only.

Distributed Caches

Infinispan stores cache entries across a subset of nodes and assigns entries to fixed owner nodes. Infinispan requests read operations from owner nodes to ensure it returns the correct value.

Scattered Caches

Infinispan stores cache entries across a subset of nodes. By default Infinispan assigns a primary owner and a backup owner to each cache entry in scattered caches. Infinispan assigns primary owners in the same way as with distributed caches, while backup owners are always the nodes that initiate the write operations. Infinispan requests read operations from at least one owner node to ensure it returns the correct value.
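The following is a minimal sketch of how one Cache Manager can control caches with different modes; the cache names are illustrative, and the clustered modes assume a running cluster:

``````import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// One clustered Cache Manager that controls caches with different modes.
DefaultCacheManager manager = new DefaultCacheManager(
      GlobalConfigurationBuilder.defaultClusteredBuilder().build());

manager.defineConfiguration("localCache",
      new ConfigurationBuilder().build());
manager.defineConfiguration("invalidationCache",
      new ConfigurationBuilder().clustering().cacheMode(CacheMode.INVALIDATION_SYNC).build());
manager.defineConfiguration("replicatedCache",
      new ConfigurationBuilder().clustering().cacheMode(CacheMode.REPL_SYNC).build());
manager.defineConfiguration("distributedCache",
      new ConfigurationBuilder().clustering().cacheMode(CacheMode.DIST_SYNC).build());``````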

#### 1.4.1. Cache Mode Comparison

The cache mode that you should choose depends on the qualities and guarantees you need for your data.

The following table summarizes the primary differences between cache modes:

|  | Simple | Local | Invalidation | Replicated | Distributed | Scattered |
|---|---|---|---|---|---|---|
| Clustered | No | No | Yes | Yes | Yes | Yes |
| Read performance | Highest (local) | High (local) | High (local) | High (local) | Medium (owners) | Medium (primary) |
| Write performance | Highest (local) | High (local) | Low (all nodes, no data) | Lowest (all nodes) | Medium (owner nodes) | Higher (single RPC) |
| Capacity | Single node | Single node | Single node | Smallest node | Cluster: (Σ node capacity) / owners | Cluster: (Σ node capacity) / 2 |
| Availability | Single node | Single node | Single node | All nodes | Owner nodes | Owner nodes |
| Features | No TX, persistence, indexing | All | All | All | All | No TX |

## 2. Local Caches

While Infinispan is particularly interesting in clustered mode, it also offers a very capable local mode. In this mode, it acts as a simple, in-memory data cache similar to a `ConcurrentHashMap`.

But why would one use a local cache rather than a map? Caches offer a lot of features over and above a simple map, including write-through and write-behind to a persistent store, eviction of entries to prevent running out of memory, and expiration.

Infinispan’s `Cache` interface extends JDK’s `ConcurrentMap` — making migration from a map to Infinispan trivial.

Infinispan caches also support transactions, either integrating with an existing transaction manager or running a separate one. Transactions in local caches offer a choice of when to lock (see the configuration sketch below):

• Pessimistic locking locks keys on a write operation or when the user calls `AdvancedCache.lock(keys)` explicitly.

• Optimistic locking only locks keys during the transaction commit; instead of locking eagerly, it throws a `WriteSkewCheckException` at commit time if another transaction modified the same keys after the current transaction read them.
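A minimal programmatic sketch of the two locking choices for a transactional local cache, assuming the embedded `ConfigurationBuilder` API:

``````import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;

// Optimistic: keys are locked only at commit time; conflicting writes fail at commit.
Configuration optimistic = new ConfigurationBuilder()
      .transaction()
         .transactionMode(TransactionMode.TRANSACTIONAL)
         .lockingMode(LockingMode.OPTIMISTIC)
      .build();

// Pessimistic: keys are locked eagerly on each write (or via AdvancedCache.lock(keys)).
Configuration pessimistic = new ConfigurationBuilder()
      .transaction()
         .transactionMode(TransactionMode.TRANSACTIONAL)
         .lockingMode(LockingMode.PESSIMISTIC)
      .build();``````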

### 2.1. Simple Caches

Traditional local caches use the same architecture as clustered caches, i.e. they use the interceptor stack. That way a lot of the implementation can be reused. However, if the advanced features are not needed and performance is more important, the interceptor stack can be stripped away and a simple cache can be used.

So, which features are stripped away? From the configuration perspective, a simple cache does not support:

• transactions and invocation batching

• persistence (cache stores and loaders)

• custom interceptors (there’s no interceptor stack!)

• indexing

• transcoding

• store as binary (which is hardly useful for local caches)

From the API perspective these features throw an exception:

• Distributed Executors Framework

So, what’s left?

• basic map-like API

• cache listeners (local ones)

• expiration

• eviction

• security

• JMX access

• statistics (though for max performance it is recommended to switch this off using statistics-available=false)

Declarative configuration
``````<local-cache name="mySimpleCache" simple-cache="true">
<!-- expiration, eviction, security... -->
</local-cache>``````
Programmatic configuration
``````// Obtain the embedded cache manager (library mode)
EmbeddedCacheManager cm = getCacheManager();
ConfigurationBuilder builder = new ConfigurationBuilder().simpleCache(true);
cm.defineConfiguration("mySimpleCache", builder.build());
Cache cache = cm.getCache("mySimpleCache");``````

A simple cache validates its configuration against the features it does not support. If you configure it to use, for example, transactions, configuration validation will throw an exception.

## 3. Clustered Caches

Clustered caches store data across multiple Infinispan nodes using JGroups technology as the transport layer to pass data across the network.

### 3.1. Invalidation Mode

You can use Infinispan in invalidation mode to optimize systems that perform high volumes of read operations. A good example is to use invalidation to prevent lots of database writes when state changes occur.

This cache mode only makes sense if you have another, permanent store for your data such as a database and are only using Infinispan as an optimization in a read-heavy system, to prevent hitting the database for every read. If a cache is configured for invalidation, every time data is changed in a cache, other caches in the cluster receive a message informing them that their data is now stale and should be removed from memory and from any local store.

Figure 1. Invalidation mode

Sometimes the application reads a value from the external store and wants to write it to the local cache, without removing it from the other nodes. To do this, it must call `Cache.putForExternalRead(key, value)` instead of `Cache.put(key, value)`.
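A sketch of the read-through pattern this enables; `loadFromDatabase(key)` is a hypothetical helper that stands in for the external store lookup:

``````String value = cache.get(key);
if (value == null) {
   // loadFromDatabase(key) is a hypothetical helper for the external store lookup.
   value = loadFromDatabase(key);
   // Cache the value locally without invalidating it on the other nodes,
   // which is what cache.put(key, value) would do.
   cache.putForExternalRead(key, value);
}``````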

Invalidation mode can be used with a shared cache store. A write operation updates the shared store and removes the stale values from the other nodes' memory. The benefit is twofold: network traffic is minimized because invalidation messages are very small compared to replicating the entire value, and other caches in the cluster look up modified data lazily, only when needed.

 Never use invalidation mode with a local store. The invalidation message will not remove entries in the local store, and some nodes will keep seeing the stale value.

An invalidation cache can also be configured with a special cache loader, `ClusterLoader`. When `ClusterLoader` is enabled, read operations that do not find the key on the local node will first request it from all the other nodes and store it in memory locally. In certain situations it will store stale values, so only use it if you have a high tolerance for stale values.

Invalidation mode can be synchronous or asynchronous. When synchronous, a write blocks until all nodes in the cluster have evicted the stale value. When asynchronous, the originator broadcasts invalidation messages but doesn’t wait for responses. That means other nodes still see the stale value for a while after the write completed on the originator.

Transactions can be used to batch the invalidation messages. Transactions acquire the key lock on the primary owner. To find more about how primary owners are assigned, please read the Key Ownership section.

• With pessimistic locking, each write triggers a lock message, which is broadcast to all the nodes. During transaction commit, the originator broadcasts a one-phase prepare message (optionally fire-and-forget) which invalidates all affected keys and releases the locks.

• With optimistic locking, the originator broadcasts a prepare message, a commit message, and an unlock message (optional). Either the one-phase prepare or the unlock message is fire-and-forget, and the last message always releases the locks.

### 3.2. Replicated Caches

Entries written to a replicated cache on any node will be replicated to all other nodes in the cluster, and can be retrieved locally from any node. Replicated mode provides a quick and easy way to share state across a cluster, however replication practically only performs well in small clusters (under 10 nodes), due to the number of messages needed for a write scaling linearly with the cluster size. Infinispan can be configured to use UDP multicast, which mitigates this problem to some degree.

Each key has a primary owner, which serializes data container updates in order to provide consistency. To find more about how primary owners are assigned, please read the Key Ownership section.

Figure 2. Replicated mode

Replicated mode can be synchronous or asynchronous.

• Synchronous replication blocks the caller (e.g. on a `cache.put(key, value)`) until the modifications have been replicated successfully to all the nodes in the cluster.

• Asynchronous replication performs replication in the background, and write operations return immediately. Asynchronous replication is not recommended, because communication errors, or errors that happen on remote nodes are not reported to the caller.

If transactions are enabled, write operations are not replicated through the primary owner.

• With pessimistic locking, each write triggers a lock message, which is broadcast to all the nodes. During transaction commit, the originator broadcasts a one-phase prepare message and an unlock message (optional). Either the one-phase prepare or the unlock message is fire-and-forget.

• With optimistic locking, the originator broadcasts a prepare message, a commit message, and an unlock message (optional). Again, either the one-phase prepare or the unlock message is fire-and-forget.

### 3.3. Distributed Caches

Distribution tries to keep a fixed number of copies of any entry in the cache, configured as `numOwners`. This allows the cache to scale linearly, storing more data as nodes are added to the cluster.

As nodes join and leave the cluster, there will be times when a key has more or less than `numOwners` copies. In particular, if `numOwners` nodes leave in quick succession, some entries will be lost, so we say that a distributed cache tolerates `numOwners - 1` node failures.

The number of copies represents a trade-off between performance and durability of data. The more copies you maintain, the lower performance will be, but also the lower the risk of losing data due to server or network failures. Regardless of how many copies are maintained, distribution still scales linearly, and this is key to Infinispan’s scalability.

The owners of a key are split into one primary owner, which coordinates writes to the key, and zero or more backup owners. To find more about how primary and backup owners are assigned, please read the Key Ownership section.

Figure 3. Distributed mode

A read operation will request the value from the primary owner, but if it doesn’t respond in a reasonable amount of time, we request the value from the backup owners as well. (The `infinispan.stagger.delay` system property, in milliseconds, controls the delay between requests.) A read operation may require `0` messages if the key is present in the local cache, or up to `2 * numOwners` messages if all the owners are slow.

A write operation will also result in at most `2 * numOwners` messages: one message from the originator to the primary owner, `numOwners - 1` messages from the primary to the backups, and the corresponding ACK messages.

 Cache topology changes may cause retries and additional messages, both for reads and for writes.

Just as replicated mode, distributed mode can also be synchronous or asynchronous. And as in replicated mode, asynchronous replication is not recommended because it can lose updates. In addition to losing updates, asynchronous distributed caches can also see a stale value when a thread writes to a key and then immediately reads the same key.

Transactional distributed caches use the same kinds of messages as transactional replicated caches, except lock/prepare/commit/unlock messages are sent only to the affected nodes (all the nodes that own at least one key affected by the transaction) instead of being broadcast to all the nodes in the cluster. As an optimization, if the transaction writes to a single key and the originator is the primary owner of the key, lock messages are not replicated.

Even with synchronous replication, distributed caches are not linearizable. (For transactional caches, we say they do not support serialization/snapshot isolation.) We can have one thread doing a single put:

``````cache.get(k) -> v1
cache.put(k, v2)
cache.get(k) -> v2``````

But another thread might see the values in a different order:

``````cache.get(k) -> v2
cache.get(k) -> v1``````

The reason is that read can return the value from any owner, depending on how fast the primary owner replies. The write is not atomic across all the owners — in fact, the primary commits the update only after it receives a confirmation from the backup. While the primary is waiting for the confirmation message from the backup, reads from the backup will see the new value, but reads from the primary will see the old one.

#### 3.3.2. Key Ownership

Distributed caches split entries into a fixed number of segments and assign each segment to a list of owner nodes. Replicated caches do the same, with the exception that every node is an owner.

The first node in the list of owners is the primary owner. The other nodes in the list are backup owners. When the cache topology changes, because a node joins or leaves the cluster, the segment ownership table is broadcast to every node. This allows nodes to locate keys without making multicast requests or maintaining metadata for each key.

The `numSegments` property configures the number of segments available. However, the number of segments cannot change unless the cluster is restarted.

Likewise the key-to-segment mapping cannot change. Keys must always map to the same segments regardless of cluster topology changes. It is important that the key-to-segment mapping evenly distributes the number of segments allocated to each node while minimizing the number of segments that must move when the cluster topology changes.

You can customize the key-to-segment mapping by configuring a KeyPartitioner or by using the Grouping API.

However, Infinispan provides the following implementations:

SyncConsistentHashFactory

Uses an algorithm based on consistent hashing. Selected by default when server hinting is disabled.

This implementation always assigns keys to the same nodes in every cache as long as the cluster is symmetric, that is, all caches run on all nodes. It has some drawbacks: the load distribution is slightly uneven, and it moves more segments than strictly necessary on a join or leave.

TopologyAwareSyncConsistentHashFactory

Similar to `SyncConsistentHashFactory`, but adapted for Server Hinting. Selected by default when server hinting is enabled.

DefaultConsistentHashFactory

Achieves a more even distribution than `SyncConsistentHashFactory`, but with one disadvantage. The order in which nodes join the cluster determines which nodes own which segments. As a result, keys might be assigned to different nodes in different caches.

Was the default from version 5.2 to version 8.1 with server hinting disabled.

TopologyAwareConsistentHashFactory

Similar to DefaultConsistentHashFactory, but adapted for Server Hinting.

Was the default from version 5.2 to version 8.1 with server hinting enabled.

ReplicatedConsistentHashFactory

Used internally to implement replicated caches. You should never explicitly select this algorithm in a distributed cache.

##### Capacity Factors

Capacity factors are another way to customize the mapping of segments to nodes. The nodes in a cluster are not always identical. If a node has 2x the memory of a "regular" node, configuring it with a `capacityFactor` of `2` tells Infinispan to allocate 2x segments to that node. The capacity factor can be any non-negative number, and the hashing algorithm will try to assign to each node a load weighted by its capacity factor (both as a primary owner and as a backup owner).

One interesting use case is nodes with a capacity factor of `0`. This could be useful when some nodes are too short-lived to be useful as data owners, but they can’t use HotRod (or other remote protocols) because they need transactions. With cross-site replication as well, the "site master" should only deal with forwarding commands between sites and shouldn’t handle user requests, so it makes sense to configure it with a capacity factor of `0`.

#### 3.3.3. Zero Capacity Node

You might need to configure a whole node where the capacity factor is `0` for every cache, user defined caches and internal caches. When defining a zero capacity node, the node won’t hold any data. This is how you declare a zero capacity node:

``<cache-container zero-capacity-node="true" />``
``new GlobalConfigurationBuilder().zeroCapacityNode(true);``

However, note that this will be true for distributed caches only. If you are using replicated caches, the node will still keep a copy of the value. Use only distributed caches to make the best use of this feature.

#### 3.3.4. Hashing Configuration

This is how you configure hashing declaratively, via XML:

``<distributed-cache name="distributedCache" owners="2" segments="100" capacity-factor="2" />``

And this is how you can configure it programmatically, in Java:

``````Configuration c = new ConfigurationBuilder()
.clustering()
.cacheMode(CacheMode.DIST_SYNC)
.hash()
.numOwners(2)
.numSegments(100)
.capacityFactor(2)
.build();``````

#### 3.3.5. Initial cluster size

Infinispan’s very dynamic nature in handling topology changes (i.e. nodes being added / removed at runtime) means that, normally, a node doesn’t wait for the presence of other nodes before starting. While this is very flexible, it might not be suitable for applications which require a specific number of nodes to join the cluster before caches are started. For this reason, you can specify how many nodes should have joined the cluster before proceeding with cache initialization. To do this, use the `initialClusterSize` and `initialClusterTimeout` transport properties. The declarative XML configuration:

``<transport initial-cluster-size="4" initial-cluster-timeout="30000" />``

The programmatic Java configuration:

``````GlobalConfiguration global = new GlobalConfigurationBuilder()
.transport()
.initialClusterSize(4)
.initialClusterTimeout(30000)
.build();``````

The above configuration will wait for 4 nodes to join the cluster before initialization. If the initial nodes do not appear within the specified timeout, the cache manager will fail to start.

#### 3.3.6. L1 Caching

When L1 is enabled, a node will keep the result of remote reads locally for a short period of time (configurable, 10 minutes by default), and repeated lookups will return the local L1 value instead of asking the owners again.

Figure 4. L1 caching

L1 caching is not free though. Enabling it comes at a cost: every entry update must broadcast an invalidation message to all the nodes. L1 entries can be evicted just like any other entry when the cache is configured with a maximum size. Enabling L1 improves performance for repeated reads of non-local keys, but it slows down writes and increases memory consumption to some degree.

Is L1 caching right for you? The correct approach is to benchmark your application with and without L1 enabled and see what works best for your access pattern.
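A minimal programmatic sketch of enabling L1 on a distributed cache; the lifespan value shown is illustrative:

``````import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

Configuration config = new ConfigurationBuilder()
      .clustering()
         .cacheMode(CacheMode.DIST_SYNC)
         .l1().enable().lifespan(600000)   // keep remote reads locally for 10 minutes
      .build();``````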

#### 3.3.7. Server Hinting

The following topology hints can be specified:

Machine

This is probably the most useful, when multiple JVM instances run on the same node, or even when multiple virtual machines run on the same physical machine.

Rack

In larger clusters, nodes located on the same rack are more likely to experience a hardware or network failure at the same time.

Site

Some clusters may have nodes in multiple physical locations for extra resilience. Note that Cross site replication is another alternative for clusters that need to span two or more data centres.

All of the above are optional. When provided, the distribution algorithm will try to spread the ownership of each segment across as many sites, racks, and machines (in this order) as possible.

##### Configuration

The hints are configured at transport level:

``````<transport
cluster="MyCluster"
machine="LinuxServer01"
rack="Rack01"
site="US-WestCoast" />``````

#### 3.3.8. Key affinity service

In a distributed cache, a key is allocated to a list of nodes with an opaque algorithm. There is no easy way to reverse the computation and generate a key that maps to a particular node. However, we can generate a sequence of (pseudo-)random keys, see what their primary owner is, and hand them out to the application when it needs a key mapping to a particular node.

##### API

The following code snippet shows how a reference to this service can be obtained and used.

``````// 1. Obtain a reference to a cache and the local node's address
Cache cache = ...
Address address = cache.getCacheManager().getAddress();

// 2. Create the affinity service
KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
      cache,
      new RndKeyGenerator(),
      Executors.newSingleThreadExecutor(),
      100);

// 3. Obtain a key for which the local node is the primary owner
Object localKey = keyAffinityService.getKeyForAddress(address);

// 4. Insert the key in the cache
cache.put(localKey, "yourValue");``````

The service is started at step 2: after this point it uses the supplied Executor to generate and queue keys. At step 3, we obtain a key from the service, and at step 4 we use it.

##### Lifecycle

`KeyAffinityService` extends `Lifecycle`, which allows stopping and (re)starting it:

``````public interface Lifecycle {
void start();
void stop();
}``````

The service is instantiated through `KeyAffinityServiceFactory`. All the factory methods have an `Executor` parameter, that is used for asynchronous key generation (so that it won’t happen in the caller’s thread). It is the user’s responsibility to handle the shutdown of this `Executor`.

The `KeyAffinityService`, once started, needs to be explicitly stopped. This stops the background key generation and releases other held resources.
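A sketch of these lifecycle responsibilities, assuming a started `cache` reference is available; the caller owns both the service and the `Executor`:

``````import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService executor = Executors.newSingleThreadExecutor();
KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
      cache, new RndKeyGenerator(), executor, 100);
try {
   // ... obtain keys with keyAffinityService.getKeyForAddress(...) ...
} finally {
   keyAffinityService.stop();   // stops background key generation and releases resources
   executor.shutdown();         // the caller is responsible for shutting down the Executor
}``````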

The only situation in which `KeyAffinityService` stops by itself is when the cache manager with which it was registered is shutdown.

##### Topology changes

When the cache topology changes (i.e. nodes join or leave the cluster), the ownership of the keys generated by the `KeyAffinityService` might change. The key affinity service keeps track of these topology changes and doesn't return keys that would currently map to a different node, but it won't do anything about keys generated earlier.

As such, applications should treat `KeyAffinityService` purely as an optimization, and they should not rely on the location of a generated key for correctness.

In particular, applications should not rely on keys generated by `KeyAffinityService` for the same address to always be located together. Collocation of keys is only provided by the Grouping API.

##### The Grouping API

Complementary to Key affinity service, the grouping API allows you to co-locate a group of entries on the same nodes, but without being able to select the actual nodes.

##### How does it work?

By default, the segment of a key is computed using the key’s `hashCode()`. If you use the grouping API, Infinispan will compute the segment of the group and use that as the segment of the key. See the Key Ownership section for more details on how segments are then mapped to nodes.

When the group API is in use, it is important that every node can still compute the owners of every key without contacting other nodes. For this reason, the group cannot be specified manually. The group can either be intrinsic to the entry (generated by the key class) or extrinsic (generated by an external function).

##### How do I use the grouping API?

First, you must enable groups. If you are configuring Infinispan programmatically, then call:

``````Configuration c = new ConfigurationBuilder()
.clustering().hash().groups().enabled()
.build();``````

Or, if you are using XML:

``````<distributed-cache>
<groups enabled="true"/>
</distributed-cache>``````

If you have control of the key class (you can alter the class definition, it’s not part of an unmodifiable library), then we recommend using an intrinsic group. The intrinsic group is specified by adding the `@Group` annotation to a method. Let’s take a look at an example:

``````class User {
   ...
   String office;
   ...

   public int hashCode() {
      // Defines the hash for the key, normally used to determine location
      ...
   }

   // Override the location by specifying a group
   // All keys in the same group end up with the same owners
   @Group
   public String getOffice() {
      return office;
   }
}``````
 The group method must return a `String`

If you don’t have control over the key class, or the determination of the group is an orthogonal concern to the key class, we recommend using an extrinsic group. An extrinsic group is specified by implementing the `Grouper` interface.

``````public interface Grouper<T> {
String computeGroup(T key, String group);

Class<T> getKeyType();
}``````

If multiple `Grouper` classes are configured for the same key type, all of them will be called, receiving the value computed by the previous one. If the key class also has a `@Group` annotation, the first `Grouper` will receive the group computed by the annotated method. This allows you even greater control over the group when using an intrinsic group. Let’s take a look at an example `Grouper` implementation:

``````public class KXGrouper implements Grouper<String> {

   // The pattern requires a String key, of length 2, where the first character is
   // "k" and the second character is a digit. We take that digit, and perform
   // modular arithmetic on it to assign it to group "0" or group "1".
   private static final Pattern kPattern = Pattern.compile("(^k)(\\d)$");

   public String computeGroup(String key, String group) {
      Matcher matcher = kPattern.matcher(key);
      return matcher.matches() ? String.valueOf(Integer.parseInt(matcher.group(2)) % 2) : null;
   }

   public Class<String> getKeyType() {
      return String.class;
   }
}``````

#### 6.1.1. Default JGroups Stacks

| File name | Stack name | Description |
|---|---|---|
| `default-jgroups-udp.xml` | `udp` | Uses UDP for transport and UDP multicast for discovery. Suitable for larger clusters (over 100 nodes) or if you are using replicated caches or invalidation mode. Minimizes the number of open sockets. |
| `default-jgroups-tcp.xml` | `tcp` | Uses TCP for transport and UDP multicast for discovery. Suitable for smaller clusters (under 100 nodes) only if you are using distributed caches because TCP is more efficient than UDP as a point-to-point protocol. |
| `default-jgroups-ec2.xml` | `ec2` | Uses TCP for transport and `S3_PING` for discovery. Suitable for Amazon EC2 nodes where UDP multicast is not available. |
| `default-jgroups-kubernetes.xml` | `kubernetes` | Uses TCP for transport and `DNS_PING` for discovery. Suitable for Kubernetes and Red Hat OpenShift nodes where UDP multicast is not always available. |
| `default-jgroups-google.xml` | `google` | Uses TCP for transport and `GOOGLE_PING2` for discovery. Suitable for Google Cloud Platform nodes where UDP multicast is not available. |
| `default-jgroups-azure.xml` | `azure` | Uses TCP for transport and `AZURE_PING` for discovery. Suitable for Microsoft Azure nodes where UDP multicast is not available. |

Next Steps

After you get up and running with the default JGroups stacks, use inheritance to combine, extend, remove, and replace stack properties. See Adjusting and Tuning JGroups Stacks.

#### 6.1.2. Default JGroups Stacks

Infinispan uses the following JGroups `TCP` and `UDP` stacks by default:

``````<stack name="udp">
<transport type="UDP" socket-binding="jgroups-udp"/>
<protocol type="PING"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2"/>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="UFC_NB"/>
<protocol type="MFC_NB"/>
<protocol type="FRAG3"/>
</stack>
<stack name="tcp">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="MPING" socket-binding="jgroups-mping"/>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD_ALL"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2">
<property name="use_mcast_xmit">false</property>
</protocol>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC_NB"/>
<protocol type="FRAG3"/>
</stack>``````
To improve performance, Infinispan uses some property values that differ from the JGroups defaults. Review the JGroups configuration for Infinispan in the following files:

• Infinispan servers: `jgroups-defaults.xml` and `infinispan-jgroups.xml`

• Embedded Infinispan: `default-jgroups-tcp.xml` and `default-jgroups-udp.xml`

The default `TCP` stack uses the `MPING` protocol for discovery, which uses `UDP` multicast.
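For embedded Infinispan, a common way to select one of these bundled stack files is through the transport's `configurationFile` property. A minimal sketch; the file name can be swapped for `default-jgroups-tcp.xml` or the other bundled files:

``````import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;

GlobalConfiguration global = new GlobalConfigurationBuilder()
      .transport()
         .defaultTransport()
         .addProperty("configurationFile", "default-jgroups-udp.xml")
      .build();``````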


### 6.2. Using Inline JGroups Stacks

Use inline JGroups stack definitions to customize cluster transport for optimal network performance.

 Use inheritance with inline JGroups stacks to tune and customize specific transport properties.
Procedure
• Embed your custom JGroups stack definitions in `infinispan.xml` as in the following example:

``````<infinispan>
<!-- jgroups is the parent for stack declarations. -->
<jgroups>
<!-- Add JGroups stacks for Infinispan clustering. -->
<stack name="prod">
<TCP bind_port="7800" port_range="30" recv_buf_size="20000000" send_buf_size="640000"/>
<MPING mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43366}"
num_discovery_runs="3"
ip_ttl="${jgroups.udp.ip_ttl:2}"/>
<MERGE3 />
<FD_SOCK />
<FD_ALL timeout="3000" interval="1000" timeout_check_interval="1000" />
<VERIFY_SUSPECT timeout="1000" />
<pbcast.NAKACK2 use_mcast_xmit="false" xmit_interval="100" xmit_table_num_rows="50"
xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" />
<UNICAST3 xmit_interval="100" xmit_table_num_rows="50" xmit_table_msgs_per_row="1024"
xmit_table_max_compaction_time="30000" />
<pbcast.STABLE stability_delay="200" desired_avg_gossip="2000" max_bytes="1M" />
<pbcast.GMS print_local_addr="false" join_timeout="${jgroups.join_timeout:2000}" />
<UFC_NB max_credits="3m" min_threshold="0.40" />
<MFC_NB max_credits="3m" min_threshold="0.40" />
<FRAG3 />
</stack>
</jgroups>
<cache-container default-cache="replicatedCache">
<!-- Add JGroups stacks to clustered caches. -->
<transport stack="prod" />
...
</cache-container>
</infinispan>``````

### 6.3. Adjusting and Tuning JGroups Stacks

Use inheritance to combine, extend, remove, and replace specific properties in the default JGroups stacks or custom configurations.

Procedure
1. Add a new JGroups stack declaration.

2. Name a parent stack with the `extends` attribute.

3. Modify transport properties with the `stack.combine` attribute.

For example, you want to evaluate using a Gossip router for cluster discovery using a `TCP` stack configuration named `prod`.

You can create a new stack named `gossip-prod` that inherits from `prod` and use `stack.combine` to change properties for the Gossip router configuration, as in the following example:

``````<jgroups>
...
<!-- "gossip-prod" inherits properties from "prod" -->
<stack name="gossip-prod" extends="prod">
<!-- Use TCPGOSSIP discovery instead of MPING. -->
<TCPGOSSIP initial_hosts="${jgroups.tunnel.gossip_router_hosts:localhost[12001]}"
stack.combine="REPLACE" stack.position="MPING" />
...
</stack>
</jgroups>``````

#### 6.5.1. System Properties for Default JGroups Stacks

`default-jgroups-udp.xml`

| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.udp.mcast_addr` | IP address for multicast, both discovery and inter-cluster communication. The IP address must be a valid "class D" address that is suitable for IP multicast. | `228.6.7.8` | Optional |
| `jgroups.udp.mcast_port` | Port for the multicast socket. | `46655` | Optional |
| `jgroups.udp.ip_ttl` | Specifies the time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. | `2` | Optional |

`default-jgroups-tcp.xml`

| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.tcp.address` |  | `127.0.0.1` | Optional |
| `jgroups.tcp.port` | Port for the TCP socket. | `7800` | Optional |
| `jgroups.udp.mcast_addr` | IP address for multicast discovery. The IP address must be a valid "class D" address that is suitable for IP multicast. | `228.6.7.8` | Optional |
| `jgroups.udp.mcast_port` | Port for the multicast socket. | `46655` | Optional |
| `jgroups.udp.ip_ttl` | Specifies the time-to-live (TTL) for IP multicast packets. The value defines the number of network hops a packet can make before it is dropped. | `2` | Optional |

`default-jgroups-ec2.xml`

| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.tcp.address` |  | `127.0.0.1` | Optional |
| `jgroups.tcp.port` | Port for the TCP socket. | `7800` | Optional |
| `jgroups.s3.access_key` | Amazon S3 access key for an S3 bucket. | No default value. | Optional |
| `jgroups.s3.secret_access_key` | Amazon S3 secret key used for an S3 bucket. | No default value. | Optional |
| `jgroups.s3.bucket` | Name of the Amazon S3 bucket. The name must already exist and be unique. | No default value. | Optional |

`default-jgroups-kubernetes.xml`

| System Property | Description | Default Value | Required/Optional |
|---|---|---|---|
| `jgroups.tcp.address` |  | `eth0` | Optional |
| `jgroups.tcp.port` | Port for the TCP socket. | `7800` | Optional |


### 6.6. Using Custom JChannels

Construct custom JGroups JChannels as in the following example:

``````GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
JChannel jchannel = new JChannel();
// Configure the jchannel to your needs.
JGroupsTransport transport = new JGroupsTransport(jchannel);
global.transport().transport(transport);
new DefaultCacheManager(global.build());``````
 Infinispan cannot use custom JChannels that are already connected.
Reference

JGroups JChannel

## 7. Configuring Cluster Discovery

Running Infinispan on hosted services requires using discovery mechanisms that are adapted to network constraints that individual cloud providers impose. For instance, Amazon EC2 does not allow UDP multicast.

Infinispan can use the following cloud discovery mechanisms:

• Generic discovery protocols (`TCPPING` and `TCPGOSSIP`)

• JGroups PING protocols (`KUBE_PING` and `DNS_PING`)

• Cloud-specific PING protocols

 Embedded Infinispan requires cloud provider dependencies.

### 7.1. TCPPING

`TCPPING` is a generic JGroups discovery mechanism that uses a static list of IP addresses for cluster members.

To use `TCPPING`, you must add the list of static IP addresses to the JGroups configuration file for each Infinispan node. However, the drawback to `TCPPING` is that it does not allow nodes to dynamically join Infinispan clusters.

TCPPING configuration example
``````<config>
<TCP bind_port="7800" />
<TCPPING timeout="3000"
initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}"/>
...
...
</config>``````

### 7.3. DNS_PING

JGroups `DNS_PING` queries DNS servers to discover Infinispan cluster members in Kubernetes environments such as OKD and Red Hat OpenShift.

DNS_PING configuration example
``````<stack name="dns-ping">
...
<dns.DNS_PING
dns_query="myservice.myproject.svc.cluster.local" />
...
</stack>``````

### 7.4. KUBE_PING

JGroups `KUBE_PING` uses the Kubernetes API to discover Infinispan cluster members in environments such as OKD and Red Hat OpenShift.

KUBE_PING configuration example
``````<config>
<TCP bind_addr="${match-interface:eth.*}" />
<kubernetes.KUBE_PING />
...
...
</config>``````

KUBE_PING configuration requirements

• Your `KUBE_PING` configuration must bind the JGroups stack to the `eth0` network interface. Docker containers use `eth0` for communication.

• `KUBE_PING` uses environment variables inside containers for configuration. The `KUBERNETES_NAMESPACE` environment variable must specify a valid namespace. You can either hardcode it or populate it via the Kubernetes Downward API.

• `KUBE_PING` requires additional privileges on Red Hat OpenShift. Assuming that `oc project -q` returns the current namespace and `default` is the service account name, you can run:

``$ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)``

### 7.5. NATIVE_S3_PING

On Amazon Web Services (AWS), use the `S3_PING` protocol for discovery.

You can configure JGroups to use shared storage to exchange the details of Infinispan nodes. `NATIVE_S3_PING` uses Amazon S3 as the shared storage and requires both Amazon S3 and EC2 subscriptions.

NATIVE_S3_PING configuration example
``````<config>
<TCP bind_port="7800" />
<org.jgroups.aws.s3.NATIVE_S3_PING
region_name="replace this with your region (e.g. eu-west-1)"
bucket_name="replace this with your bucket name"
bucket_prefix="replace this with a prefix to use for entries in the bucket (optional)" />
</config>``````
NATIVE_S3_PING dependencies for embedded Infinispan
``````<dependency>
<groupId>org.jgroups.aws.s3</groupId>
<artifactId>native-s3-ping</artifactId>
<!-- Replace ${version.jgroups.native_s3_ping} with the version of the native-s3-ping module you want to use. -->
<version>${version.jgroups.native_s3_ping}</version>
</dependency>``````

### 7.6. JDBC_PING

`JDBC_PING` uses JDBC connections to shared databases, such as Amazon RDS on EC2, to store information about Infinispan nodes.

Reference

JDBC_PING Wiki

### 7.7. AZURE_PING

On Microsoft Azure, use a generic discovery protocol or `AZURE_PING`, which uses shared Azure Blob Storage to store discovery information.

AZURE_PING configuration example
``````<azure.AZURE_PING
storage_account_name="replace this with your account name"
storage_access_key="replace this with your access key"
container="replace this with your container name"
/>``````
AZURE_PING dependencies for embedded Infinispan
``````<dependency>
<groupId>org.jgroups.azure</groupId>
<artifactId>jgroups-azure</artifactId>
<!-- Replace ${version.jgroups.azure} with the version of the jgroups-azure module you want to use. -->
<version>${version.jgroups.azure}</version>
</dependency>``````

On Google Compute Engine (GCE), use a generic discovery protocol or `GOOGLE2_PING`, which uses Google Cloud Storage (GCS) to store information about the cluster members.
``<org.jgroups.protocols.google.GOOGLE_PING2 location="${jgroups.google.bucket_name}" />`` GOOGLE2_PING dependencies for embedded Infinispan ``````<dependency> <groupId>org.jgroups.google</groupId> <artifactId>jgroups-google</artifactId> <!-- Replace${version.jgroups.google} with the