The Infinispan Operator provides operational intelligence and reduces management complexity for deploying Infinispan on Kubernetes and Red Hat OpenShift.
Infinispan Operator 2.0 corresponds to Infinispan 11.0.
1. Installing Infinispan Operator
Install Infinispan Operator into a Kubernetes namespace to create and manage Infinispan clusters.
1.1. Installing Infinispan Operator on Red Hat OpenShift
Create subscriptions to Infinispan Operator on OpenShift so you can install different Infinispan versions and receive automatic updates.
Automatic updates apply to Infinispan Operator first and then to each Infinispan node. Infinispan Operator updates clusters one node at a time, gracefully shutting down each node and then bringing it back online with the updated version before moving on to the next node.
- Access to OperatorHub running on OpenShift. Some OpenShift environments, such as OpenShift Container Platform, can require administrator credentials.
- Have an OpenShift project for Infinispan Operator if you plan to install it into a specific namespace.
- Log in to the OpenShift Web Console.
- Navigate to OperatorHub.
- Find and select Infinispan Operator.
- Select Install and continue to Create Operator Subscription.
- Specify options for your subscription.
- Installation Mode: You can install Infinispan Operator into a Specific namespace or All namespaces.
- Update Channel: Subscribe to updates for Infinispan Operator versions.
- Approval Strategies: When new Infinispan versions become available, you can install updates manually or let Infinispan Operator install them automatically.
- Select Subscribe to install Infinispan Operator.
- Navigate to Installed Operators to verify the Infinispan Operator installation.
1.2. Installing Infinispan Operator from OperatorHub.io
Use the command line to install Infinispan Operator from OperatorHub.io.
- OKD 3.11 or later.
- Kubernetes 1.11 or later.
- Have administrator access on the Kubernetes cluster.
- Have a kubectl or oc client.
- Navigate to the Infinispan Operator entry on OperatorHub.io.
- Follow the instructions to install Infinispan Operator into your Kubernetes cluster.
1.3. Building and Installing Infinispan Operator Manually
Manually build and install Infinispan Operator from the GitHub repository.
- Follow the appropriate instructions in the Infinispan Operator README.
2. Getting Started with Infinispan Operator
Infinispan Operator lets you create, configure, and manage Infinispan clusters.
- Install Infinispan Operator.
- Have an oc or a kubectl client.
2.1. Infinispan Custom Resource (CR)
Infinispan Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Infinispan clusters as complex units on Kubernetes.
Infinispan Operator watches for Infinispan Custom Resources (CR) that you use to instantiate and configure Infinispan clusters and manage Kubernetes resources, such as StatefulSets and Services. In this way, the Infinispan CR is your primary interface to Infinispan on Kubernetes.
The minimal Infinispan CR is as follows:
apiVersion: infinispan.org/v1 (1)
kind: Infinispan (2)
metadata:
name: example-infinispan (3)
spec:
replicas: 2 (4)
1 | Declares the Infinispan API version. |
2 | Declares the Infinispan CR. |
3 | Names the Infinispan cluster. |
4 | Specifies the number of nodes in the Infinispan cluster. |
2.2. Creating Infinispan Clusters
Use Infinispan Operator to create clusters of two or more Infinispan nodes.
- Specify the number of Infinispan nodes in the cluster with spec.replicas in your Infinispan CR.
For example, create a cr_minimal.yaml file as follows:
$ cat > cr_minimal.yaml<<EOF
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
EOF
- Apply your Infinispan CR.
$ kubectl apply -f cr_minimal.yaml
- Watch Infinispan Operator create the Infinispan nodes.
$ kubectl get pods -w
NAME                    READY  STATUS             RESTARTS  AGE
example-infinispan-1    0/1    ContainerCreating  0         4s
example-infinispan-2    0/1    ContainerCreating  0         4s
example-infinispan-3    0/1    ContainerCreating  0         5s
infinispan-operator-0   1/1    Running            0         3m
example-infinispan-3    1/1    Running            0         8s
example-infinispan-2    1/1    Running            0         8s
example-infinispan-1    1/1    Running            0         8s
Try changing the value of replicas: and watching Infinispan Operator scale the cluster up or down.
2.3. Verifying Infinispan Clusters
Review log messages to ensure that Infinispan nodes receive clustered views.
- Do either of the following:
- Retrieve the cluster view from logs.
$ kubectl logs example-infinispan-0 | grep ISPN000094
INFO  [org.infinispan.CLUSTER] (MSC service thread 1-2) \
ISPN000094: Received new cluster view for channel infinispan: \
[example-infinispan-0|0] (1) [example-infinispan-0]
INFO  [org.infinispan.CLUSTER] (jgroups-3,example-infinispan-0) \
ISPN000094: Received new cluster view for channel infinispan: \
[example-infinispan-0|1] (2) [example-infinispan-0, example-infinispan-1]
- Retrieve the Infinispan CR for Infinispan Operator.
$ kubectl get infinispan -o yaml
The response indicates that Infinispan pods have received clustered views:
conditions:
- message: 'View: [example-infinispan-0, example-infinispan-1]'
  status: "True"
  type: wellFormed
3. Setting Up Infinispan Services
Use Infinispan Operator to create clusters of either Cache Service or Data Grid Service nodes.
3.1. Service Types
Services are stateful applications, based on the Infinispan server image, that provide flexible and robust in-memory data storage.
Use Cache Service if you want a volatile, low-latency data store with minimal configuration. Cache Service nodes:
- Automatically scale to meet capacity when data storage demands go up or down.
- Synchronously distribute data to ensure consistency.
- Replicate each entry in the cache across the cluster.
- Store cache entries off-heap and use eviction for JVM efficiency.
- Ensure data consistency with a default partition handling configuration.
Because Cache Service nodes are volatile, you lose all data when you apply changes to the cluster with the Infinispan CR.
Use Data Grid Service if you want to:
- Back up data across global clusters with cross-site replication.
- Create caches with any valid configuration.
- Add file-based cache stores to save data in the persistent volume.
- Use Infinispan search and other advanced capabilities.
3.2. Creating Cache Service Nodes
By default, Infinispan Operator creates Infinispan clusters with Cache Service nodes.
- Create an Infinispan CR.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  service:
    type: Cache (1)
1 | Creates Cache Service nodes. This is the default for the Infinispan CR. |
- Apply your Infinispan CR to create the cluster.
3.2.1. Configuring Automatic Scaling
If you create clusters with Cache Service nodes, Infinispan Operator can automatically scale nodes up or down based on memory usage for the default cache.
Infinispan Operator monitors default caches on Cache Service nodes. As you add data to the cache, memory usage increases. When it detects that the cluster needs additional capacity, Infinispan Operator creates new nodes rather than evicting entries. Likewise, if it detects that memory usage is below a certain threshold, Infinispan Operator shuts down nodes.
Automatic scaling works with the default cache only. If you plan to add other caches to your cluster, you should not include the autoscale field in your Infinispan CR.
- Add the spec.autoscale resource to your Infinispan CR to enable automatic scaling.
- Configure memory usage thresholds and number of nodes for your cluster with the autoscale field.
spec:
  ...
  service:
    type: Cache
  autoscale:
    maxMemUsagePercent: 70 (1)
    maxReplicas: 5 (2)
    minMemUsagePercent: 30 (3)
    minReplicas: 2 (4)
1 | Configures the maximum threshold, as a percentage, for memory usage on each node. When Infinispan Operator detects that any node in the cluster reaches the threshold, it creates a new node if possible. If Infinispan Operator cannot create a new node then it performs eviction when memory usage reaches 100 percent. |
2 | Defines the maximum number of nodes for the cluster. |
3 | Configures the minimum threshold, as a percentage, for memory usage across the cluster. When Infinispan Operator detects that memory usage falls below the minimum, it shuts down nodes. |
4 | Defines the minimum number of nodes for the cluster. |
- Apply the changes.
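The threshold behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not the operator's actual implementation; the parameter names simply mirror the autoscale fields in the CR.

```python
# Illustrative model of the autoscaling thresholds; not operator code.
def scale_decision(mem_usage_percent: float, replicas: int,
                   max_mem: int = 70, min_mem: int = 30,
                   max_replicas: int = 5, min_replicas: int = 2) -> str:
    """Return 'up', 'down', or 'none' for one evaluation cycle."""
    if mem_usage_percent >= max_mem and replicas < max_replicas:
        return "up"    # create a node instead of evicting entries
    if mem_usage_percent <= min_mem and replicas > min_replicas:
        return "down"  # shut down a node to reclaim capacity
    return "none"      # includes the at-maxReplicas case, where eviction applies

print(scale_decision(75, 3))  # up
print(scale_decision(25, 3))  # down
print(scale_decision(90, 5))  # none: already at maxReplicas
```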
3.2.2. Configuring the Number of Owners
The number of owners controls how many copies of each cache entry are replicated across your Infinispan cluster. The default for Cache Service nodes is two, which duplicates each entry to prevent data loss.
- Specify the number of owners with the spec.service.replicationFactor resource in your Infinispan CR as follows:
spec:
  ...
  service:
    type: Cache
    replicationFactor: 3 (1)
1 | Configures three replicas for each cache entry. |
- Apply the changes.
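As a rough back-of-the-envelope illustration (not operator code), the number of owners trades usable capacity for fault tolerance:

```python
# Illustrative arithmetic for the number of owners; values are examples.
def usable_capacity_gi(nodes: int, mem_per_node_gi: float, owners: int) -> float:
    """Each entry is stored on `owners` nodes, so distinct-data capacity shrinks."""
    return nodes * mem_per_node_gi / owners

def tolerated_node_losses(owners: int) -> int:
    """Roughly, copies that can be lost before an entry disappears entirely."""
    return owners - 1

print(usable_capacity_gi(4, 1, 2))  # 2.0 Gi of distinct entries
print(tolerated_node_losses(3))     # 2
```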
3.2.3. Cache Service Resources
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
# Names the cluster.
name: example-infinispan
spec:
# Specifies the number of nodes in the cluster.
replicas: 4
service:
# Configures the service type as Cache.
type: Cache
# Sets the number of replicas for each entry across the cluster.
replicationFactor: 2
# Enables and configures automatic scaling.
autoscale:
maxMemUsagePercent: 70
maxReplicas: 5
minMemUsagePercent: 30
minReplicas: 2
# Configures authentication and encryption.
security:
# Defines a secret with custom credentials.
endpointSecretName: endpoint-identities
# Adds a custom TLS certificate to encrypt client connections.
endpointEncryption:
type: Secret
certSecretName: tls-secret
# Sets container resources.
container:
extraJvmOpts: "-XX:NativeMemoryTracking=summary"
cpu: "2000m"
memory: 1Gi
# Configures logging levels.
logging:
categories:
org.infinispan: trace
org.jgroups: trace
# Configures how the cluster is exposed on the network.
expose:
type: LoadBalancer
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: infinispan-pod
clusterName: example-infinispan
infinispan_cr: example-infinispan
topologyKey: "kubernetes.io/hostname"
3.3. Creating Data Grid Service Nodes
To use custom cache definitions along with Infinispan capabilities such as cross-site replication, create clusters of Data Grid Service nodes.
- Specify DataGrid as the value for spec.service.type in your Infinispan CR.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  service:
    type: DataGrid
You cannot change the spec.service.type field after you create nodes. To change the service type, you must delete the existing nodes and create new ones.
- Configure nodes with any other Data Grid Service resources.
- Apply your Infinispan CR to create the cluster.
3.3.1. Data Grid Service Resources
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
# Names the cluster.
name: example-infinispan
spec:
# Specifies the number of nodes in the cluster.
replicas: 6
service:
# Configures the service type as Data Grid.
type: DataGrid
# Configures storage resources.
container:
storage: 2Gi
storageClassName: my-storage-class
# Configures cross-site replication.
sites:
local:
name: azure
expose:
type: LoadBalancer
locations:
- name: azure
url: openshift://api.azure.host:6443
secretName: azure-token
- name: aws
url: openshift://api.aws.host:6443
secretName: aws-token
# Configures authentication and encryption.
security:
# Defines a secret with custom credentials.
endpointSecretName: endpoint-identities
# Adds a custom TLS certificate to encrypt client connections.
endpointEncryption:
type: Secret
certSecretName: tls-secret
# Sets container resources.
container:
extraJvmOpts: "-XX:NativeMemoryTracking=summary"
cpu: "1000m"
memory: 1Gi
# Configures logging levels.
logging:
categories:
org.infinispan: debug
org.jgroups: debug
org.jgroups.protocols.TCP: error
org.jgroups.protocols.relay.RELAY2: fatal
# Configures how the cluster is exposed on the network.
expose:
type: LoadBalancer
# Configures affinity and anti-affinity strategies.
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: infinispan-pod
clusterName: example-infinispan
infinispan_cr: example-infinispan
topologyKey: "kubernetes.io/hostname"
3.4. Adding Labels to Infinispan Resources
Attach key/value labels to pods and services that Infinispan Operator creates and manages. These labels help you identify relationships between objects to better organize and monitor Infinispan resources.
- Open your Infinispan CR for editing.
- Add any labels that you want Infinispan Operator to attach to resources with metadata.annotations.
- Add values for your labels with metadata.labels.
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  annotations:
    # Add labels that you want to attach to services.
    infinispan.org/targetLabels: svc-label1, svc-label2
    # Add labels that you want to attach to pods.
    infinispan.org/podTargetLabels: pod-label1, pod-label2
  labels:
    # Add values for your labels.
    svc-label1: svc-value1
    svc-label2: svc-value2
    pod-label1: pod-value1
    pod-label2: pod-value2
    # The operator does not attach these labels to resources.
    my-label: my-value
    environment: development
- Apply your Infinispan CR.
3.4.1. Global Labels for Infinispan Operator
Global labels are automatically propagated to all Infinispan pods and services.
You can add and modify global labels for Infinispan Operator with the env field in the Infinispan Operator deployment YAML.
# Defines global labels for services.
- name: INFINISPAN_OPERATOR_TARGET_LABELS
value: |
{"svc-label1":"svc-value1",
"svc-label2":"svc-value2"}
# Defines global labels for pods.
- name: INFINISPAN_OPERATOR_POD_TARGET_LABELS
value: |
{"pod-label1":"pod-value1",
"pod-label2":"pod-value2"}
4. Adjusting Container Specifications
You can allocate CPU and memory resources, specify JVM options, and configure storage for Infinispan nodes.
4.1. JVM, CPU, and Memory Resources
spec:
...
container:
extraJvmOpts: "-XX:NativeMemoryTracking=summary" (1)
cpu: "1000m" (2)
memory: 1Gi (3)
1 | Specifies JVM options. |
2 | Allocates host CPU resources to nodes, measured in CPU units. |
3 | Allocates host memory resources to nodes, measured in bytes. |
When Infinispan Operator creates Infinispan clusters, it uses spec.container.cpu and spec.container.memory to:
- Ensure that Kubernetes has sufficient capacity to run the Infinispan node. By default Infinispan Operator requests 512Mi of memory and 0.5 cpu from the Kubernetes scheduler.
- Constrain node resource usage. Infinispan Operator sets the values of cpu and memory as resource limits.
By default, Infinispan Operator does not log garbage collection (GC) messages. You can optionally add the following JVM options to direct GC messages to stdout:
extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags"
4.2. Storage Resources
By default, Infinispan Operator allocates 1Gi for storage for both Cache Service and Data Grid Service nodes. You can configure storage resources for Data Grid Service nodes but not Cache Service nodes.
spec:
...
service:
type: DataGrid
container:
storage: 2Gi (1)
storageClassName: my-storage-class (2)
1 | Configures the storage size for Data Grid Service nodes. |
2 | Specifies the name of a StorageClass object to use for the persistent volume claim. If you include this field, you must specify an existing storage class as the value. If you do not include this field, the persistent volume claim uses the storage class that has the storageclass.kubernetes.io/is-default-class annotation set to true . |
Infinispan Operator mounts persistent volumes at:
/opt/infinispan/server/data
5. Stopping and Starting Infinispan Clusters
Stop and start Infinispan clusters with Infinispan Operator.
Both Cache Service and Data Grid Service store permanent cache definitions in persistent volumes so they are still available after cluster restarts.
Data Grid Service nodes can write all cache entries to persistent storage during cluster shutdown if you add a file-based cache store.
5.1. Shutting Down Infinispan Clusters
Shutting down Cache Service nodes removes all data in the cache. For Data Grid Service nodes, you should configure the storage size to ensure that the persistent volume can hold all your data.
If the available container storage is less than the amount of memory available to Data Grid Service nodes, Infinispan writes the following exception to logs and data loss occurs during shutdown:
WARNING: persistent volume size is less than memory size. Graceful shutdown may not work.
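One way to avoid this warning is to size Data Grid Service storage at least as large as the container memory allocation, for example (the 2Gi values here are illustrative):

```yaml
spec:
  container:
    memory: 2Gi        # memory available to each node
  service:
    type: DataGrid
    container:
      storage: 2Gi     # persistent volume sized to hold all in-memory data
```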
- Set the value of replicas to 0 and apply the changes.
spec:
  replicas: 0
5.2. Restarting Infinispan Clusters
Complete the following procedure to restart Infinispan clusters after shutdown.
For Data Grid Service nodes, you must restart clusters with the same number of nodes as before shutdown. For example, you shut down a cluster of 6 nodes. When you restart that cluster, you must specify 6 as the value for spec.replicas. This allows Infinispan to restore the distribution of data across the cluster. When all nodes in the cluster are running, you can then add or remove nodes.
You can find the correct number of nodes for Infinispan clusters as follows:
$ kubectl get infinispan example-infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'
- Set the value of spec.replicas to the appropriate number of nodes for your cluster, for example:
spec:
  replicas: 6
6. Configuring Network Access to Infinispan
Expose Infinispan clusters so you can access Infinispan Console, the Infinispan command line interface (CLI), REST API, and Hot Rod endpoint.
6.1. Getting the Service for Internal Connections
By default, Infinispan Operator creates a service that provides access to Infinispan clusters from clients running on Kubernetes.
This internal service has the same name as your Infinispan cluster, for example:
metadata:
name: example-infinispan
- Check that the internal service is available as follows:
$ kubectl get services
NAME                TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)
example-infinispan  ClusterIP  192.0.2.0   <none>       11222/TCP
6.2. Exposing Infinispan Through Load Balancers
Use a load balancer service to make Infinispan clusters available to clients running outside Kubernetes.
To access Infinispan with unencrypted Hot Rod client connections, you must use a load balancer service.
- Include spec.expose in your Infinispan CR.
- Specify LoadBalancer as the service type with spec.expose.type.
spec:
  ...
  expose:
    type: LoadBalancer (1)
    nodePort: 30000 (2)
1 | Exposes Infinispan on the network through a load balancer service on port 11222. |
2 | Optionally defines a node port to which the load balancer service forwards traffic. |
- Apply the changes.
- Verify that the -external service is available.
$ kubectl get services | grep external
NAME                         TYPE          CLUSTER-IP  EXTERNAL-IP   PORT(S)
example-infinispan-external  LoadBalancer  192.0.2.24  hostname.com  11222/TCP
6.3. Exposing Infinispan Through Node Ports
Use a node port service to expose Infinispan clusters on the network.
- Include spec.expose in your Infinispan CR.
- Specify NodePort as the service type with spec.expose.type.
spec:
  ...
  expose:
    type: NodePort (1)
    nodePort: 30000 (2)
1 | Exposes Infinispan on the network through a node port service. |
2 | Defines the port where Infinispan is exposed. If you do not define a port, the platform selects one. |
- Apply the changes.
- Verify that the -external service is available.
$ kubectl get services | grep external
NAME                         TYPE      CLUSTER-IP  EXTERNAL-IP  PORT(S)
example-infinispan-external  NodePort  192.0.2.24  <none>       11222:30000/TCP
6.4. Exposing Infinispan Through Routes
Use a Kubernetes Ingress or an OpenShift Route with passthrough encryption to make Infinispan clusters available on the network.
- Include spec.expose in your Infinispan CR.
- Specify Route as the service type with spec.expose.type.
- Optionally add a hostname with spec.expose.host.
spec:
  ...
  expose:
    type: Route (1)
    host: www.example.org (2)
1 | Exposes Infinispan on the network through a Kubernetes Ingress or OpenShift Route. |
2 | Optionally specifies the hostname where Infinispan is exposed. |
- Apply the changes.
- Verify that the route is available.
$ kubectl get ingress
NAME                CLASS   HOSTS  ADDRESS  PORTS  AGE
example-infinispan  <none>  *               443    73s
When you create a route, it exposes a port on the network that accepts client connections and redirects traffic to Infinispan services that listen on port 11222.
The port where the route is available depends on whether you use encryption or not.
Port | Description |
---|---|
80 | Encryption is disabled. |
443 | Encryption is enabled. |
7. Securing Infinispan Connections
Secure client connections with authentication and encryption to prevent network intrusion and protect your data.
7.1. Configuring Authentication
Application users need credentials to access Infinispan clusters. You can use default, generated credentials or add your own.
7.1.1. Default Credentials
Infinispan Operator generates base64-encoded default credentials stored in an authentication secret named example-infinispan-generated-secret.
Username | Description |
---|---|
developer | Default application user. |
operator | Internal user that interacts with Infinispan clusters. |
7.1.2. Retrieving Credentials
Get credentials from authentication secrets to access Infinispan clusters.
- Retrieve credentials from authentication secrets, as in the following example:
$ kubectl get secret example-infinispan-generated-secret
- Base64-decode credentials.
$ kubectl get secret example-infinispan-generated-secret \
  -o jsonpath="{.data.identities\.yaml}" | base64 --decode

credentials:
- username: developer
  password: dIRs5cAAsHIeeRIL
- username: operator
  password: uMBo9CmEdEduYk24
7.1.3. Adding Custom Credentials
Configure access to Infinispan cluster endpoints with custom credentials.
- Create an identities.yaml file with the credentials that you want to add.
credentials:
- username: testuser
  password: testpassword
- username: operator
  password: supersecretoperatorpassword
identities.yaml must include the operator user.
- Create an authentication secret from identities.yaml.
$ kubectl create secret generic --from-file=identities.yaml connect-secret
- Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.
spec:
  ...
  security:
    endpointSecretName: connect-secret (1)
1 | Specifies the name of the authentication secret that contains your credentials. |
Modifying spec.security.endpointSecretName triggers a cluster restart. You can watch the Infinispan cluster as Infinispan Operator applies changes:
$ kubectl get pods -w
7.2. Configuring Encryption
Encrypt connections between clients and Infinispan nodes with Red Hat OpenShift service certificates or custom TLS certificates.
7.2.1. Encryption with Red Hat OpenShift Service Certificates
Infinispan Operator automatically generates TLS certificates that are signed by the Red Hat OpenShift service CA. Infinispan Operator then stores the certificates and keys in a secret so you can retrieve them and use them with remote clients.
If the Red Hat OpenShift service CA is available, Infinispan Operator adds the following spec.security.endpointEncryption configuration to the Infinispan CR:
spec:
...
security:
endpointEncryption:
type: Service
certServiceName: service.beta.openshift.io (1)
certSecretName: example-infinispan-cert-secret (2)
1 | Specifies the Red Hat OpenShift Service. |
2 | Names the secret that contains a service certificate, tls.crt , and key, tls.key , in PEM format. If you do not specify a name, Infinispan Operator uses <cluster_name>-cert-secret . |
Service certificates use the internal DNS name of the Infinispan cluster as the common name (CN), for example: example-infinispan.mynamespace.svc.
For this reason, service certificates can be fully trusted only inside OpenShift. If you want to encrypt connections with clients running outside OpenShift, you should use custom TLS certificates. Service certificates are valid for one year and are automatically replaced before they expire.
7.2.2. Retrieving TLS Certificates
Get TLS certificates from encryption secrets to create client trust stores.
- Retrieve tls.crt from encryption secrets as follows:
$ kubectl get secret example-infinispan-cert-secret \
-o jsonpath='{.data.tls\.crt}' | base64 --decode > tls.crt
7.2.3. Disabling Encryption
You can disable encryption so clients do not need TLS certificates to establish connections with Infinispan.
Infinispan does not recommend disabling encryption in production environments where endpoints are exposed outside the Kubernetes cluster.
- Set None as the value for the spec.security.endpointEncryption.type field in your Infinispan CR and then apply the changes.
spec:
  ...
  security:
    endpointEncryption:
      type: None (1)
1 | Disables encryption for Infinispan endpoints. |
7.2.4. Using Custom TLS Certificates
Use custom PKCS12 keystore or TLS certificate/key pairs to encrypt connections between clients and Infinispan clusters.
- Create either a keystore or certificate secret.
- Add the encryption secret to your OpenShift namespace, for example:
$ kubectl apply -f tls_secret.yaml
- Specify the encryption secret with spec.security.endpointEncryption in your Infinispan CR and then apply the changes.
spec:
  ...
  security:
    endpointEncryption: (1)
      type: Secret (2)
      certSecretName: tls-secret (3)
1 | Encrypts traffic to and from Infinispan endpoints. |
2 | Configures Infinispan to use secrets that contain encryption certificates. |
3 | Names the encryption secret. |
Certificate Secrets
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
type: Opaque
data:
tls.key: "LS0tLS1CRUdJTiBQUk ..." (1)
tls.crt: "LS0tLS1CRUdJTiBDRVl ..." (2)
1 | Adds a base64-encoded TLS key. |
2 | Adds a base64-encoded TLS certificate. |
Keystore Secrets
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
type: Opaque
stringData:
alias: server (1)
password: password (2)
data:
keystore.p12: "MIIKDgIBAzCCCdQGCSqGSIb3DQEHA..." (3)
1 | Specifies an alias for the keystore. |
2 | Specifies a password for the keystore. |
3 | Adds a base64-encoded keystore. |
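The data values in both secrets are base64-encoded file contents. A minimal Python sketch of producing such a value follows; the keystore bytes here are simulated, not a real keystore:

```python
import base64

# Simulated keystore content; in practice, read keystore.p12 (or tls.crt/tls.key).
keystore_bytes = b"\x30\x82example-keystore-bytes"

# The string that goes into the secret's `data` field.
value = base64.b64encode(keystore_bytes).decode("ascii")
print(value)

# Round trip: Kubernetes decodes the value back to the original bytes.
assert base64.b64decode(value) == keystore_bytes
```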
8. Configuring Cross-Site Replication
Set up global Infinispan clusters to back up data across sites.
8.1. Cross-Site Replication with Infinispan Operator
If you have Infinispan clusters running in separate locations, use Infinispan Operator to connect them so you can back up data across sites.
For example, in the following illustration, Infinispan Operator manages an Infinispan cluster at a data center in New York City, NYC. At another data center in London, LON, Infinispan Operator also manages an Infinispan cluster.
Infinispan Operator uses a Kubernetes API to establish a secure connection between the OpenShift Container Platform clusters in NYC and LON. Infinispan Operator then creates a cross-site replication service so Infinispan clusters can back up data across locations.
Each Infinispan cluster has one site master node that coordinates all backup requests. Infinispan Operator identifies the site master node so that all traffic through the cross-site replication service goes to the site master.
If the current site master node goes offline then a new node becomes site master. Infinispan Operator automatically finds the new site master node and updates the cross-site replication service to forward backup requests to it.
8.2. Applying Cluster Roles for Cross-Site Replication
During OLM installation, Infinispan Operator sets up cluster roles required for cross-site replication. If you install Infinispan Operator manually, you must complete this procedure to set up those cluster roles.
- Install clusterrole.yaml and clusterrole_binding.yaml as follows:
$ kubectl apply -f deploy/clusterrole.yaml
$ kubectl apply -f deploy/clusterrole_binding.yaml
8.3. Creating Minikube Site Access Secrets
If you run Infinispan Operator in Minikube, you should create secrets that contain the files that allow different instances of Minikube to authenticate with each other.
- Create secrets on each site that contain ca.crt, client.crt, and client.key from your Minikube installation.
For example, do the following on LON:
kubectl create secret generic site-a-secret \
  --from-file=certificate-authority=/opt/minikube/.minikube/ca.crt \
  --from-file=client-certificate=/opt/minikube/.minikube/client.crt \
  --from-file=client-key=/opt/minikube/.minikube/client.key
8.4. Creating Service Account Tokens
Generate service account tokens on each OpenShift cluster that acts as a backup location. Clusters use these tokens to authenticate with each other so Infinispan Operator can create a cross-site replication service.
- Log in to an OpenShift cluster.
- Create a service account.
For example, create a service account at LON:
$ kubectl create sa lon
serviceaccount/lon created
- Add the view role to the service account with the following command:
$ oc policy add-role-to-user view system:serviceaccount:<namespace>:lon
- Repeat the preceding steps on your other OpenShift clusters.
8.5. Exchanging Service Account Tokens
After you create service account tokens on your OpenShift clusters, you add them to secrets on each backup location. For example, at LON you add the service account token for NYC. At NYC you add the service account token for LON.
- Get tokens from each service account.
Use the following command or get the token from the OpenShift Web Console:
$ oc sa get-token lon
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
- Log in to an OpenShift cluster.
- Add the service account token for a backup location with the following command:
$ oc create secret generic <token-name> --from-literal=token=<token>
For example, log in to the OpenShift cluster at NYC and create a lon-token secret as follows:
$ oc create secret generic lon-token --from-literal=token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
- Repeat the preceding steps on your other OpenShift clusters.
8.6. Configuring Infinispan Clusters for Cross-Site Replication
Configure Infinispan clusters as backup locations so that they can communicate over a dedicated JGroups transport channel for replicating data.
- Create secrets that contain service account tokens for each backup location.
- Ensure that all clusters are Data Grid Service nodes.
- Ensure that Kubernetes project names match.
To perform cross-site replication, Infinispan Operator requires Infinispan clusters to have the same name and run in matching namespaces.
For example, you create a cluster at LON in a project named xsite-cluster. The cluster at NYC must also run in a project named xsite-cluster.
- Create an Infinispan CR for each Infinispan cluster.
- Specify a matching name for each Infinispan cluster with metadata.name.
- Specify the name of the local site with spec.service.sites.local.name.
- Set the expose service type for the local site with spec.service.sites.local.expose.type.
- Provide the name, URL, and secret for each Infinispan cluster that acts as a backup location with spec.service.sites.locations.
The following are example Infinispan CR definitions for LON and NYC:
LON:
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 3
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        expose:
          type: LoadBalancer
      locations:
      - name: LON
        url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443
        secretName: lon-token
      - name: NYC
        url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443
        secretName: nyc-token
NYC:
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  service:
    type: DataGrid
    sites:
      local:
        name: NYC
        expose:
          type: LoadBalancer
      locations:
      - name: NYC
        url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443
        secretName: nyc-token
      - name: LON
        url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443
        secretName: lon-token
Adjust logging levels for cross-site replication as follows:
... logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: fatal
The preceding configuration decreases logging for JGroups TCP and RELAY2 protocols to reduce excessive messages about cluster backup operations, which can result in a large number of log files that use container storage.
-
Configure nodes with any other Data Grid Service resources.
-
Apply the `Infinispan` CRs.
-
Check node logs to verify that Infinispan clusters form a cross-site view, for example:

$ kubectl logs example-infinispan-0 | grep x-site

INFO  [org.infinispan.XSITE] (jgroups-5,example-infinispan-0-<id>) ISPN000439: Received new x-site view: [NYC]
INFO  [org.infinispan.XSITE] (jgroups-7,example-infinispan-0-<id>) ISPN000439: Received new x-site view: [NYC, LON]
If your clusters have formed a cross-site view, you can start adding backup locations to caches.
8.6.1. Cross-Site Replication Resources
spec:
...
service:
type: DataGrid (1)
sites:
local:
name: LON (2)
expose:
type: LoadBalancer (3)
locations: (4)
- name: LON (5)
url: openshift://api.site-a.devcluster.openshift.com:6443 (6)
secretName: lon-token (7)
- name: NYC
url: openshift://api.site-b.devcluster.openshift.com:6443
secretName: nyc-token
logging:
categories:
org.jgroups.protocols.TCP: error (8)
org.jgroups.protocols.relay.RELAY2: fatal (9)
1 | Specifies Data Grid Service. Infinispan supports cross-site replication with Data Grid Service clusters only. |
2 | Names the local site for an Infinispan cluster. |
3 | Defines the service that exposes the local site externally. |
4 | Provides connection information for all backup locations. |
5 | Specifies a backup location that matches `spec.service.sites.local.name`. |
6 | Specifies the URL of a backup location. |
7 | Specifies the access secret for a site. |
8 | Logs error messages for the JGroups TCP protocol. |
9 | Logs fatal messages for the JGroups RELAY2 protocol. |
9. Creating Caches with Infinispan Operator
Use `Cache` CRs to add cache configuration with Infinispan Operator and control how Infinispan stores your data.
When using `Cache` CRs, the following rules apply:
-
`Cache` CRs apply to Data Grid Service nodes only.
-
You can create a single cache for each `Cache` CR.
-
If your `Cache` CR contains both a template and an XML configuration, Infinispan Operator uses the template.
-
You cannot edit caches. If you edit a cache in the OpenShift Web Console, the change appears in the user interface but does not take effect on the Infinispan cluster. To change a cache configuration, you must first delete the cache through the console or CLI and then re-create it.
-
Deleting `Cache` CRs in the OpenShift Web Console does not remove caches from Infinispan clusters. You must delete caches through the console or CLI.
9.1. Adding Credentials for Infinispan Operator
Infinispan Operator must authenticate with Data Grid Service clusters to create caches. You add credentials to a secret so that Infinispan Operator can access your cluster when creating caches.
The following procedure explains how to add credentials to a new secret. If you already have a custom secret that contains credentials, you can use that instead of creating a new one.
-
Define a Secret object that provides valid user credentials for accessing Data Grid Service clusters in a `stringData` map.
For example, create a basic-auth.yaml file that provides credentials for the developer user as follows:

apiVersion: v1
stringData:
  username: developer (1)
  password: G8ZdJvSaY3lOOwfM (2)
kind: Secret
metadata:
  name: basic-auth (3)
type: Opaque

1 | Names a user that can create caches. |
2 | Specifies the password that corresponds to the user. |
3 | Specifies a name for the secret. |
-
Create a secret from the file, as in the following example:
$ kubectl apply -f basic-auth.yaml
9.1.1. Using Custom Credentials Secrets
Infinispan Operator requires that credentials exist as values for the `username` and `password` keys in a secret. If you have a custom secret that contains Infinispan credentials but uses different key names, you can override those names in your `Cache` CR.
For example, you have a secret named "my-credentials" that holds a list of Infinispan users and their passwords as follows:
stringData:
app_user1: spock
app_user1_pw: G8ZdJvSaY3lOOwfM
app_user2: jim
app_user2_pw: zTzz2gVyyF4JsYsH
-
In your `Cache` CR, override custom key names with `username` and `password` as follows:
spec:
adminAuth:
username:
key: app_user1 (1)
name: my-credentials (2)
password:
key: app_user1_pw (3)
name: my-credentials
1 | Overrides the app_user1 key name with username . |
2 | Specifies the name of your custom credentials secret. |
3 | Overrides the app_user1_pw key name with password . |
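Putting the override together with the cache definition fields used later in this chapter, a complete `Cache` CR might look like the following sketch. The cluster name, cache name, and template are illustrative assumptions, not values this guide prescribes:

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycachedefinition          # hypothetical Cache CR name
spec:
  adminAuth:
    username:
      key: app_user1               # custom key that holds the username
      name: my-credentials
    password:
      key: app_user1_pw            # custom key that holds the password
      name: my-credentials
  clusterName: example-infinispan  # assumed target cluster
  name: mycache                    # assumed cache name
  templateName: org.infinispan.DIST_SYNC
```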
9.2. Creating Infinispan Caches from XML
Complete the following steps to create caches on Data Grid Service clusters using valid infinispan.xml cache definitions.
-
Create a secret that contains valid user credentials for accessing Infinispan clusters.
-
Create a `Cache` CR that contains the XML cache definition you want to create.

apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycachedefinition (1)
spec:
  adminAuth: (2)
    secretName: basic-auth
  clusterName: example-infinispan (3)
  name: mycache (4)
  template: <infinispan><cache-container><distributed-cache name="mycache" mode="SYNC"><persistence><file-store/></persistence></distributed-cache></cache-container></infinispan> (5)

1 | Names the `Cache` CR. |
2 | Specifies a secret that provides credentials with `username` and `password` keys or an override for custom credentials secrets. |
3 | Specifies the name of the target Infinispan cluster where you want Infinispan Operator to create the cache. |
4 | Names the cache on the Infinispan cluster. |
5 | Specifies the XML cache definition to create the cache. Note that the name attribute is ignored. Only `spec.name` applies to the resulting cache. |
-
Apply the `Cache` CR, for example:

$ kubectl apply -f mycache.yaml
cache.infinispan.org/mycachedefinition created
9.3. Creating Infinispan Caches from Templates
Complete the following steps to create caches on Data Grid Service clusters using cache configuration templates.
-
Create a secret that contains valid user credentials for accessing Infinispan clusters.
-
Identify the cache configuration template you want to use for your cache. You can find a list of available configuration templates in Infinispan Console.
-
Create a `Cache` CR that specifies the name of the template you want to use.
For example, the following CR creates a cache named "mycache" that uses the org.infinispan.DIST_SYNC cache configuration template:

apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycachedefinition (1)
spec:
  adminAuth: (2)
    secretName: basic-auth
  clusterName: example-infinispan (3)
  name: mycache (4)
  templateName: org.infinispan.DIST_SYNC (5)

1 | Names the `Cache` CR. |
2 | Specifies a secret that provides credentials with `username` and `password` keys or an override for custom credentials secrets. |
3 | Specifies the name of the target Infinispan cluster where you want Infinispan Operator to create the cache. |
4 | Names the Infinispan cache instance. |
5 | Specifies the `org.infinispan.DIST_SYNC` cache configuration template to create the cache. |
-
Apply the `Cache` CR, for example:

$ kubectl apply -f mycache.yaml
cache.infinispan.org/mycachedefinition created
9.4. Adding Backup Locations to Caches
When you configure Infinispan clusters to perform cross-site replication, you can add backup locations to your cache configurations.
-
Create cache configurations with identical names for each site.
Cache configurations at each site can use different cache modes and backup strategies. Infinispan replicates data based on cache names.
-
Configure backup locations to go offline automatically with the `take-offline` element.
-
Set the amount of time, in milliseconds, before backup locations go offline with the `min-wait` attribute.
-
-
Define any other valid cache configuration.
-
Add backup locations to the named cache on all sites in the global cluster.
For example, if you add LON as a backup for NYC you should add NYC as a backup for LON.
The following configuration examples show backup locations for caches:
-
NYC
<infinispan>
  <cache-container>
    <distributed-cache name="customers">
      <encoding media-type="application/x-protostream"/>
      <backups>
        <backup site="LON" strategy="SYNC">
          <take-offline min-wait="120000"/>
        </backup>
      </backups>
    </distributed-cache>
  </cache-container>
</infinispan>
-
LON
<infinispan>
  <cache-container>
    <replicated-cache name="customers">
      <encoding media-type="application/x-protostream"/>
      <backups>
        <backup site="NYC" strategy="ASYNC">
          <take-offline min-wait="120000"/>
        </backup>
      </backups>
    </replicated-cache>
  </cache-container>
</infinispan>
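If you manage caches with `Cache` CRs rather than raw XML, the same backup configuration can be embedded in the CR's template field. The following sketch reuses names from the examples above and assumes it is applied at the NYC site; treat the CR and cluster names as illustrative:

```yaml
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: customers-cache            # hypothetical Cache CR name
spec:
  adminAuth:
    secretName: basic-auth
  clusterName: example-infinispan  # assumed cluster name at the NYC site
  name: customers
  template: <infinispan><cache-container><distributed-cache name="customers"><encoding media-type="application/x-protostream"/><backups><backup site="LON" strategy="SYNC"><take-offline min-wait="120000"/></backup></backups></distributed-cache></cache-container></infinispan>
```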
9.4.1. Performance Considerations with Taking Backup Locations Offline
Backup locations can automatically go offline when remote sites become unavailable. This prevents nodes from attempting to replicate data to offline backup locations, which degrades cluster performance because each failed attempt results in errors.
You can configure how long to wait before backup locations go offline. A good rule of thumb is one or two minutes. However, you should test different wait periods and evaluate their performance impacts to determine the correct value for your deployment.
For instance, when OpenShift terminates the site master pod, that backup location becomes unavailable for a short period of time until Infinispan Operator elects a new site master. If the minimum wait time is not long enough, the backup location goes offline during that window. You then need to bring the backup location back online and perform state transfer operations to ensure the data is in sync.
Likewise, if the minimum wait time is too long, node CPU usage increases from failed backup attempts, which can lead to performance degradation.
9.5. Adding Persistent Cache Stores
You can add Single File cache stores to Data Grid Service nodes to save data to the persistent volume.
You configure cache stores as part of your Infinispan cache definition with the `persistence` element as follows:

<persistence>
  <file-store/>
</persistence>

Infinispan then creates a Single File cache store, a `.dat` file, in the /opt/infinispan/server/data directory.
-
Add a cache store to your cache configurations as follows:
<infinispan>
  <cache-container>
    <distributed-cache name="customers" mode="SYNC">
      <encoding media-type="application/x-protostream"/>
      <persistence>
        <file-store/>
      </persistence>
    </distributed-cache>
  </cache-container>
</infinispan>
10. Establishing Remote Client Connections
Connect to Infinispan clusters from the Infinispan Console, Command Line Interface (CLI), and remote clients.
10.1. Client Connection Details
Before you can connect to Infinispan, you need to retrieve the following pieces of information:
-
Service hostname
-
Port
-
Authentication credentials
-
TLS certificate, if you use encryption
The service hostname depends on how you expose Infinispan on the network or if your clients are running on Kubernetes.
For clients running on Kubernetes, you can use the name of the internal service that Infinispan Operator creates.
For clients running outside Kubernetes, the service hostname is the location URL if you use a load balancer. For a node port service, the service hostname is the node host name. For a route, the service hostname is either a custom hostname or a system-defined hostname.
Client connections on Kubernetes and through load balancers use port 11222.
Node port services use a port in the range of 30000 to 60000.
Routes use either port 80 (unencrypted) or 443 (encrypted).
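How the cluster is exposed is controlled from the `Infinispan` CR. The following sketch shows a `spec.expose` block; treat the exact fields as an assumption to verify against your Operator version:

```yaml
spec:
  replicas: 2
  expose:
    type: LoadBalancer   # port 11222; alternatives: NodePort (30000-60000), Route (80/443)
```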
10.2. Creating Infinispan Caches
To create caches when running Infinispan on Kubernetes, you can:
-
Use `Cache` CRs.
-
Create multiple caches at a time with the Infinispan CLI if you do not use `Cache` CRs.
-
Access Infinispan Console and create caches in XML or JSON format as an alternative to `Cache` CRs or the Infinispan CLI.
-
Use Hot Rod clients to create caches either programmatically or through per-cache properties, but only if required.
10.3. Connecting with the Infinispan CLI
Use the command line interface (CLI) to connect to your Infinispan cluster and perform administrative operations.
The CLI is available as part of the server distribution, which you can run on your local host to establish remote connections to Infinispan clusters on OpenShift.
Alternatively, you can use the infinispan/cli image at https://github.com/infinispan/infinispan-images.
It is possible to open a remote shell to an Infinispan node and access the CLI.
However, using the CLI in this way consumes memory allocated to the container, which can lead to out of memory exceptions.
10.3.1. Creating Caches with Infinispan CLI
Add caches to your Infinispan cluster with the CLI.
-
Download the server distribution so you can run the CLI.
-
Retrieve the necessary client connection details.
-
Create a file with a cache configuration in XML or JSON format, for example:
cat > infinispan.xml<<EOF
<infinispan>
  <cache-container>
    <distributed-cache name="mycache">
      <encoding>
        <key media-type="application/x-protostream"/>
        <value media-type="application/x-protostream"/>
      </encoding>
    </distributed-cache>
  </cache-container>
</infinispan>
EOF
-
Create a CLI connection to your Infinispan cluster.
$ bin/cli.sh -c https://$SERVICE_HOSTNAME:$PORT --trustall
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
-
Enter your Infinispan credentials when prompted.
-
Add the cache with the `create cache` command and the `--file` option.

[//containers/default]> create cache --file=infinispan.xml mycache
-
Verify the cache exists with the `ls` command.

[//containers/default]> ls caches
mycache
-
Optionally retrieve the cache configuration with the `describe` command.

[//containers/default]> describe caches/mycache
10.3.2. Creating Caches in Batches
Add multiple caches with batch operations with the Infinispan CLI.
-
Download the server distribution so you can run the CLI.
-
Retrieve the necessary client connection details.
-
Create at least one file with a cache configuration in XML or JSON format.
-
Create a batch file, for example:
cat > caches.batch<<EOF
echo "connecting"
connect --username=developer --password=dIRs5cAAsHIeeRIL
echo "creating caches..."
create cache firstcache --file=infinispan-one.xml
create cache secondcache --file=infinispan-two.xml
create cache thirdcache --file=infinispan-three.xml
create cache fourthcache --file=infinispan-four.xml
echo "verifying caches"
ls caches
EOF
-
Create the caches with the CLI.
$ bin/cli.sh -c https://$SERVICE_HOSTNAME:$PORT --trustall -f /tmp/caches.batch
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
10.4. Accessing Infinispan Console
Access the console to create caches, perform administrative operations, and monitor your Infinispan clusters.
-
Expose Infinispan on the network so you can access the console through a browser.
For example, configure a load balancer service or create a route.
-
Access the console from any browser at $SERVICE_HOSTNAME:$PORT.
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
-
Enter your Infinispan credentials when prompted.
10.5. Hot Rod Clients
Hot Rod is a binary TCP protocol that Infinispan provides for high-performance data transfer capabilities with remote clients.
Client intelligence refers to mechanisms the Hot Rod protocol provides so that clients can locate and send requests to Infinispan nodes.
Hot Rod clients running on Kubernetes can access internal IP addresses for Infinispan nodes so you can use any client intelligence.
The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.
Hot Rod clients running outside Kubernetes must use BASIC intelligence.
10.5.1. Hot Rod Configuration API
You can programmatically configure Hot Rod client connections with the `ConfigurationBuilder` interface.
On Kubernetes
Hot Rod clients running on Kubernetes can use the following configuration:
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
import org.infinispan.client.hotrod.impl.ConfigurationProperties;
...
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
.host("$SERVICE_HOSTNAME")
.port(ConfigurationProperties.DEFAULT_HOTROD_PORT)
.security().authentication()
.username("username")
.password("password")
.realm("default")
.saslQop(SaslQop.AUTH)
.saslMechanism("SCRAM-SHA-512")
.ssl()
.sniHostName("$SERVICE_HOSTNAME")
.trustStorePath("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt");
Outside Kubernetes
Hot Rod clients running outside Kubernetes can use the following configuration:
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
...
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
.host("$SERVICE_HOSTNAME")
.port($PORT)
.security().authentication()
.username("username")
.password("password")
.realm("default")
.saslQop(SaslQop.AUTH)
.saslMechanism("SCRAM-SHA-512")
.ssl()
.sniHostName("$SERVICE_HOSTNAME")
.trustStorePath("/path/to/tls.crt");
builder.clientIntelligence(ClientIntelligence.BASIC);
10.5.2. Hot Rod Client Properties
You can configure Hot Rod client connections with the hotrod-client.properties file on the application classpath.
On Kubernetes
Hot Rod clients running on Kubernetes can use the following properties:
# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT
# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
# Path to the TLS certificate.
# Clients automatically generate trust stores from certificates.
infinispan.client.hotrod.trust_store_path=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
Outside Kubernetes
Hot Rod clients running outside Kubernetes can use the following properties:
# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT
# Client intelligence
infinispan.client.hotrod.client_intelligence=BASIC
# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
# Path to the TLS certificate.
# Clients automatically generate trust stores from certificates.
infinispan.client.hotrod.trust_store_path=tls.crt
10.5.3. Creating Caches with Hot Rod Clients
You can remotely create caches on Infinispan clusters running on Kubernetes with Hot Rod clients.
However, Infinispan recommends that you create caches using Infinispan Console, the CLI, or with `Cache` CRs instead of with Hot Rod clients.
Programmatically creating caches
The following example shows how to add cache configurations to the `ConfigurationBuilder` and then create them with the `RemoteCacheManager`:
import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
...
// Add cache definitions to the client configuration builder.
builder.remoteCache("my-cache")
    .templateName(DefaultTemplate.DIST_SYNC);
builder.remoteCache("another-cache")
    .configuration("<infinispan><cache-container><distributed-cache name=\"another-cache\"><encoding media-type=\"application/x-protostream\"/></distributed-cache></cache-container></infinispan>");
try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
  // Get a remote cache that does not exist.
  // Rather than return null, create the cache from a template.
  RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
  // Store a value.
  cache.put("hello", "world");
  // Retrieve the value and print it.
  System.out.printf("key = %s\n", cache.get("hello"));
}
This example shows how to create a cache named CacheWithXMLConfiguration, using XMLStringConfiguration to pass the cache configuration as XML:
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;
...
private void createCacheWithXMLConfiguration() {
    String cacheName = "CacheWithXMLConfiguration";
    String xml = String.format("<infinispan>" +
            "<cache-container>" +
            "<distributed-cache name=\"%s\" mode=\"SYNC\">" +
            "<encoding media-type=\"application/x-protostream\"/>" +
            "<locking isolation=\"READ_COMMITTED\"/>" +
            "<transaction mode=\"NON_XA\"/>" +
            "<expiration lifespan=\"60000\" interval=\"20000\"/>" +
            "</distributed-cache>" +
            "</cache-container>" +
            "</infinispan>", cacheName);
    manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
    System.out.println("Cache with configuration exists or is created.");
}
Using Hot Rod client properties
When you invoke cacheManager.getCache() for named caches that do not exist, Infinispan creates them from the Hot Rod client properties instead of returning null.
Add cache configuration to Hot Rod client properties as in the following example:
# Add cache configuration
infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC
infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>
infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml
10.6. Accessing the REST API
Infinispan provides a RESTful interface that you can interact with using HTTP clients.
-
Expose Infinispan on the network so you can access the REST API.
For example, configure a load balancer service or create a route.
-
Access the REST API with any HTTP client at $SERVICE_HOSTNAME:$PORT/rest/v2.
Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
10.7. Adding Caches to Cache Service Nodes
Cache Service nodes include a default cache configuration with recommended settings. This default cache lets you start using Infinispan without the need to create caches.
Because the default cache provides recommended settings, you should create caches only as copies of the default. If you want multiple custom caches, you should create Data Grid Service nodes instead of Cache Service nodes.
-
Access the Infinispan Console and provide a copy of the default configuration in XML or JSON format.
-
Use the Infinispan CLI to create a copy from the default cache as follows:
[//containers/default]> create cache --template=default mycache
10.7.1. Default Cache Configuration
The default cache for Cache Service nodes is as follows:
<infinispan>
<cache-container>
<distributed-cache name="default" (1)
mode="SYNC" (2)
owners="2"> (3)
<memory storage="OFF_HEAP" (4)
max-size="<maximum_size_in_bytes>" (5)
when-full="REMOVE" /> (6)
<partition-handling when-split="ALLOW_READ_WRITES" (7)
merge-policy="REMOVE_ALL"/> (8)
</distributed-cache>
</cache-container>
</infinispan>
1 | Names the cache instance as "default". |
2 | Uses synchronous distribution for storing data across the cluster. |
3 | Configures two replicas of each cache entry on the cluster. |
4 | Stores cache entries as bytes in native memory (off-heap). |
5 | Defines the maximum size for the data container in bytes. Infinispan Operator calculates the maximum size when it creates nodes. |
6 | Evicts cache entries to control the size of the data container. You can enable automatic scaling so that Infinispan Operator adds nodes when memory usage increases instead of removing entries. |
7 | Names a conflict resolution strategy that allows read and write operations for cache entries, even if segment owners are in different partitions. |
8 | Specifies a merge policy that removes entries from the cache when Infinispan detects conflicts. |
11. Monitoring Infinispan with Prometheus
Infinispan exposes a metrics endpoint that provides statistics and events to Prometheus.
11.1. Creating a Prometheus Service Monitor
Define a service monitor instance that configures Prometheus to monitor your Infinispan cluster.
-
Set up a Prometheus stack on your Kubernetes cluster.
-
Create an authentication secret that contains Infinispan credentials so that Prometheus can authenticate with your Infinispan cluster.
apiVersion: v1
stringData:
  username: developer (1)
  password: dIRs5cAAsHIeeRIL (2)
kind: Secret
metadata:
  name: basic-auth
type: Opaque

1 | Specifies an application user. developer is the default. |
2 | Specifies the corresponding password. |
-
Add the authentication secret to your Prometheus namespace.
$ kubectl apply -f basic-auth.yaml
-
Create a service monitor that configures Prometheus to monitor your Infinispan cluster.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus
  name: datagrid-monitoring (1)
  namespace: infinispan-monitoring (2)
spec:
  endpoints:
    - targetPort: 11222 (3)
      path: /metrics (4)
      honorLabels: true
      basicAuth:
        username:
          key: username
          name: basic-auth (5)
        password:
          key: password
          name: basic-auth
      interval: 30s
      scrapeTimeout: 10s
      scheme: https (6)
      tlsConfig:
        insecureSkipVerify: true
        serverName: example-infinispan (7)
  namespaceSelector:
    matchNames:
      - infinispan (8)
  selector:
    matchLabels:
      app: infinispan-service
      clusterName: example-infinispan (9)

1 | Names the service monitor instance. |
2 | Specifies the namespace of your Prometheus stack. |
3 | Sets the port of 11222 for the Infinispan metrics endpoint. |
4 | Sets the path where Infinispan exposes metrics. |
5 | Specifies the authentication secret with Infinispan credentials. |
6 | Specifies that Infinispan clusters use endpoint encryption. If you do not use endpoint encryption, remove `spec.endpoints.scheme`. |
7 | Specifies the Common Name (CN) of the TLS certificate for Infinispan encryption. If you use an OpenShift service certificate, the CN matches the `metadata.name` resource for your Infinispan cluster. If you do not use endpoint encryption, remove `spec.endpoints.tlsConfig`. |
8 | Specifies the namespace of your Infinispan cluster. |
9 | Specifies the name of your Infinispan cluster. |
-
Add the service monitor instance to your Prometheus namespace.
$ kubectl apply -f service-monitor.yaml
12. Guaranteeing Availability with Anti-Affinity
Kubernetes includes anti-affinity capabilities that protect workloads from single points of failure.
12.1. Anti-Affinity Strategies
Each Infinispan node in a cluster runs in a pod, and each pod runs on a Kubernetes node. Each Red Hat OpenShift node runs on a physical host system. Anti-affinity works by distributing Infinispan nodes across Kubernetes nodes, ensuring that your Infinispan clusters remain available even if hardware failures occur.
Infinispan Operator offers two anti-affinity strategies:
kubernetes.io/hostname
-
Infinispan replica pods are scheduled on different Kubernetes nodes.
topology.kubernetes.io/zone
-
Infinispan replica pods are scheduled across multiple zones.
Fault tolerance
Anti-affinity strategies guarantee cluster availability in different ways.
The equations in the following section apply only if the number of Kubernetes nodes or zones is greater than the number of Infinispan nodes.
kubernetes.io/hostname
Provides tolerance of x node failures for the following types of cache:
-
Replicated: x = spec.replicas - 1
-
Distributed: x = num_owners - 1
topology.kubernetes.io/zone
Provides tolerance of x zone failures when x zones exist for the following types of cache:
-
Replicated: x = spec.replicas - 1
-
Distributed: x = num_owners - 1
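The tolerance equations above can be sketched as a short calculation. This is an illustrative helper, not part of Infinispan Operator:

```python
def fault_tolerance(cache_type: str, replicas: int = 0, num_owners: int = 0) -> int:
    """Return how many node (or zone) failures a cache tolerates.

    Replicated caches tolerate spec.replicas - 1 failures;
    distributed caches tolerate num_owners - 1 failures.
    """
    if cache_type == "replicated":
        return replicas - 1
    if cache_type == "distributed":
        return num_owners - 1
    raise ValueError(f"unknown cache type: {cache_type!r}")

# A replicated cache with spec.replicas: 3 tolerates 2 failures.
print(fault_tolerance("replicated", replicas=3))     # 2
# A distributed cache with num_owners=2 tolerates 1 failure.
print(fault_tolerance("distributed", num_owners=2))  # 1
```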
12.2. Configuring Anti-Affinity
Specify where Kubernetes schedules pods for your Infinispan clusters to ensure availability.
-
Add the `spec.affinity` block to your `Infinispan` CR.
-
Configure anti-affinity strategies as necessary.
-
Apply your `Infinispan` CR.
12.3. Anti-Affinity Strategy Configurations
Configure anti-affinity strategies in your `Infinispan` CR to control where Kubernetes schedules Infinispan replica pods.
Schedule pods on different Kubernetes nodes
The following is the anti-affinity strategy that Infinispan Operator uses if you do not configure the `spec.affinity` field in your `Infinispan` CR:
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100 (1)
podAffinityTerm:
labelSelector:
matchLabels:
app: infinispan-pod
clusterName: <cluster_name>
infinispan_cr: <cluster_name>
topologyKey: "kubernetes.io/hostname" (2)
1 | Sets the hostname strategy as most preferred. |
2 | Schedules Infinispan replica pods on different Kubernetes nodes. |
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution: (1)
- labelSelector:
matchLabels:
app: infinispan-pod
clusterName: <cluster_name>
infinispan_cr: <cluster_name>
topologyKey: "kubernetes.io/hostname"
1 | Kubernetes does not schedule Infinispan pods if there are no different nodes available. |
To ensure that you can schedule Infinispan replica pods on different Kubernetes nodes, the number of Kubernetes nodes available must be greater than the value of spec.replicas.
Schedule pods across multiple Kubernetes zones
The following example prefers multiple zones when scheduling pods:
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100 (1)
podAffinityTerm:
labelSelector:
matchLabels:
app: infinispan-pod
clusterName: <cluster_name>
infinispan_cr: <cluster_name>
topologyKey: "topology.kubernetes.io/zone" (2)
- weight: 90 (3)
podAffinityTerm:
labelSelector:
matchLabels:
app: infinispan-pod
clusterName: <cluster_name>
infinispan_cr: <cluster_name>
topologyKey: "kubernetes.io/hostname" (4)
1 | Sets the zone strategy as most preferred. |
2 | Schedules Infinispan replica pods across multiple zones. |
3 | Sets the hostname strategy as next preferred. |
4 | Schedules Infinispan replica pods on different Kubernetes nodes if it is not possible to schedule across zones. |
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution: (1)
- labelSelector:
matchLabels:
app: infinispan-pod
clusterName: <cluster_name>
infinispan_cr: <cluster_name>
topologyKey: "topology.kubernetes.io/zone"
1 | Uses the zone strategy only when scheduling Infinispan replica pods. |
13. Monitoring Infinispan Logs
Set logging categories to different message levels to monitor, debug, and troubleshoot Infinispan clusters.
13.1. Configuring Infinispan Logging
-
Specify logging configuration with `spec.logging` in your `Infinispan` CR and then apply the changes.

spec:
  ...
  logging: (1)
    categories: (2)
      org.infinispan: debug (3)
      org.jgroups: debug

1 | Configures Infinispan logging. |
2 | Adds logging categories. |
3 | Names logging categories and levels. The root logging category is `org.infinispan` and is INFO by default. |
-
Retrieve logs from Infinispan nodes as required.
$ kubectl logs -f $POD_NAME
13.2. Log Levels
Log levels indicate the nature and severity of messages.
Log level | Description
---|---|
trace | Provides detailed information about the running state of applications. This is the most verbose log level. |
debug | Indicates the progress of individual requests or activities. |
info | Indicates overall progress of applications, including lifecycle events. |
warn | Indicates circumstances that can lead to errors or degrade performance. |
error | Indicates error conditions that might prevent operations or activities from succeeding but do not prevent applications from running. |
14. Reference
Find information about Infinispan services and clusters that you create with Infinispan Operator.
14.1. Image Resource
spec:
image: infinispan/server:latest (1)
1 | Lets you specify an Infinispan image to use. |
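For context, `spec.image` sits at the top level of the `Infinispan` CR spec. A minimal sketch, reusing the cluster name from earlier examples:

```yaml
apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
  image: infinispan/server:latest  # custom server image
```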
14.2. Network Services
-
Allow Infinispan nodes to discover each other and form clusters.
-
Provide access to Infinispan endpoints from clients in the same Kubernetes namespace.
Service | Port | Protocol | Description
---|---|---|---|
`<cluster_name>` | 11222 | TCP | Internal access to Infinispan endpoints |
`<cluster_name>-ping` | 8888 | TCP | Cluster discovery |
Provides access to Infinispan endpoints from clients outside Kubernetes or in different namespaces.
You must create the external service with Infinispan Operator. It is not available by default.
Service | Port | Protocol | Description
---|---|---|---|
`<cluster_name>-external` | 11222 | TCP | External access to Infinispan endpoints |
Allows Infinispan to back up data between clusters in different locations.
Service | Port | Protocol | Description
---|---|---|---|
`<cluster_name>-site` | 7900 | TCP | JGroups RELAY2 channel for cross-site communication |