The Infinispan Operator provides operational intelligence and reduces management complexity for deploying Infinispan on Kubernetes and Red Hat OpenShift.

Infinispan Operator 2.1 corresponds to Infinispan 12.0.

1. Installing Infinispan Operator

Install Infinispan Operator into a Kubernetes namespace to create and manage Infinispan clusters.

1.1. Installing Infinispan Operator on Red Hat OpenShift

Create subscriptions to Infinispan Operator on OpenShift so you can install different Infinispan versions and receive automatic updates.

Automatic updates apply to Infinispan Operator first and then to each Infinispan node. Infinispan Operator updates clusters one node at a time, gracefully shutting down each node and then bringing it back online with the updated version before going on to the next node.

Prerequisites
  • Access to OperatorHub running on OpenShift. Some OpenShift environments, such as OpenShift Container Platform, can require administrator credentials.

  • Have an OpenShift project for Infinispan Operator if you plan to install it into a specific namespace.

Procedure
  1. Log in to the OpenShift Web Console.

  2. Navigate to OperatorHub.

  3. Find and select Infinispan Operator.

  4. Select Install and continue to Create Operator Subscription.

  5. Specify options for your subscription.

    Installation Mode

    You can install Infinispan Operator into a Specific namespace or All namespaces.

    Update Channel

    Subscribe to updates for Infinispan Operator versions.

    Approval Strategies

    When new Infinispan versions become available, you can install updates manually or let Infinispan Operator install them automatically.

  6. Select Subscribe to install Infinispan Operator.

  7. Navigate to Installed Operators to verify the Infinispan Operator installation.

1.2. Installing Infinispan Operator from OperatorHub.io

Use the command line to install Infinispan Operator from OperatorHub.io.

Prerequisites
  • OKD 3.11 or later.

  • Kubernetes 1.11 or later.

  • Have administrator access on the Kubernetes cluster.

  • Have a kubectl or oc client.

Procedure
  1. Navigate to the Infinispan Operator entry on OperatorHub.io.

  2. Follow the instructions to install Infinispan Operator into your Kubernetes cluster.

1.3. Building and Installing Infinispan Operator Manually

Manually build and install Infinispan Operator from the GitHub repository.

Procedure
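  1. Clone the Infinispan Operator repository.

    $ git clone https://github.com/infinispan/infinispan-operator.git
    $ cd infinispan-operator
  2. Build and deploy the Operator by following the instructions in the repository README.

    Build targets and manifest locations vary between releases, so the README for your checkout is the authoritative reference. As a minimal sketch, assuming the deploy/ directory present in the repository, you can apply its manifests directly:

    $ kubectl apply -f deploy/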

2. Getting Started with Infinispan Operator

Infinispan Operator lets you create, configure, and manage Infinispan clusters.

Prerequisites
  • Install Infinispan Operator.

  • Have an oc or a kubectl client.

2.1. Infinispan Custom Resource (CR)

Infinispan Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Infinispan clusters as complex units on Kubernetes.

Infinispan Operator watches for Infinispan Custom Resources (CR) that you use to instantiate and configure Infinispan clusters and manage Kubernetes resources, such as StatefulSets and Services. In this way, the Infinispan CR is your primary interface to Infinispan on Kubernetes.

The minimal Infinispan CR is as follows:

apiVersion: infinispan.org/v1 (1)
kind: Infinispan (2)
metadata:
  name: example-infinispan (3)
spec:
  replicas: 2 (4)
1 Declares the Infinispan API version.
2 Declares the Infinispan CR.
3 Names the Infinispan cluster.
4 Specifies the number of nodes in the Infinispan cluster.

2.2. Creating Infinispan Clusters

Use Infinispan Operator to create clusters of two or more Infinispan nodes.

Procedure
  1. Specify the number of Infinispan nodes in the cluster with spec.replicas in your Infinispan CR.

    For example, create a cr_minimal.yaml file as follows:

    $ cat > cr_minimal.yaml<<EOF
    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
    EOF
  2. Apply your Infinispan CR.

    $ kubectl apply -f cr_minimal.yaml
  3. Watch Infinispan Operator create the Infinispan nodes.

    $ kubectl get pods -w
    
    NAME                        READY  STATUS              RESTARTS   AGE
    example-infinispan-0        0/1    ContainerCreating   0          4s
    example-infinispan-1        0/1    ContainerCreating   0          4s
    infinispan-operator-0       1/1    Running             0          3m
    example-infinispan-0        1/1    Running             0          8s
    example-infinispan-1        1/1    Running             0          8s
Next Steps

Try changing the value of replicas: and watching Infinispan Operator scale the cluster up or down.
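
For example, you can scale the cluster to three nodes with a one-line merge patch and then watch the new pod start:

$ kubectl patch infinispan example-infinispan --type=merge -p '{"spec":{"replicas":3}}'
$ kubectl get pods -w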

2.3. Verifying Infinispan Clusters

Review log messages to ensure that Infinispan nodes receive clustered views.

Procedure
  • Do either of the following:

    • Retrieve the cluster view from logs.

      $ kubectl logs example-infinispan-0 | grep ISPN000094
      
      INFO  [org.infinispan.CLUSTER] (MSC service thread 1-2) \
      ISPN000094: Received new cluster view for channel infinispan: \
      [example-infinispan-0|0] (1) [example-infinispan-0]
      
      INFO  [org.infinispan.CLUSTER] (jgroups-3,example-infinispan-0) \
      ISPN000094: Received new cluster view for channel infinispan: \
      [example-infinispan-0|1] (2) [example-infinispan-0, example-infinispan-1]
    • Retrieve the Infinispan CR for Infinispan Operator.

      $ kubectl get infinispan -o yaml

      The response indicates that Infinispan pods have received clustered views:

      conditions:
          - message: 'View: [example-infinispan-0, example-infinispan-1]'
            status: "True"
            type: wellFormed

Use kubectl wait with the wellFormed condition for automated scripts.

$ kubectl wait --for condition=wellFormed --timeout=240s infinispan/example-infinispan
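
kubectl wait blocks until the condition is met or the timeout elapses, and returns a non-zero exit code on failure, so you can chain deployment steps. For example:

$ kubectl apply -f cr_minimal.yaml && \
  kubectl wait --for condition=wellFormed --timeout=240s infinispan/example-infinispan && \
  echo "Infinispan cluster is ready"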

3. Setting Up Infinispan Services

Use Infinispan Operator to create clusters of either Cache Service or Data Grid Service nodes.

3.1. Service Types

Services are stateful applications, based on the Infinispan server image, that provide flexible and robust in-memory data storage.

Cache Service

Use Cache Service if you want a volatile, low-latency data store with minimal configuration. Cache Service nodes:

  • Automatically scale to meet capacity when data storage demands go up or down.

  • Synchronously distribute data to ensure consistency.

  • Replicate each entry in the cache across the cluster.

  • Store cache entries off-heap and use eviction for JVM efficiency.

  • Ensure data consistency with a default partition handling configuration.

Because Cache Service nodes are volatile, you lose all data when you apply changes to the cluster with the Infinispan CR or update the Infinispan version.

Data Grid Service

Use Data Grid Service if you want to:

  • Back up data across global clusters with cross-site replication.

  • Create caches with any valid configuration.

  • Add file-based cache stores to save data in the persistent volume.

  • Use Infinispan search and other advanced capabilities.

3.2. Creating Cache Service Nodes

By default, Infinispan Operator creates Infinispan clusters with Cache Service nodes.

Procedure
  1. Create an Infinispan CR.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
      service:
        type: Cache (1)
    1 Creates Cache Service nodes. This is the default for the Infinispan CR.
  2. Apply your Infinispan CR to create the cluster.

3.2.1. Configuring Automatic Scaling

If you create clusters with Cache Service nodes, Infinispan Operator can automatically scale nodes up or down based on memory usage for the default cache.

Infinispan Operator monitors default caches on Cache Service nodes. As you add data to the cache, memory usage increases. When it detects that the cluster needs additional capacity, Infinispan Operator creates new nodes rather than evicting entries. Likewise, if it detects that memory usage is below a certain threshold, Infinispan Operator shuts down nodes.

Automatic scaling works with the default cache only. If you plan to add other caches to your cluster, you should not include the autoscale field in your Infinispan CR. In this case, you should use eviction to control the size of the data container on each node.

Procedure
  1. Add the spec.autoscale resource to your Infinispan CR to enable automatic scaling.

  2. Configure memory usage thresholds and number of nodes for your cluster with the autoscale field.

    spec:
      ...
      service:
        type: Cache
      autoscale:
        disabled: false (1)
        maxMemUsagePercent: 70 (2)
        maxReplicas: 5 (3)
        minMemUsagePercent: 30 (4)
        minReplicas: 2 (5)
    1 Turns automatic scaling on or off. Set a value of true to disable automatic scaling. The default value is false.
    2 Configures the maximum threshold, as a percentage, for memory usage on each node. When Infinispan Operator detects that any node in the cluster reaches the threshold, it creates a new node if possible. If Infinispan Operator cannot create a new node then it performs eviction when memory usage reaches 100 percent.
    3 Defines the maximum number of nodes for the cluster.
    4 Configures the minimum threshold, as a percentage, for memory usage across the cluster. When Infinispan Operator detects that memory usage falls below the minimum, it shuts down nodes.
    5 Defines the minimum number of nodes for the cluster.
  3. Apply the changes.

3.2.2. Configuring the Number of Owners

The number of owners controls how many copies of each cache entry are replicated across your Infinispan cluster. The default for Cache Service nodes is two, which duplicates each entry to prevent data loss.

Procedure
  1. Specify the number of owners with the spec.service.replicationFactor resource in your Infinispan CR as follows:

    spec:
      ...
      service:
        type: Cache
        replicationFactor: 3 (1)
    1 Configures three replicas for each cache entry.
  2. Apply the changes.

3.2.3. Cache Service Resources

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  # Names the cluster.
  name: example-infinispan
spec:
  # Specifies the number of nodes in the cluster.
  replicas: 4
  service:
    # Configures the service type as Cache.
    type: Cache
    # Sets the number of replicas for each entry across the cluster.
    replicationFactor: 2
  # Enables and configures automatic scaling.
  autoscale:
    maxMemUsagePercent: 70
    maxReplicas: 5
    minMemUsagePercent: 30
    minReplicas: 2
  # Configures authentication and encryption.
  security:
    # Defines a secret with custom credentials.
    endpointSecretName: endpoint-identities
    # Adds a custom TLS certificate to encrypt client connections.
    endpointEncryption:
        type: Secret
        certSecretName: tls-secret
  # Sets container resources.
  container:
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"
    cpu: "2000m"
    memory: 1Gi
  # Configures logging levels.
  logging:
    categories:
      org.infinispan: trace
      org.jgroups: trace
  # Configures how the cluster is exposed on the network.
  expose:
    type: LoadBalancer
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: example-infinispan
              infinispan_cr: example-infinispan
          topologyKey: "kubernetes.io/hostname"

3.3. Creating Data Grid Service Nodes

To use custom cache definitions along with Infinispan capabilities such as cross-site replication, create clusters of Data Grid Service nodes.

Procedure
  1. Specify DataGrid as the value for spec.service.type in your Infinispan CR.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
      service:
        type: DataGrid

    You cannot change the spec.service.type field after you create nodes. To change the service type, you must delete the existing nodes and create new ones.

  2. Configure nodes with any other Data Grid Service resources.

  3. Apply your Infinispan CR to create the cluster.

3.3.1. Data Grid Service Resources

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  # Names the cluster.
  name: example-infinispan
spec:
  # Specifies the number of nodes in the cluster.
  replicas: 6
  service:
    # Configures the service type as Data Grid.
    type: DataGrid
    # Configures storage resources.
    container:
      storage: 2Gi
      ephemeralStorage: false
      storageClassName: my-storage-class
    # Configures cross-site replication.
    sites:
      local:
        name: azure
        expose:
          type: LoadBalancer
      # Configures backup locations.
      locations:
      - name: azure
        url: openshift://api.azure.host:6443
        secretName: azure-token
      - name: aws
        # Specifies the cluster name at the backup location.
        clusterName: example-infinispan
        # Specifies the namespace for the cluster at the backup location.
        namespace: ispn-namespace
        url: openshift://api.aws.host:6443
        secretName: aws-token
  # Configures authentication and encryption.
  security:
    # Defines a secret with custom credentials.
    endpointSecretName: endpoint-identities
    # Adds a custom TLS certificate to encrypt client connections.
    endpointEncryption:
        type: Secret
        certSecretName: tls-secret
  # Sets container resources.
  container:
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"
    cpu: "1000m"
    memory: 1Gi
  # Configures logging levels.
  logging:
    categories:
      org.infinispan: debug
      org.jgroups: debug
      org.jgroups.protocols.TCP: error
      org.jgroups.protocols.relay.RELAY2: error
  # Configures how the cluster is exposed on the network.
  expose:
    type: LoadBalancer
  # Configures affinity and anti-affinity strategies.
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: example-infinispan
              infinispan_cr: example-infinispan
          topologyKey: "kubernetes.io/hostname"

3.4. Adding Labels to Infinispan Resources

Attach key/value labels to pods and services that Infinispan Operator creates and manages. These labels help you identify relationships between objects to better organize and monitor Infinispan resources.

Procedure
  1. Open your Infinispan CR for editing.

  2. Declare the labels that you want Infinispan Operator to attach to resources with metadata.annotations.

  3. Add values for your labels with metadata.labels.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      annotations:
        # Add labels that you want to attach to services.
        infinispan.org/targetLabels: svc-label1, svc-label2
        # Add labels that you want to attach to pods.
        infinispan.org/podTargetLabels: pod-label1, pod-label2
      labels:
        # Add values for your labels.
        svc-label1: svc-value1
        svc-label2: svc-value2
        pod-label1: pod-value1
        pod-label2: pod-value2
        # The operator does not attach these labels to resources.
        my-label: my-value
        environment: development
  4. Apply your Infinispan CR.

3.4.1. Global Labels for Infinispan Operator

Global labels are automatically propagated to all Infinispan pods and services.

You can add and modify global labels for Infinispan Operator with the env field in the Infinispan Operator deployment YAML.

# Defines global labels for services.
- name: INFINISPAN_OPERATOR_TARGET_LABELS
  value: |
    {"svc-label1":"svc-value1",
     "svc-label2":"svc-value2"}
# Defines global labels for pods.
- name: INFINISPAN_OPERATOR_POD_TARGET_LABELS
  value: |
    {"pod-label1":"pod-value1",
     "pod-label2":"pod-value2"}

4. Adjusting Container Specifications

You can allocate CPU and memory resources, specify JVM options, and configure storage for Infinispan nodes.

4.1. JVM, CPU, and Memory Resources

spec:
  ...
  container:
    extraJvmOpts: "-XX:NativeMemoryTracking=summary" (1)
    cpu: "1000m" (2)
    memory: 1Gi (3)
1 Specifies JVM options.
2 Allocates host CPU resources to nodes, measured in CPU units.
3 Allocates host memory resources to nodes, measured in bytes.

When Infinispan Operator creates Infinispan clusters, it uses spec.container.cpu and spec.container.memory to:

  • Ensure that Kubernetes has sufficient capacity to run the Infinispan node. By default, Infinispan Operator requests 512Mi of memory and 0.5 cpu from the Kubernetes scheduler.

  • Constrain node resource usage. Infinispan Operator sets the values of cpu and memory as resource limits.

Garbage collection logging

By default, Infinispan Operator does not log garbage collection (GC) messages. You can optionally add the following JVM options to direct GC messages to stdout:

extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags"

4.2. Storage Resources

By default, Infinispan Operator allocates 1Gi of storage for both Cache Service and Data Grid Service nodes. You can configure storage resources for Data Grid Service nodes but not Cache Service nodes.

spec:
  ...
  service:
    type: DataGrid
    container:
      storage: 2Gi (1)
      ephemeralStorage: false (2)
      storageClassName: my-storage-class (3)
1 Configures the storage size for Data Grid Service nodes.
2 Defines whether storage is ephemeral or permanent. Set the value to true to use ephemeral storage, which means all data in storage is deleted when clusters shut down or restart. The default value is false, which means storage is permanent.
3 Specifies the name of a StorageClass object to use for the persistent volume claim. If you include this field, you must specify an existing storage class as the value. If you do not include this field, the persistent volume claim uses the storage class that has the storageclass.kubernetes.io/is-default-class annotation set to true.
Persistent Volume Claims

Infinispan Operator mounts persistent volumes at:
/opt/infinispan/server/data

Persistent volume claims use the ReadWriteOnce (RWO) access mode.
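
You can verify the persistent volume claims that Infinispan Operator creates for your cluster, for example:

$ kubectl get pvc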

5. Stopping and Starting Infinispan Clusters

Stop and start Infinispan clusters with Infinispan Operator.

Cache definitions

Both Cache Service and Data Grid Service store permanent cache definitions in persistent volumes so they are still available after cluster restarts.

Data

Data Grid Service nodes can write all cache entries to persistent storage during cluster shutdown if you add a file-based cache store.

5.1. Shutting Down Infinispan Clusters

Shutting down Cache Service nodes removes all data in the cache. For Data Grid Service nodes, you should configure the storage size to ensure that the persistent volume can hold all your data.

If the available container storage is less than the amount of memory available to Data Grid Service nodes, Infinispan writes the following message to logs and data loss occurs during shutdown:

WARNING: persistent volume size is less than memory size. Graceful shutdown may not work.
Procedure
  • Set the value of replicas to 0 and apply the changes.

spec:
  replicas: 0
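
For example, you can apply the shutdown directly with a merge patch instead of editing a file:

$ kubectl patch infinispan example-infinispan --type=merge -p '{"spec":{"replicas":0}}'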

5.2. Restarting Infinispan Clusters

Complete the following procedure to restart Infinispan clusters after shutdown.

Prerequisites

For Data Grid Service nodes, you must restart clusters with the same number of nodes that they had before shutdown. For example, if you shut down a cluster of 6 nodes, you must specify 6 as the value for spec.replicas when you restart it.

This allows Infinispan to restore the distribution of data across the cluster. When all nodes in the cluster are running, you can then add or remove nodes.

You can find the correct number of nodes for Infinispan clusters as follows:

$ kubectl get infinispan example-infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'
Procedure
  • Set the value of spec.replicas to the appropriate number of nodes for your cluster, for example:

    spec:
      replicas: 6
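
As a sketch for automated restarts, you can read the recorded node count from the CR status and patch the cluster in one step, assuming the example-infinispan cluster name:

$ REPLICAS=$(kubectl get infinispan example-infinispan -o=jsonpath='{.status.replicasWantedAtRestart}')
$ kubectl patch infinispan example-infinispan --type=merge -p "{\"spec\":{\"replicas\":$REPLICAS}}"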

6. Configuring Network Access to Infinispan

Expose Infinispan clusters so you can access Infinispan Console, the Infinispan command line interface (CLI), REST API, and Hot Rod endpoint.

6.1. Getting the Service for Internal Connections

By default, Infinispan Operator creates a service that provides access to Infinispan clusters from clients running on Kubernetes.

This internal service has the same name as your Infinispan cluster, for example:

metadata:
  name: example-infinispan
Procedure
  • Check that the internal service is available as follows:

    $ kubectl get services
    
    NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
    example-infinispan ClusterIP   192.0.2.0        <none>        11222/TCP
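
Clients running on the same Kubernetes cluster connect to this service on port 11222. As a hypothetical smoke test, you can query the REST API from a temporary pod; the curlimages/curl image, the <namespace> placeholder, and the developer password from the authentication secret are assumptions to substitute for your environment:

$ kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
    curl --digest -u developer:<password> \
    http://example-infinispan.<namespace>.svc.cluster.local:11222/rest/v2/caches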

6.2. Exposing Infinispan Through Load Balancers

Use a load balancer service to make Infinispan clusters available to clients running outside Kubernetes.

To access Infinispan with unencrypted Hot Rod client connections, you must use a load balancer service.

Procedure
  1. Include spec.expose in your Infinispan CR.

  2. Specify LoadBalancer as the service type with spec.expose.type.

    spec:
      ...
      expose:
        type: LoadBalancer (1)
        nodePort: 30000 (2)
    1 Exposes Infinispan on the network through a load balancer service on port 11222.
    2 Optionally defines a node port to which the load balancer service forwards traffic.
  3. Apply the changes.

  4. Verify that the -external service is available.

    $ kubectl get services | grep external
    
    NAME                         TYPE            CLUSTER-IP    EXTERNAL-IP   PORT(S)
    example-infinispan-external  LoadBalancer    192.0.2.24    hostname.com  11222/TCP
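
The EXTERNAL-IP column shows where external clients connect. To retrieve the value in scripts, you can query the service status; depending on your provider, the ingress entry contains an ip or a hostname:

$ kubectl get service example-infinispan-external \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'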

6.3. Exposing Infinispan Through Node Ports

Use a node port service to expose Infinispan clusters on the network.

Procedure
  1. Include spec.expose in your Infinispan CR.

  2. Specify NodePort as the service type with spec.expose.type.

    spec:
      ...
      expose:
        type: NodePort (1)
        nodePort: 30000 (2)
    1 Exposes Infinispan on the network through a node port service.
    2 Defines the port where Infinispan is exposed. If you do not define a port, the platform selects one.
  3. Apply the changes.

  4. Verify that the -external service is available.

    $ kubectl get services | grep external
    
    NAME                         TYPE            CLUSTER-IP       EXTERNAL-IP   PORT(S)
    example-infinispan-external  NodePort        192.0.2.24       <none>        11222:30000/TCP
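
Clients connect to the node port on any Kubernetes node. A sketch for retrieving the port and a node address in scripts; picking the first node and its InternalIP is an illustrative choice:

$ kubectl get service example-infinispan-external -o jsonpath='{.spec.ports[0].nodePort}'
$ kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'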

6.4. Exposing Infinispan Through Routes

Use a Kubernetes Ingress or an OpenShift Route with passthrough encryption to make Infinispan clusters available on the network.

Procedure
  1. Include spec.expose in your Infinispan CR.

  2. Specify Route as the service type with spec.expose.type.

  3. Optionally add a hostname with spec.expose.host.

    spec:
      ...
      expose:
        type: Route (1)
        host: www.example.org (2)
    1 Exposes Infinispan on the network through a Kubernetes Ingress or OpenShift Route.
    2 Optionally specifies the hostname where Infinispan is exposed.
  4. Apply the changes.

  5. Verify that the route is available.

    $ kubectl get ingress
    
    NAME                 CLASS    HOSTS   ADDRESS   PORTS   AGE
    example-infinispan   <none>   *                 443     73s
Route ports

When you create a route, it exposes a port on the network that accepts client connections and redirects traffic to Infinispan services that listen on port 11222.

The port where the route is available depends on whether you use encryption or not.

Port  Description
80    Encryption is disabled.
443   Encryption is enabled.

7. Securing Infinispan Connections

Secure client connections with authentication and encryption to prevent network intrusion and protect your data.

7.1. Configuring Authentication

Application users need credentials to access Infinispan clusters. You can use default, generated credentials or add your own.

7.1.1. Default Credentials

Infinispan Operator generates base64-encoded default credentials and stores them in authentication secrets, as follows:

Username   Secret name                                    Description
developer  example-infinispan-generated-secret            Default application user.
operator   example-infinispan-generated-operator-secret   User that interacts with Infinispan resources.

7.1.2. Retrieving Credentials

Get credentials from authentication secrets to access Infinispan clusters.

Procedure
  • Retrieve credentials from authentication secrets.

    $ kubectl get secret example-infinispan-generated-secret

    Base64-decode credentials.

    $ kubectl get secret example-infinispan-generated-secret \
    -o jsonpath="{.data.identities\.yaml}" | base64 --decode
    
    credentials:
    - username: developer
      password: dIRs5cAAsHIeeRIL

7.1.3. Adding Custom Credentials

Configure access to Infinispan cluster endpoints with custom credentials.

Procedure
  1. Create an identities.yaml file with the credentials that you want to add.

    credentials:
    - username: testuser
      password: testpassword
    - username: operator
      password: supersecretoperatorpassword

    identities.yaml must include the operator user.

  2. Create an authentication secret from identities.yaml.

    $ kubectl create secret generic --from-file=identities.yaml connect-secret
  3. Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.

    spec:
      ...
      security:
        endpointSecretName: connect-secret (1)
    1 Specifies the name of the authentication secret that contains your credentials.

Modifying spec.security.endpointSecretName triggers a cluster restart. You can watch the Infinispan cluster as Infinispan Operator applies changes:

$ kubectl get pods -w

7.1.4. Disabling Authentication

Allow users to access Infinispan clusters and data without providing credentials.

Do not disable authentication if endpoints are accessible from outside the Kubernetes cluster via spec.expose.type.

Procedure
  • Set false as the value for the spec.security.endpointAuthentication field in your Infinispan CR and then apply the changes.

    spec:
      ...
      security:
        endpointAuthentication: false (1)
    1 Disables user authentication.

7.2. Configuring Encryption

Encrypt connections between clients and Infinispan nodes with Red Hat OpenShift service certificates or custom TLS certificates.

7.2.1. Encryption with Red Hat OpenShift Service Certificates

Infinispan Operator automatically generates TLS certificates that are signed by the Red Hat OpenShift service CA. Infinispan Operator then stores the certificates and keys in a secret so you can retrieve them and use them with remote clients.

If the Red Hat OpenShift service CA is available, Infinispan Operator adds the following spec.security.endpointEncryption configuration to the Infinispan CR:

spec:
  ...
  security:
    endpointEncryption:
      type: Service
      certServiceName: service.beta.openshift.io (1)
      certSecretName: example-infinispan-cert-secret (2)
1 Specifies the Red Hat OpenShift Service.
2 Names the secret that contains a service certificate, tls.crt, and key, tls.key, in PEM format. If you do not specify a name, Infinispan Operator uses <cluster_name>-cert-secret.

Service certificates use the internal DNS name of the Infinispan cluster as the common name (CN), for example:

Subject: CN = example-infinispan.mynamespace.svc

For this reason, service certificates can be fully trusted only inside OpenShift. If you want to encrypt connections with clients running outside OpenShift, you should use custom TLS certificates.

Service certificates are valid for one year and are automatically replaced before they expire.

7.2.2. Retrieving TLS Certificates

Get TLS certificates from encryption secrets to create client trust stores.

Procedure
  • Retrieve tls.crt from encryption secrets as follows:

$ kubectl get secret example-infinispan-cert-secret \
-o jsonpath='{.data.tls\.crt}' | base64 --decode > tls.crt
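
To build a client trust store from the certificate, one option is Java keytool; the alias, trust store file name, and password are arbitrary illustration values:

$ keytool -importcert -noprompt -file tls.crt \
    -alias infinispan -keystore truststore.jks -storepass secret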

7.2.3. Disabling Encryption

You can disable encryption so clients do not need TLS certificates to establish connections with Infinispan.

Do not disable encryption if endpoints are accessible from outside the Kubernetes cluster via spec.expose.type.

Procedure
  • Set None as the value for the spec.security.endpointEncryption.type field in your Infinispan CR and then apply the changes.

    spec:
      ...
      security:
        endpointEncryption:
          type: None (1)
    1 Disables encryption for Infinispan endpoints.

7.2.4. Using Custom TLS Certificates

Use custom PKCS12 keystore or TLS certificate/key pairs to encrypt connections between clients and Infinispan clusters.

Prerequisites
  • Create a secret that contains a TLS certificate/key pair or a PKCS12 keystore. See the Certificate Secrets and Keystore Secrets examples for the expected formats.
Procedure
  1. Add the encryption secret to your OpenShift namespace, for example:

    $ kubectl apply -f tls_secret.yaml
  2. Specify the encryption secret with spec.security.endpointEncryption in your Infinispan CR and then apply the changes.

    spec:
      ...
      security:
        endpointEncryption: (1)
          type: Secret (2)
          certSecretName: tls-secret (3)
    1 Encrypts traffic to and from Infinispan endpoints.
    2 Configures Infinispan to use secrets that contain encryption certificates.
    3 Names the encryption secret.
Certificate Secrets
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
data:
    tls.key:  "LS0tLS1CRUdJTiBQUk ..." (1)
    tls.crt: "LS0tLS1CRUdJTiBDRVl ..." (2)
1 Adds a base64-encoded TLS key.
2 Adds a base64-encoded TLS certificate.
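
Rather than base64-encoding values by hand, you can create an equivalent secret directly from PEM files; server.key and server.crt are assumed file names:

$ kubectl create secret generic tls-secret \
    --from-file=tls.key=server.key \
    --from-file=tls.crt=server.crt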
Keystore Secrets
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
stringData:
    alias: server (1)
    password: password (2)
data:
    keystore.p12:  "MIIKDgIBAzCCCdQGCSqGSIb3DQEHA..." (3)
1 Specifies an alias for the keystore.
2 Specifies a password for the keystore.
3 Adds a base64-encoded keystore.

8. Setting Up Cross-Site Replication

Cross-site replication allows you to back up data from one Infinispan cluster to another.

To set up cross-site replication, you either configure Infinispan Operator to automatically discover and manage connections between Infinispan clusters or you manually specify the network locations of the backup sites.

You can use both automatic and manual connections for Infinispan clusters in the same Infinispan CR. However, you must ensure that Infinispan clusters establish connections in the same way at each site.

8.1. Automatically Connecting Infinispan Clusters

Configure Infinispan Operator to discover and manage connections for Infinispan clusters performing cross-site replication.

To automatically connect Infinispan clusters, Infinispan Operator in each OpenShift cluster must have network access to the Kubernetes API.

8.1.1. Cross-Site Replication with Infinispan Operator

The following illustration provides an example in which Infinispan Operator manages an Infinispan cluster at a data center in New York City, NYC. At another data center in London, LON, Infinispan Operator also manages an Infinispan cluster.

(Figure: cross-site replication between the Infinispan clusters at NYC and LON)

Infinispan Operator uses the Kubernetes API to establish a secure connection between the OpenShift Container Platform clusters in NYC and LON. Infinispan Operator then creates a cross-site replication service so Infinispan clusters can back up data across locations.

When you configure automatic connections, Infinispan clusters do not start running until Infinispan Operator can establish connections with all backup locations in the configuration.

Each Infinispan cluster has one site master node that coordinates all backup requests. Infinispan Operator identifies the site master node so that all traffic through the cross-site replication service goes to the site master.

If the current site master node goes offline then a new node becomes site master. Infinispan Operator automatically finds the new site master node and updates the cross-site replication service to forward backup requests to it.

8.1.2. Kubernetes clusters

Apply cluster roles and then create site access secrets if you run Infinispan Operator on vanilla Kubernetes or minikube.

Applying Cluster Roles for Cross-Site Replication

During OLM installation, Infinispan Operator sets up cluster roles required for cross-site replication. If you install Infinispan Operator manually, you must complete this procedure to set up those cluster roles.

Procedure
  • Install clusterrole.yaml and clusterrole_binding.yaml as follows:

$ kubectl apply -f deploy/clusterrole.yaml
$ kubectl apply -f deploy/clusterrole_binding.yaml
Creating Kubernetes Site Access Secrets

If you run Infinispan Operator in any Kubernetes deployment (Minikube, Kind, etc.), you should create secrets that contain the files that allow Kubernetes clusters to authenticate with each other.

Do one of the following:

  • Retrieve service account tokens from each site and then add them to secrets on each backup location, for example:

    $ kubectl create serviceaccount site-a -n ns-site-a
    $ kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=ns-site-a:site-a
    $ TOKENNAME=$(kubectl get serviceaccount/site-a -o jsonpath='{.secrets[0].name}' -n ns-site-a)
    $ TOKEN=$(kubectl get secret $TOKENNAME -o jsonpath='{.data.token}' -n ns-site-a | base64 --decode)
    $ kubectl create secret generic site-a-secret -n ns-site-a --from-literal=token=$TOKEN
  • Create secrets on each site that contain ca.crt, client.crt, and client.key from your Kubernetes installation.

    For example, for Minikube do the following on LON:

    $ kubectl create secret generic site-a-secret \
        --from-file=certificate-authority=/opt/minikube/.minikube/ca.crt \
        --from-file=client-certificate=/opt/minikube/.minikube/client.crt \
        --from-file=client-key=/opt/minikube/.minikube/client.key

8.1.3. OpenShift clusters

Create and exchange service account tokens if you run Infinispan Operator on OpenShift clusters.

Creating Service Account Tokens

Generate service account tokens on each OpenShift cluster that acts as a backup location. Clusters use these tokens to authenticate with each other so Infinispan Operator can create a cross-site replication service.

Procedure
  1. Log in to an OpenShift cluster.

  2. Create a service account.

    For example, create a service account at LON:

    $ kubectl create sa lon
    serviceaccount/lon created
  3. Add the view role to the service account with the following command:

    $ oc policy add-role-to-user view system:serviceaccount:<namespace>:lon
  4. Repeat the preceding steps on your other OpenShift clusters.

Exchanging Service Account Tokens

After you create service account tokens on your OpenShift clusters, you add them to secrets on each backup location. For example, at LON you add the service account token for NYC. At NYC you add the service account token for LON.

Prerequisites
  • Get tokens from each service account.

    Use the following command or get the token from the OpenShift Web Console:

    $ oc sa get-token lon
    
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
Procedure
  1. Log in to an OpenShift cluster.

  2. Add the service account token for a backup location with the following command:

    $ oc create secret generic <token-name> --from-literal=token=<token>

    For example, log in to the OpenShift cluster at NYC and create a lon-token secret as follows:

    $ oc create secret generic lon-token --from-literal=token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
  3. Repeat the preceding steps on your other OpenShift clusters.

8.1.4. Configuring Infinispan Operator to Handle Cross-Site Connections

Configure Infinispan Operator to establish cross-site views with Infinispan clusters.

Prerequisites
  • Create secrets that contain service account tokens for each backup location.

Procedure
  1. Create an Infinispan CR for each Infinispan cluster.

  2. Specify the name of the local site with spec.service.sites.local.name.

  3. Provide the name, URL, and secret for each Infinispan cluster that acts as a backup location with spec.service.sites.locations.

  4. If Infinispan cluster names or namespaces at the remote site do not match the local site, specify those values with the clusterName and namespace fields.

    The following are example Infinispan CR definitions for LON and NYC:

    • LON

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 3
        service:
          type: DataGrid
          sites:
            local:
              name: LON
              expose:
                type: LoadBalancer
            locations:
              - name: LON
                url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443
                secretName: lon-token
              - name: NYC
                clusterName: nyc-cluster
                namespace: nyc-cluster-namespace
                url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443
                secretName: nyc-token
    • NYC

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: nyc-cluster
      spec:
        replicas: 2
        service:
          type: DataGrid
          sites:
            local:
              name: NYC
              expose:
                type: LoadBalancer
            locations:
              - name: NYC
                url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443
                secretName: nyc-token
              - name: LON
                clusterName: example-infinispan
                namespace: ispn-namespace
                url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443
                secretName: lon-token
  5. Adjust logging levels for cross-site replication as follows:

    spec:
      ...
      logging:
        categories:
          org.jgroups.protocols.TCP: error
          org.jgroups.protocols.relay.RELAY2: error

    The preceding configuration decreases logging for JGroups TCP and RELAY2 protocols to reduce excessive messages about cluster backup operations, which can result in a large number of log files that use container storage.

  6. Configure nodes with any other Data Grid Service resources.

  7. Apply the Infinispan CRs.

  8. Verify that Infinispan clusters form a cross-site view.

    1. Retrieve the Infinispan CR.

      $ kubectl get infinispan -o yaml
    2. Check for the type: CrossSiteViewFormed condition.

Next steps

If your clusters have formed a cross-site view, you can start adding backup locations to caches.

8.1.5. Resources for Automatic Cross-Site Connections

spec:
  ...
  service:
    type: DataGrid (1)
    sites:
      local:
        name: LON (2)
        expose:
          type: LoadBalancer (3)
      locations: (4)
      - name: LON (5)
        url: openshift://api.site-a.devcluster.openshift.com:6443 (6)
        secretName: lon-token (7)
      - name: NYC
        clusterName: nyc-cluster-name (8)
        namespace: nyc-cluster-namespace (9)
        url: openshift://api.site-b.devcluster.openshift.com:6443
        secretName: nyc-token
  logging:
    categories:
      org.jgroups.protocols.TCP: error (10)
      org.jgroups.protocols.relay.RELAY2: error (11)
1 Specifies Data Grid Service. Infinispan supports cross-site replication with Data Grid Service clusters only.
2 Names the local site for an Infinispan cluster.
3 Defines the externally exposed service.
  • Use NodePort for local clusters on the same network.

  • Use LoadBalancer for independent OpenShift clusters.

4 Provides connection information for all backup locations.
5 Specifies a backup location that matches .spec.service.sites.local.name.
6 Specifies a backup location.
  • Use kubernetes:// if the backup location is a Kubernetes instance.

  • Use openshift:// if the backup location is an OpenShift cluster. You should specify the URL of the Kubernetes API.

  • Use infinispan+xsite:// if the backup location has a static hostname and port.

7 Specifies the access secret for a site.

This secret contains different authentication objects, depending on your Kubernetes environment.

8 Specifies the cluster name at the backup location if it is different from the cluster name at the local site.
9 Specifies the namespace of the Infinispan cluster at the backup location if it does not match the namespace at the local site.
10 Logs error messages for the JGroups TCP protocol.
11 Logs error messages for the JGroups RELAY2 protocol.

8.2. Manually Connecting Infinispan Clusters

Configure cross-site replication manually so Infinispan clusters running on OpenShift can back up to clusters running in platforms other than Kubernetes.

Manually configuring cross-site replication is also necessary when access to the Kubernetes API is not available outside the Kubernetes cluster where Infinispan runs.

8.2.1. Specifying Static Hosts and Ports for Infinispan Clusters

Specify static hosts and ports for Infinispan clusters so they can establish connections and form cross-site views.

Prerequisites
  • Have the host names and ports for each Infinispan cluster that you plan to configure as a backup location.

Procedure
  1. Create an Infinispan CR for each Infinispan cluster.

  2. Specify the name of the local site with spec.service.sites.local.name.

  3. Provide the name and static URL for each Infinispan cluster that acts as a backup location with spec.service.sites.locations, for example:

    • LON

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 3
        service:
          type: DataGrid
          sites:
            local:
              name: LON
              expose:
                type: LoadBalancer
            locations:
              - name: LON
                url: infinispan+xsite://infinispan-lon.myhost.com:7900
              - name: NYC
                url: infinispan+xsite://infinispan-nyc.myhost.com:7900
    • NYC

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 2
        service:
          type: DataGrid
          sites:
            local:
              name: NYC
              expose:
                type: LoadBalancer
            locations:
              - name: NYC
                url: infinispan+xsite://infinispan-nyc.myhost.com:7900
              - name: LON
                url: infinispan+xsite://infinispan-lon.myhost.com
  4. Adjust logging levels for cross-site replication as follows:

    spec:
      ...
      logging:
        categories:
          org.jgroups.protocols.TCP: error
          org.jgroups.protocols.relay.RELAY2: error

    The preceding configuration decreases logging for JGroups TCP and RELAY2 protocols to reduce excessive messages about cluster backup operations, which can result in a large number of log files that use container storage.

  5. Configure nodes with any other Data Grid Service resources.

  6. Apply the Infinispan CRs.

  7. Verify that Infinispan clusters form a cross-site view.

    1. Retrieve the Infinispan CR.

      $ kubectl get infinispan -o yaml
    2. Check for the type: CrossSiteViewFormed condition.

Next steps

If your clusters have formed a cross-site view, you can start adding backup locations to caches.

8.2.2. Resources for Manual Cross-Site Connections

spec:
  ...
  service:
    type: DataGrid (1)
    sites:
      local:
        name: LON (2)
        expose:
          type: LoadBalancer (3)
      locations: (4)
      - name: LON (5)
        url: infinispan+xsite://infinispan-lon.myhost.com:7900 (6)
      - name: NYC
        url: infinispan+xsite://infinispan-nyc.myhost.com:7900
  logging:
    categories:
      org.jgroups.protocols.TCP: error (7)
      org.jgroups.protocols.relay.RELAY2: error (8)
1 Specifies Data Grid Service. Infinispan supports cross-site replication with Data Grid Service clusters only.
2 Names the local site for an Infinispan cluster.
3 Defines the externally exposed service.
  • Use NodePort for local clusters on the same network.

  • Use LoadBalancer for independent OpenShift clusters.

4 Provides connection information for all backup locations.
5 Specifies a backup location that matches .spec.service.sites.local.name.
6 Specifies the static URL for the backup location in the format of infinispan+xsite://<hostname>:<port>. The default port is 7900.
7 Logs error messages for the JGroups TCP protocol.
8 Logs error messages for the JGroups RELAY2 protocol.

8.3. Configuring Sites in the Same Kubernetes Cluster

For evaluation and demonstration purposes, you can configure Infinispan to back up between nodes in the same Kubernetes cluster.

Procedure
  1. Create an Infinispan CR for each Infinispan cluster.

  2. Specify the name of the local site with spec.service.sites.local.name.

  3. Set ClusterIP as the value of the spec.service.sites.local.expose.type field.

  4. Provide the Infinispan cluster hostname as the URL for each backup location with spec.service.sites.locations.

    The following is an example Infinispan CR definition:

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-clustera
    spec:
      replicas: 1
      expose:
        type: LoadBalancer
      service:
        type: DataGrid
        sites:
          local:
            name: SiteA
            expose:
              type: ClusterIP
          locations:
            - name: SiteA
              url: infinispan+xsite://example-clustera-site (1)
            - name: SiteB
              url: infinispan+xsite://example-clusterb-site
    1 The value of the url field is the Kubernetes service name that resolves to an internal IP address.
  5. Configure nodes with any other Data Grid Service resources.

  6. Apply the Infinispan CRs.

  7. Verify that Infinispan clusters form a cross-site view.

    1. Retrieve the Infinispan CR.

      $ kubectl get infinispan -o yaml
    2. Check for the type: CrossSiteViewFormed condition.

9. Creating Caches with Infinispan Operator

Use Cache CRs to add cache configuration with Infinispan Operator and control how Infinispan stores your data.

9.1. Infinispan Caches

Cache configuration defines the characteristics and features of the data store and must conform to the Infinispan schema. Infinispan recommends creating standalone files in XML or JSON format that define your cache configuration. You should separate Infinispan configuration from application code for easier validation and to avoid the situation where you need to maintain XML snippets in Java or some other client language.

To create caches with Infinispan clusters running on Kubernetes, you should:

  • Use Cache CR as the mechanism for creating caches through the Kubernetes front end.

  • Use Batch CR to create multiple caches at a time from standalone configuration files.

  • Access Infinispan Console and create caches in XML or JSON format.

You can use Hot Rod or HTTP clients, but Infinispan recommends Cache CR or Batch CR unless your specific use case requires programmatic remote cache creation.

9.2. Cache CRs

The Cache CR is not yet functionally complete. The capability to create caches with Infinispan Operator is still under development and not recommended for production environments or critical workloads.

When using Cache CRs, the following rules apply:

  • Cache CRs apply to Data Grid Service nodes only.

  • You can create a single cache for each Cache CR.

  • If your Cache CR contains both a template and an XML configuration, Infinispan Operator uses the template.

  • You cannot edit caches through the OpenShift Web Console. Changes appear in the user interface but do not take effect on the Infinispan cluster. To change a cache configuration, you must first delete the cache through Infinispan Console or the CLI and then re-create it.

  • Deleting Cache CRs in the OpenShift Web Console does not remove caches from Infinispan clusters. You must delete caches through the console or CLI.

In previous versions, you needed to add credentials to a secret so that Infinispan Operator could access your cluster when creating caches.

That is no longer necessary. Infinispan Operator uses the operator user and corresponding password to perform cache operations.

9.3. Creating Infinispan Caches from XML

Complete the following steps to create caches on Data Grid Service clusters using valid infinispan.xml cache definitions.

Procedure
  1. Create a Cache CR that contains the XML cache definition you want to create.

    apiVersion: infinispan.org/v2alpha1
    kind: Cache
    metadata:
      name: mycachedefinition (1)
    spec:
      clusterName: example-infinispan (2)
      name: mycache (3)
      template: <infinispan><cache-container><distributed-cache name="mycache" mode="SYNC"><persistence><file-store/></persistence></distributed-cache></cache-container></infinispan> (4)
    1 Names the Cache CR.
    2 Specifies the name of the target Infinispan cluster where you want Infinispan Operator to create the cache.
    3 Names the cache on the Infinispan cluster.
    4 Specifies the XML cache definition to create the cache. Note that the name attribute is ignored. Only spec.name applies to the resulting cache.
  2. Apply the Cache CR, for example:

    $ kubectl apply -f mycache.yaml
    cache.infinispan.org/mycachedefinition created
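
You can then verify the cache resource that Infinispan Operator created, for example by listing Cache CRs with the fully qualified resource name:

$ kubectl get caches.infinispan.org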

9.4. Creating Infinispan Caches from Templates

Complete the following steps to create caches on Data Grid Service clusters using cache configuration templates.

Prerequisites
  • Identify the cache configuration template you want to use for your cache. You can find a list of available configuration templates in Infinispan Console.

Procedure
  1. Create a Cache CR that specifies the name of the template you want to use.

    For example, the following CR creates a cache named "mycache" that uses the org.infinispan.DIST_SYNC cache configuration template:

    apiVersion: infinispan.org/v2alpha1
    kind: Cache
    metadata:
      name: mycachedefinition (1)
    spec:
      clusterName: example-infinispan (2)
      name: mycache (3)
      templateName: org.infinispan.DIST_SYNC (4)
    1 Names the Cache CR.
    2 Specifies the name of the target Infinispan cluster where you want Infinispan Operator to create the cache.
    3 Names the Infinispan cache instance.
    4 Specifies the infinispan.org cache configuration template to create the cache.
  2. Apply the Cache CR, for example:

    $ kubectl apply -f mycache.yaml
    cache.infinispan.org/mycachedefinition created

9.5. Adding Backup Locations to Caches

When you configure Infinispan clusters to perform cross-site replication, you can add backup locations to your cache configurations.

Procedure
  1. Create cache configurations that name remote sites as backup locations.

    Infinispan replicates data based on cache names. For this reason, site names in your cache configurations must match site names, spec.service.sites.local.name, in your Infinispan CRs.

  2. Configure backup locations to go offline automatically with the take-offline element.

    1. Set the amount of time, in milliseconds, before backup locations go offline with the min-wait attribute.

  3. Define any other valid cache configuration.

  4. Add backup locations to the named cache on all sites in the global cluster.

    For example, if you add LON as a backup for NYC you should add NYC as a backup for LON.

The following configuration examples show backup locations for caches:

  • NYC

    <infinispan>
      <cache-container>
        <distributed-cache name="customers">
          <encoding media-type="application/x-protostream"/>
          <backups>
            <backup site="LON" strategy="SYNC">
              <take-offline min-wait="120000"/>
            </backup>
          </backups>
        </distributed-cache>
      </cache-container>
    </infinispan>
  • LON

    <infinispan>
      <cache-container>
        <replicated-cache name="customers">
          <encoding media-type="application/x-protostream"/>
          <backups>
            <backup site="NYC" strategy="ASYNC" >
                <take-offline min-wait="120000"/>
              </backup>
          </backups>
        </replicated-cache>
      </cache-container>
    </infinispan>

9.5.1. Performance Considerations with Taking Backup Locations Offline

Backup locations can automatically go offline when remote sites become unavailable. This prevents nodes from attempting to replicate data to offline backup locations, which would degrade cluster performance because each failed replication attempt results in an error.

You can configure how long to wait before backup locations go offline. A good rule of thumb is one or two minutes. However, you should test different wait periods and evaluate their performance impacts to determine the correct value for your deployment.

For instance, when OpenShift terminates the site master pod, that backup location becomes unavailable for a short period of time until Infinispan Operator elects a new site master. In this case, if the minimum wait time is not long enough, the backup location goes offline. You then need to bring it back online and perform state transfer operations to ensure the data is in sync.

Likewise, if the minimum wait time is too long, node CPU usage increases from failed backup attempts, which can lead to performance degradation.

9.6. Adding Persistent Cache Stores

You can add Single File cache stores to Data Grid Service nodes to save data to the persistent volume.

You configure cache stores as part of your Infinispan cache definition with the persistence element as follows:

<persistence>
   <file-store/>
</persistence>

Infinispan then creates a Single File cache store (a .dat file) in the /opt/infinispan/server/data directory.

Procedure
  • Add a cache store to your cache configurations as follows:

    <infinispan>
      <cache-container>
        <distributed-cache name="customers" mode="SYNC">
          <encoding media-type="application/x-protostream"/>
          <persistence>
            <file-store/>
          </persistence>
        </distributed-cache>
      </cache-container>
    </infinispan>

10. Running Batch Operations

Infinispan Operator provides a Batch CR that lets you create Infinispan resources in bulk. Batch CR uses the Infinispan command line interface (CLI) in batch mode to carry out sequences of operations.

10.1. Running Inline Batch Operations

Include your batch operations directly in a Batch CR if they do not require separate configuration artifacts.

Procedure
  1. Create a Batch CR.

    1. Specify the name of the Infinispan cluster where you want the batch operations to run as the value of the spec.cluster field.

    2. Add each CLI command to run on a line in the spec.config field.

      apiVersion: infinispan.org/v2alpha1
      kind: Batch
      metadata:
        name: mybatch
      spec:
        cluster: example-infinispan
        config: |
          create cache --template=org.infinispan.DIST_SYNC mycache
          put --cache=mycache hello world
          put --cache=mycache hola mundo
  2. Apply your Batch CR.

    $ kubectl apply -f mybatch.yaml
  3. Check the status.Phase field in the Batch CR to verify the operations completed successfully.
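
For example, you can retrieve the Batch CR and inspect its status:

$ kubectl get batches.infinispan.org mybatch -o yaml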

10.2. Running Batch Operations with ConfigMaps

Use a ConfigMap to make additional files, such as Infinispan cache configuration, available for batch operations.

Procedure
  1. Create a ConfigMap for your batch operations.

    1. Create a batch file that contains all commands you want to run.

      The ConfigMap is mounted in Infinispan pods at /etc/batch so you must prepend all --file= directives with that path.

      For example, create a cache named "mycache" from a configuration file and add two entries to it:

      create cache mycache --file=/etc/batch/mycache.xml
      put --cache=mycache hello world
      put --cache=mycache hola mundo
    2. Add all configuration artifacts that batch operations require to the same directory as the batch file.

      $ ls /tmp/mybatch
      
      batch
      mycache.xml
    3. Create a ConfigMap from the directory.

      $ kubectl create configmap mybatch-config-map --from-file=/tmp/mybatch
  2. Create a Batch CR.

    1. Specify the name of the Infinispan cluster where you want the batch operations to run as the value of the spec.cluster field.

    2. Set the name of the ConfigMap that contains your batch file and configuration artifacts with the spec.configMap field.

      apiVersion: infinispan.org/v2alpha1
      kind: Batch
      metadata:
        name: mybatch
      spec:
        cluster: example-infinispan
        configMap: mybatch-config-map
  3. Apply your Batch CR.

    $ kubectl apply -f mybatch.yaml
  4. Check the status.Phase field in the Batch CR to verify the operations completed successfully.

10.3. Batch Status Messages

Verify and troubleshoot batch operations with the status.Phase field in the Batch CR.

Phase          Description

Succeeded      All batch operations have completed successfully.
Initializing   Batch operations are queued and resources are initializing.
Initialized    Batch operations are ready to start.
Running        Batch operations are in progress.
Failed         One or more batch operations were not successful.

Failed operations

Batch operations are not atomic. If a command in a batch script fails, it does not affect the other operations or cause them to roll back.

If your batch operations fail because of server or syntax errors, you can view log messages in the status.Reason field of the Batch CR.

10.4. Example Batch Operations

Use these example batch operations as starting points for creating and modifying Infinispan resources with the Batch CR.

You can pass configuration files to Infinispan Operator only via a ConfigMap.

The ConfigMap is mounted in Infinispan pods at /etc/batch so you must prepend all --file= directives with that path.

10.4.1. Infinispan Users

Create several Infinispan users and assign them roles with varying levels of permission to access caches and interact with Infinispan resources.

echo "creating users..."
create user katie -p changeme1
create user john -p changeme2
create user mark -p changeme3
create user julia -p changeme4
echo "list users"
user ls

10.4.2. Caches

  • Create multiple caches from configuration files.

echo "creating caches..."
create cache sessions --file=/etc/batch/infinispan-prod-sessions.xml
create cache tokens --file=/etc/batch/infinispan-prod-tokens.xml
create cache people --file=/etc/batch/infinispan-prod-people.xml
create cache books --file=/etc/batch/infinispan-prod-books.xml
create cache authors --file=/etc/batch/infinispan-prod-authors.xml
echo "list caches in the cluster"
ls caches
  • Create a template from a file and then create caches from the template.

echo "creating caches..."
create cache mytemplate --file=/etc/batch/mycache.xml
create cache sessions --template=mytemplate
create cache tokens --template=mytemplate
echo "list caches in the cluster"
ls caches

10.4.3. Counters

Use the Batch CR to create multiple counters that can increment and decrement to record the count of objects.

You can use counters to generate identifiers, act as rate limiters, or track the number of times a resource is accessed.

echo "creating counters..."
create counter --concurrency-level=1 --initial-value=5 --storage=PERSISTENT --type=weak mycounter1
create counter --initial-value=3 --storage=PERSISTENT --type=strong mycounter2
create counter --initial-value=5 --storage=PERSISTENT --type=strong --upper-bound=10 mycounter3
echo "list counters in the cluster"
ls counters

10.4.4. Protobuf schema

Register Protobuf schema to query values in caches. Protobuf schema (.proto files) provide metadata about custom entities and control field indexing.

echo "creating schema..."
schema --upload=person.proto person.proto
schema --upload=book.proto book.proto
schema --upload=author.proto author.proto
echo "list Protobuf schema"
ls schemas

10.4.5. Tasks

Upload tasks that implement org.infinispan.tasks.ServerTask or scripts that are compatible with the javax.script scripting API.

echo "creating tasks..."
task upload --file=/etc/batch/myfirstscript.js myfirstscript
task upload --file=/etc/batch/mysecondscript.js mysecondscript
task upload --file=/etc/batch/mythirdscript.js mythirdscript
echo "list tasks"
ls tasks

11. Establishing Remote Client Connections

Connect to Infinispan clusters from the Infinispan Console, Command Line Interface (CLI), and remote clients.

11.1. Client Connection Details

Before you can connect to Infinispan, you need to retrieve the following pieces of information:

  • Service hostname

  • Port

  • Authentication credentials, if required

  • TLS certificate, if you use encryption

Service hostnames

The service hostname depends on how you expose Infinispan on the network and on whether your clients run inside or outside Kubernetes.

For clients running on Kubernetes, you can use the name of the internal service that Infinispan Operator creates.

For clients running outside Kubernetes, the service hostname depends on how you expose the service. If you use a load balancer, the service hostname is the hostname of the load balancer. For a node port service, the service hostname is the node host name. For a route, the service hostname is either a custom hostname or a system-defined hostname.

Ports

Client connections on Kubernetes and through load balancers use port 11222.

Node port services use a port in the range of 30000 to 60000. Routes use either port 80 (unencrypted) or 443 (encrypted).

11.2. Infinispan Caches

Cache configuration defines the characteristics and features of the data store and must conform to the Infinispan schema. Infinispan recommends creating standalone files in XML or JSON format that define your cache configuration. Keeping Infinispan configuration separate from application code makes validation easier and avoids maintaining XML snippets in Java or another client language.

To create caches with Infinispan clusters running on Kubernetes, you should:

  • Use Cache CR as the mechanism for creating caches through the Kubernetes front end.

  • Use Batch CR to create multiple caches at a time from standalone configuration files.

  • Access Infinispan Console and create caches in XML or JSON format.

You can use Hot Rod or HTTP clients, but Infinispan recommends Cache CR or Batch CR unless your specific use case requires programmatic remote cache creation.
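
For reference, a minimal Cache CR might look like the following sketch. The API version and the spec.clusterName and spec.templateName field names are assumptions based on the v2alpha1 Cache API; check the Cache CR reference for your Infinispan Operator version:

apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mycache
spec:
  # Assumed field: names the Infinispan cluster that hosts the cache.
  clusterName: example-infinispan
  # Assumed field: creates the cache from a built-in template.
  templateName: org.infinispan.DIST_SYNC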

11.3. Connecting the Infinispan CLI

Use the command line interface (CLI) to connect to your Infinispan cluster and perform administrative operations.

Prerequisites
  • Download the server distribution so you can run the CLI.

The CLI is available as part of the server distribution, which you can run on your local host to establish remote connections to Infinispan clusters on OpenShift.

Alternatively, you can use the infinispan/cli image at https://github.com/infinispan/infinispan-images.

It is possible to open a remote shell to an Infinispan node and access the CLI.

$ kubectl exec -it example-infinispan-0 -- /bin/bash

However, using the CLI in this way consumes memory allocated to the container, which can lead to out-of-memory exceptions.

Procedure
  1. Create a CLI connection to your Infinispan cluster.

    $ bin/cli.sh -c https://$SERVICE_HOSTNAME:$PORT --trustall

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.

  2. Enter your Infinispan credentials when prompted.

  3. Perform CLI operations as required, for example:

    1. List caches configured on the cluster with the ls command.

      [//containers/default]> ls caches
      mycache
    2. View cache configuration with the describe command.

      [//containers/default]> describe caches/mycache

11.4. Accessing Infinispan Console

Access the console to create caches, perform administrative operations, and monitor your Infinispan clusters.

Prerequisites
  • Expose Infinispan on the network so you can access the console through a browser.
    For example, configure a load balancer service or create a route.

Procedure
  • Access the console from any browser at $SERVICE_HOSTNAME:$PORT.

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.

11.5. Hot Rod Clients

Hot Rod is a binary TCP protocol that Infinispan provides for high-performance data transfer capabilities with remote clients.

Client intelligence

Client intelligence refers to mechanisms the Hot Rod protocol provides so that clients can locate and send requests to Infinispan nodes.

Hot Rod clients running on Kubernetes can access internal IP addresses for Infinispan nodes so you can use any client intelligence. The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.

Hot Rod clients running outside Kubernetes must use BASIC intelligence, as shown in the configuration examples in the following sections.

11.5.1. Hot Rod Configuration API

You can programmatically configure Hot Rod client connections with the ConfigurationBuilder interface.

$SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Infinispan cluster. You should replace these variables with the actual hostname and port for your environment.

On Kubernetes

Hot Rod clients running on Kubernetes can use the following configuration:

import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
import org.infinispan.client.hotrod.impl.ConfigurationProperties;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port(ConfigurationProperties.DEFAULT_HOTROD_PORT)
             .security().authentication()
               .username("username")
               .password("password")
               .realm("default")
               .saslQop(SaslQop.AUTH)
               .saslMechanism("SCRAM-SHA-512")
             .ssl()
               .sniHostName("$SERVICE_HOSTNAME")
               .trustStorePath("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt");
Outside Kubernetes

Hot Rod clients running outside Kubernetes can use the following configuration:

import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port($PORT)
             .security().authentication()
               .username("username")
               .password("password")
               .realm("default")
               .saslQop(SaslQop.AUTH)
               .saslMechanism("SCRAM-SHA-512")
             .ssl()
               .sniHostName("$SERVICE_HOSTNAME")
               .trustStorePath("/path/to/tls.crt");
      builder.clientIntelligence(ClientIntelligence.BASIC);

11.5.2. Hot Rod Client Properties

You can configure Hot Rod client connections with the hotrod-client.properties file on the application classpath.

$SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Infinispan cluster. You should replace these variables with the actual hostname and port for your environment.

On Kubernetes

Hot Rod clients running on Kubernetes can use the following properties:

# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
# Path to the TLS certificate.
# Clients automatically generate trust stores from certificates.
infinispan.client.hotrod.trust_store_path=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
Outside Kubernetes

Hot Rod clients running outside Kubernetes can use the following properties:

# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT

# Client intelligence
infinispan.client.hotrod.client_intelligence=BASIC

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
# Path to the TLS certificate.
# Clients automatically generate trust stores from certificates.
infinispan.client.hotrod.trust_store_path=tls.crt

11.5.3. Creating Caches with Hot Rod Clients

You can remotely create caches on Infinispan clusters running on Kubernetes with Hot Rod clients. However, Infinispan recommends that you create caches using Infinispan Console, the CLI, or with Cache CRs instead of with Hot Rod clients.

Programmatically creating caches

The following example shows how to add cache configurations to the ConfigurationBuilder and then create them with the RemoteCacheManager:

import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
...

      builder.remoteCache("my-cache")
             .templateName(DefaultTemplate.DIST_SYNC);
      builder.remoteCache("another-cache")
             .configuration("<infinispan><cache-container><distributed-cache name=\"another-cache\"><encoding media-type=\"application/x-protostream\"/></distributed-cache></cache-container></infinispan>");
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
         // Get a remote cache that does not exist.
         // Rather than return null, create the cache from a template.
         RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
         // Store a value.
         cache.put("hello", "world");
         // Retrieve the value and print it.
         System.out.printf("value = %s\n", cache.get("hello"));
      }
This example shows how to create a cache named CacheWithXMLConfiguration using the XMLStringConfiguration class to pass the cache configuration as XML:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;
...
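// Assumes an existing RemoteCacheManager instance named 'manager'.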

private void createCacheWithXMLConfiguration() {
    String cacheName = "CacheWithXMLConfiguration";
    String xml = String.format("<infinispan>" +
                                  "<cache-container>" +
                                  "<distributed-cache name=\"%s\" mode=\"SYNC\">" +
                                    "<encoding media-type=\"application/x-protostream\"/>" +
                                    "<locking isolation=\"READ_COMMITTED\"/>" +
                                    "<transaction mode=\"NON_XA\"/>" +
                                    "<expiration lifespan=\"60000\" interval=\"20000\"/>" +
                                  "</distributed-cache>" +
                                  "</cache-container>" +
                                "</infinispan>"
                                , cacheName);
    manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
    System.out.println("Cache with configuration exists or is created.");
}
Using Hot Rod client properties

When you call cacheManager.getCache() for a named cache that does not exist, Infinispan creates it from the Hot Rod client properties instead of returning null.

Add cache configuration to Hot Rod client properties as in the following example:

# Add cache configuration
infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC
infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>
infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml

11.6. Accessing the REST API

Infinispan provides a RESTful interface that you can interact with using HTTP clients.

Prerequisites
  • Expose Infinispan on the network so you can access the REST API.
    For example, configure a load balancer service or create a route.

Procedure
  • Access the REST API with any HTTP client at $SERVICE_HOSTNAME:$PORT/rest/v2.

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.


11.7. Adding Caches to Cache Service Nodes

Cache Service nodes include a default cache configuration with recommended settings. This default cache lets you start using Infinispan without the need to create caches.

Because the default cache provides recommended settings, you should create caches only as copies of the default. If you need multiple custom caches, create Data Grid Service nodes instead of Cache Service nodes.

Procedure
  • Access the Infinispan Console and provide a copy of the default configuration in XML or JSON format.

  • Use the Infinispan CLI to create a copy from the default cache as follows:

    [//containers/default]> create cache --template=default mycache

11.7.1. Default Cache Configuration

The default cache for Cache Service nodes is as follows:

<infinispan>
  <cache-container>
    <distributed-cache name="default" (1)
                       mode="SYNC" (2)
                       owners="2"> (3)
      <memory storage="OFF_HEAP" (4)
              max-size="<maximum_size_in_bytes>" (5)
              when-full="REMOVE" /> (6)
      <partition-handling when-split="ALLOW_READ_WRITES" (7)
                          merge-policy="REMOVE_ALL"/> (8)
    </distributed-cache>
  </cache-container>
</infinispan>
1 Names the cache instance as "default".
2 Uses synchronous distribution for storing data across the cluster.
3 Configures two replicas of each cache entry on the cluster.
4 Stores cache entries as bytes in native memory (off-heap).
5 Defines the maximum size for the data container in bytes. Infinispan Operator calculates the maximum size when it creates nodes.
6 Evicts cache entries to control the size of the data container. You can enable automatic scaling so that Infinispan Operator adds nodes when memory usage increases instead of removing entries.
7 Names a conflict resolution strategy that allows read and write operations for cache entries, even if segment owners are in different partitions.
8 Specifies a merge policy that removes entries from the cache when Infinispan detects conflicts.

12. Monitoring Infinispan Services

Infinispan exposes metrics that can be used by Prometheus and Grafana for monitoring and visualizing the cluster state.

This documentation explains how to set up monitoring on OpenShift Container Platform. If you work with community Prometheus deployments, you might find these instructions useful as a general guide. However, you should refer to the Prometheus documentation for installation and usage instructions.

See the Prometheus Operator documentation.

12.1. Creating a Prometheus Service Monitor

Create a Prometheus ServiceMonitor that scrapes metrics from an Infinispan cluster.

Prerequisites
  • Have an oc client.

  • Enable monitoring for user-defined projects on OpenShift Container Platform.

Procedure
  1. Create an authentication secret for the operator user.

    Prometheus needs credentials to authenticate with Infinispan and must use the operator user and corresponding password.

    1. Retrieve the operator user credentials.

      $ oc get secret example-infinispan-generated-operator-secret \
      -o jsonpath="{.data.identities\.yaml}" | base64 --decode
      
      credentials:
      - username: operator
        password: O9R95c56fI4WhGeW
    2. Create an authentication secret, for example:

      apiVersion: v1
      stringData:
        username: operator # The operator user.
        password: O9R95c56fI4WhGeW # Corresponding password.
      kind: Secret
      metadata:
        name: basic-auth
      type: Opaque
    3. Add the authentication secret to your Infinispan cluster namespace.

      $ oc apply -f basic-auth.yaml
  2. Create a Prometheus ServiceMonitor that scrapes Infinispan metrics.

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: prometheus
      # Specifies a name for the ServiceMonitor.
      # The name must be unique to each Infinispan cluster.
      # For simplicity, add the "-monitor" suffix to the Infinispan cluster name.
      name: example-infinispan-monitor
      # Specifies a namespace for the ServiceMonitor.
      namespace: ispn-namespace
    spec:
      endpoints:
        - port: infinispan-adm
          path: /metrics
          honorLabels: true
          basicAuth:
            username:
              key: username
              # Specifies the name of the authentication secret that holds credentials for the operator user.
              name: basic-auth
            password:
              key: password
              # Specifies the name of the authentication secret that holds credentials for the operator user.
              name: basic-auth
          interval: 30s
          scrapeTimeout: 10s
          scheme: http
      namespaceSelector:
        # Specifies the namespace where your Infinispan cluster runs.
        matchNames:
          - ispn-namespace
      selector:
        matchLabels:
          app: infinispan-service
          # Specifies the name of your Infinispan cluster.
          clusterName: example-infinispan
  3. Add the ServiceMonitor.

    $ oc apply -f service-monitor.yaml
Verification

You can check that Prometheus is scraping Infinispan metrics as follows:

  1. In the OpenShift Web Console, select the </> Developer perspective and then select Monitoring.

  2. Open the Dashboard tab for the namespace where your Infinispan cluster runs.

  3. Open the Metrics tab and confirm that you can query Infinispan metrics such as:

    vendor_cache_manager_default_cluster_size

12.2. Creating Grafana Data Sources

Create a GrafanaDatasource CR so you can visualize Infinispan metrics in Grafana dashboards.

Prerequisites
  • Have cluster-admin access to OpenShift Container Platform.

  • Create a Prometheus ServiceMonitor that scrapes Infinispan metrics.

  • Install the Grafana Operator from the alpha channel and create a Grafana CR.

Procedure
  1. Create a ServiceAccount that lets Grafana read Infinispan metrics from Prometheus.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      # Names the service account for the Grafana data source.
      name: infinispan-monitoring
    1. Apply the ServiceAccount.

      $ oc apply -f service-account.yaml
    2. Grant cluster-monitoring-view permissions to the ServiceAccount.

      $ oc adm policy add-cluster-role-to-user cluster-monitoring-view -z infinispan-monitoring
  2. Create a Grafana data source.

    1. Retrieve the token for the ServiceAccount.

      $ oc serviceaccounts get-token infinispan-monitoring
      
      eyJhbGciOiJSUzI1NiIsImtpZCI6Imc4O...
    2. Define a GrafanaDataSource that includes the token as follows:

      apiVersion: integreatly.org/v1alpha1
      kind: GrafanaDataSource
      metadata:
        name: grafanadatasource
      spec:
        name: datasource.yaml
        datasources:
          - access: proxy
            editable: true
            isDefault: true
            jsonData:
              httpHeaderName1: Authorization
              timeInterval: 5s
              tlsSkipVerify: true
            name: Prometheus
            secureJsonData:
              # Specifies the token for the Grafana ServiceAccount.
              httpHeaderValue1: >-
                Bearer
                eyJhbGciOiJSUzI1NiIsImtpZCI6Imc4O...
            type: prometheus
            url: 'https://thanos-querier.openshift-monitoring.svc.cluster.local:9091'
  3. Apply the GrafanaDataSource.

    $ oc apply -f grafana-datasource.yaml
Next steps

Enable Grafana dashboards with the Infinispan Operator configuration properties.

12.3. Configuring Infinispan Dashboards

Infinispan Operator provides global configuration properties that let you configure Grafana dashboards for Infinispan clusters.

You can modify global configuration properties while Infinispan Operator is running.

Prerequisites
  • Infinispan Operator must watch the namespace where the Grafana Operator is running.

Procedure
  1. Create a ConfigMap named infinispan-operator-config in the Infinispan Operator namespace.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: infinispan-operator-config
    data:
      # Specifies the namespace of an Infinispan cluster for which Infinispan Operator creates a Grafana dashboard.
      # Deleting the value removes the dashboard.
      # Changing the value moves the dashboard to that namespace.
      grafana.dashboard.namespace: example-infinispan
      # Names the dashboard.
      grafana.dashboard.name: infinispan
      # Labels the Dashboard CR resource.
      grafana.dashboard.monitoring.key: middleware
  2. Specify a namespace with the grafana.dashboard.namespace property.

  3. Specify values for other data.grafana.dashboard properties as required.

  4. Create infinispan-operator-config or update the configuration.

    $ oc apply -f infinispan-operator-config.yaml
  5. Open the Grafana UI, which is available at:

    $ oc get routes grafana-route -o jsonpath=https://"{.spec.host}"

13. Monitoring Infinispan Logs

Set logging categories to different message levels to monitor, debug, and troubleshoot Infinispan clusters.

13.1. Configuring Infinispan Logging

Procedure
  1. Specify logging configuration with spec.logging in your Infinispan CR and then apply the changes.

    spec:
      ...
      logging: (1)
        categories: (2)
          org.infinispan: debug (3)
          org.jgroups: debug
    1 Configures Infinispan logging.
    2 Adds logging categories.
    3 Names logging categories and levels.

    The root logging category is org.infinispan and is INFO by default.

  2. Retrieve logs from Infinispan nodes as required.

    $ kubectl logs -f $POD_NAME

13.2. Log Levels

Log levels indicate the nature and severity of messages.

Log level  Description

trace      Provides detailed information about the running state of applications. This is the most verbose log level.
debug      Indicates the progress of individual requests or activities.
info       Indicates overall progress of applications, including lifecycle events.
warn       Indicates circumstances that can lead to errors or degrade performance.
error      Indicates error conditions that might prevent operations or activities from succeeding but do not prevent applications from running.

14. Backing Up and Restoring Infinispan Clusters

Infinispan Operator watches for custom resources (CR) that let you back up and restore Infinispan cluster state for disaster recovery or when migrating between Infinispan versions.

Backup CR

Archives Infinispan cluster content to a persistent volume.

Restore CR

Restores archived content to an Infinispan cluster.

14.1. Backing Up Infinispan Clusters

Create a backup file that stores Infinispan cluster state to a persistent volume.

Prerequisites
  • Create an Infinispan CR with spec.service.type: DataGrid.

  • Have some resources on your Infinispan cluster to back up. Backups archive all resources that the Cache Manager controls, including caches, cache entries, cache templates, Protobuf schema, counters, scripts, and so on.

Infinispan backups do not provide snapshot isolation. If a write operation occurs on a cache entry that the backup operation has already archived, that write might not be backed up. To ensure that you archive the exact state of the cluster, make sure there are no active client connections to the cluster before you back it up.

Procedure
  1. Create a Backup CR.

    For example, create my-backup.yaml with the following:

    apiVersion: infinispan.org/v2alpha1
    kind: Backup (1)
    metadata:
      name: my-backup (2)
    spec:
      cluster: source-cluster (3)
    1 Specifies a Backup CR.
    2 Names the backup file.
    3 Specifies the name of the Infinispan cluster that you want to back up.
  2. Add the spec.resources field to back up certain resources only.

    spec:
      ...
      resources:
        templates: (1)
          - distributed-sync-prod
          - distributed-sync-dev
        caches: (2)
          - cache-one
          - cache-two
        counters: (3)
          - counter-name
        protoSchemas: (4)
          - authors.proto
          - books.proto
        tasks: (5)
          - wordStream.js
    1 Cache templates.
    2 Caches by name.
    3 Counters by name.
    4 Protobuf schemas for querying.
    5 Custom server tasks.
  3. Apply your Backup CR.

    $ kubectl apply -f my-backup.yaml

    A new pod joins the Infinispan cluster and creates the backup file. When the operation is complete, the pod leaves the cluster and logs the following message:

    ISPN005044: Backup file created 'my-backup.zip'

    The resulting backup file is stored in the /opt/infinispan/backups directory.

  4. Run the following command to verify that the backup is successful:

    $ kubectl describe Backup my-backup -n namespace

14.2. Restoring Infinispan Clusters

Restore Infinispan cluster state from a backup archive.

Prerequisites
  • Create a Backup CR on a source cluster.

  • Create a Infinispan cluster of Data Grid Service nodes where you want to restore state.

    Make sure there are no active client connections to the cluster before you restore the backup. Cache entries that you restore could overwrite more recent entries. For example, if a client writes cache.put(k=2) before you restore a backup that contains k=1, the restore overwrites the newer value.

Procedure
  1. Create a Restore CR.

    For example, create my-restore.yaml with the following:

    apiVersion: infinispan.org/v2alpha1
    kind: Restore (1)
    metadata:
      name: my-restore (2)
    spec:
      backup: my-backup (3)
      cluster: target-cluster (4)
    1 Specifies a Restore CR.
    2 Provides a unique name for the Restore CR.
    3 Specifies the name of the Backup CR.
    4 Specifies the name of the Infinispan CR.
  2. Add the spec.resources field to restore specific resources only.

    spec:
      ...
      resources:
        templates: (1)
          - distributed-sync-prod
          - distributed-sync-dev
        caches: (2)
          - cache-one
          - cache-two
        counters: (3)
          - counter-name
        protoSchemas: (4)
          - authors.proto
          - books.proto
        tasks: (5)
          - wordStream.js
    1 Cache templates.
    2 Caches by name.
    3 Counters by name.
    4 Protobuf schemas for querying.
    5 Custom server tasks.
  3. Apply your Restore CR.

    $ kubectl apply -f my-restore.yaml

    A new pod joins the Infinispan cluster and restores state from the backup file. When the operation is complete, the pod leaves the cluster and logs the following message:

    ISPN005045: Restore 'my-backup' complete
  4. Open the Infinispan Console or establish a CLI connection to verify the caches and data are restored to the cluster.

15. Guaranteeing Availability with Anti-Affinity

Kubernetes includes anti-affinity capabilities that protect workloads from single points of failure.

15.1. Anti-Affinity Strategies

Each Infinispan node in a cluster runs in a pod on a Kubernetes node, and each Kubernetes node runs on a physical host system. Anti-affinity works by distributing Infinispan nodes across Kubernetes nodes, ensuring that your Infinispan clusters remain available even if hardware failures occur.

Infinispan Operator offers two anti-affinity strategies:

kubernetes.io/hostname

Infinispan replica pods are scheduled on different Kubernetes nodes.

topology.kubernetes.io/zone

Infinispan replica pods are scheduled across multiple zones.

Fault tolerance

Anti-affinity strategies guarantee cluster availability in different ways.

The equations in the following section apply only if the number of Kubernetes nodes or zones is greater than the number of Infinispan nodes.

Scheduling pods on different Kubernetes nodes

Provides tolerance of x node failures for the following types of cache:

  • Replicated: x = spec.replicas - 1

  • Distributed: x = num_owners - 1

Scheduling pods across multiple zones

Provides tolerance of x zone failures when x zones exist for the following types of cache:

  • Replicated: x = spec.replicas - 1

  • Distributed: x = num_owners - 1

spec.replicas

Defines the number of pods in each Infinispan cluster.

num_owners

Is the cache configuration attribute that defines the number of replicas for each entry in the cache.
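
For example, with spec.replicas set to 3 and pods scheduled on different Kubernetes nodes, a replicated cache tolerates two node failures, while a distributed cache with num_owners="2" tolerates only one.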

15.2. Configuring Anti-Affinity

Specify where Kubernetes schedules pods for your Infinispan clusters to ensure availability.

Procedure
  1. Add the spec.affinity block to your Infinispan CR.

  2. Configure anti-affinity strategies as necessary.

  3. Apply your Infinispan CR.

15.3. Anti-Affinity Strategy Configurations

Configure anti-affinity strategies in your Infinispan CR to control where Kubernetes schedules Infinispan replica pods.

Schedule pods on different Kubernetes nodes

The following is the anti-affinity strategy that Infinispan Operator uses if you do not configure the spec.affinity field in your Infinispan CR:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100 (1)
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: <cluster_name>
              infinispan_cr: <cluster_name>
          topologyKey: "kubernetes.io/hostname" (2)
1 Sets the hostname strategy as most preferred.
2 Schedules Infinispan replica pods on different Kubernetes nodes.
Requiring different nodes
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: (1)
      - labelSelector:
          matchLabels:
            app: infinispan-pod
            clusterName: <cluster_name>
            infinispan_cr: <cluster_name>
        topologyKey: "kubernetes.io/hostname"
1 Kubernetes does not schedule Infinispan pods if there are no different nodes available.

To ensure that you can schedule Infinispan replica pods on different Kubernetes nodes, the number of Kubernetes nodes available must be greater than the value of spec.replicas.

Schedule pods across multiple Kubernetes zones

The following example prefers multiple zones when scheduling pods:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100 (1)
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: <cluster_name>
              infinispan_cr: <cluster_name>
          topologyKey: "topology.kubernetes.io/zone" (2)
      - weight: 90 (3)
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: <cluster_name>
              infinispan_cr: <cluster_name>
          topologyKey: "kubernetes.io/hostname" (4)
1 Sets the zone strategy as most preferred.
2 Schedules Infinispan replica pods across multiple zones.
3 Sets the hostname strategy as next preferred.
4 Schedules Infinispan replica pods on different Kubernetes nodes if it is not possible to schedule across zones.
Requiring multiple zones
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: (1)
      - labelSelector:
          matchLabels:
            app: infinispan-pod
            clusterName: <cluster_name>
            infinispan_cr: <cluster_name>
        topologyKey: "topology.kubernetes.io/zone"
1 Uses the zone strategy only when scheduling Infinispan replica pods.

16. Deploying Custom Code to Infinispan Clusters

Add custom code, such as scripts and event listeners, to your Infinispan clusters.

16.1. Copying Code Artifacts

Before you can deploy custom code to Infinispan clusters, you need to make it available to pods. To do this, you create a temporary pod that loads your code artifacts into a persistent volume claim (PVC).

The steps in this procedure offer one solution for making code artifacts available to Infinispan clusters. There are several ways to do this, so you can adapt these steps as needed or use any alternative method you have in place for copying code into PVCs.

Procedure
  1. Create a PVC with ReadOnlyMany or ReadWriteMany access mode.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: datagrid-libs
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 100Mi
  2. Change to the namespace for your Infinispan cluster.

    $ kubectl config set-context --current --namespace=ispn-namespace
  3. Apply your PVC.

    $ kubectl apply -f datagrid-libs.yaml
  4. Create a pod that mounts the PVC, for example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: datagrid-libs-pod
    spec:
      volumes:
        - name: lib-pv-storage
          persistentVolumeClaim:
            claimName: datagrid-libs
      containers:
        - name: lib-pv-container
          image: quay.io/infinispan/server:12.0
          volumeMounts:
            - mountPath: /tmp/libs
              name: lib-pv-storage
  5. Add the pod to the Infinispan namespace and wait for it to be ready.

    $ kubectl apply -f datagrid-libs-pod.yaml
    $ kubectl wait --for=condition=ready --timeout=2m pod/datagrid-libs-pod
  6. Copy your code artifacts to the pod so that they are loaded into the PVC.

    For example, to copy code artifacts from a local libs directory, do the following:

    $ kubectl cp --no-preserve=true libs datagrid-libs-pod:/tmp/
  7. Delete the pod.

    $ kubectl delete pod datagrid-libs-pod
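
    Deleting the pod does not delete the PVC. Your code artifacts remain in the persistent volume and are available to mount into Infinispan pods.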

16.2. Deploying Custom Code

Make your custom code available to Infinispan clusters.

Prerequisites
  • Create a persistent volume claim (PVC) and copy your code artifacts to it.

Procedure
  • Specify the persistent volume claim with spec.dependencies.volumeClaimName in your Infinispan CR and then apply the changes.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
      dependencies:
        # Names the persistent volume claim that contains custom code.
        volumeClaimName: datagrid-libs
      service:
        type: DataGrid

17. Reference

Find information about Infinispan services and clusters that you create with Infinispan Operator.

17.1. Image Resource

spec:
  image: quay.io/infinispan/server:latest (1)
1 Lets you specify an Infinispan image to use.

17.2. Network Services

Internal service
  • Allows Infinispan nodes to discover each other and form clusters.

  • Provides access to Infinispan endpoints from clients in the same Kubernetes namespace.

Service              Port   Protocol  Description

<cluster_name>       11222  TCP       Internal access to Infinispan endpoints
<cluster_name>-ping  8888   TCP       Cluster discovery

External service

Provides access to Infinispan endpoints from clients outside Kubernetes or in different namespaces.

You must create the external service with Infinispan Operator. It is not available by default.

Service                  Port   Protocol  Description

<cluster_name>-external  11222  TCP       External access to Infinispan endpoints.

Cross-site service

Allows Infinispan to back up data between clusters in different locations.

Service              Port  Protocol  Description

<cluster_name>-site  7900  TCP       JGroups RELAY2 channel for cross-site communication.

Additional resources

Creating Network Services