The Infinispan Operator provides operational intelligence and reduces management complexity for deploying Infinispan on Kubernetes and Red Hat OpenShift.

Infinispan Operator 2.1 corresponds to Infinispan 12.1.

1. Installing Infinispan Operator

Install Infinispan Operator into a Kubernetes namespace to create and manage Infinispan clusters.

1.1. Installing Infinispan Operator on Red Hat OpenShift

Create subscriptions to Infinispan Operator on OpenShift so you can install different Infinispan versions and receive automatic updates.

Automatic updates apply to Infinispan Operator first and then to each Infinispan node. Infinispan Operator updates clusters one node at a time, gracefully shutting down each node and then bringing it back online with the updated version before going on to the next node.

Prerequisites
  • Access to OperatorHub running on OpenShift. Some OpenShift environments, such as OpenShift Container Platform, can require administrator credentials.

  • Have an OpenShift project for Infinispan Operator if you plan to install it into a specific namespace.

Procedure
  1. Log in to the OpenShift Web Console.

  2. Navigate to OperatorHub.

  3. Find and select Infinispan Operator.

  4. Select Install and continue to Create Operator Subscription.

  5. Specify options for your subscription.

    Installation Mode

    You can install Infinispan Operator into a Specific namespace or All namespaces.

    Update Channel

    Subscribe to updates for Infinispan Operator versions.

    Approval Strategies

    When new Infinispan versions become available, you can install updates manually or let Infinispan Operator install them automatically.

  6. Select Subscribe to install Infinispan Operator.

  7. Navigate to Installed Operators to verify the Infinispan Operator installation.

1.2. Installing Infinispan Operator from OperatorHub.io

Use the command line to install Infinispan Operator from OperatorHub.io.

Prerequisites
  • OKD 3.11 or later.

  • Kubernetes 1.11 or later.

  • Have administrator access on the Kubernetes cluster.

  • Have a kubectl or oc client.

Procedure
  1. Navigate to the Infinispan Operator entry on OperatorHub.io.

  2. Follow the instructions to install Infinispan Operator into your Kubernetes cluster.
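
At the time of writing, the OperatorHub.io entry provides install commands similar to the following. The exact OLM version and manifest URL come from the entry itself, so treat this as a sketch:

$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/<olm_version>/install.sh | bash -s <olm_version>
$ kubectl create -f https://operatorhub.io/install/infinispan.yaml
$ kubectl get csv -n operators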

1.3. Building and installing Infinispan Operator manually

Manually build and install Infinispan Operator from the GitHub repository.

Procedure
  • Clone the Infinispan Operator GitHub repository and follow the build and deployment instructions in the repository README.
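
For example, to get the source before following the README (build and deployment targets vary by release):

$ git clone https://github.com/infinispan/infinispan-operator.git
$ cd infinispan-operator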

1.4. Infinispan cluster upgrades

Infinispan Operator can automatically upgrade Infinispan clusters when new versions become available. You can also perform upgrades manually if you prefer to control when they occur.

Infinispan Operator requires the Operator Lifecycle Manager to perform cluster upgrades.

Upgrade notifications

If you upgrade Infinispan clusters manually and have changed the channel for your Infinispan Operator subscription from 2.0.x to 2.1.x, you should apply the upgrade to the latest Infinispan 12.x version as soon as possible. This upgrade avoids potential data loss that can occur in earlier versions with ISPN-13116.

2. Getting started with Infinispan CR

After you install Infinispan Operator, learn how to create Infinispan clusters on Kubernetes.

2.1. Infinispan custom resource (CR)

Infinispan Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Infinispan clusters as complex units on Kubernetes.

Infinispan Operator watches for Infinispan Custom Resources (CR) that you use to instantiate and configure Infinispan clusters and manage Kubernetes resources, such as StatefulSets and Services. In this way, the Infinispan CR is your primary interface to Infinispan on Kubernetes.

The minimal Infinispan CR is as follows:

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
spec:
  replicas: 2
Field Description

apiVersion

Declares the version of the Infinispan API.

kind

Declares the Infinispan CR.

metadata.name

Specifies a name for your Infinispan cluster.

spec.replicas

Specifies the number of pods in your Infinispan cluster.

2.2. Creating Infinispan clusters

Use Infinispan Operator to create clusters of two or more Infinispan pods.

Prerequisites
  • Install Infinispan Operator.

  • Have an oc or a kubectl client.

Procedure
  1. Specify the number of Infinispan pods in the cluster with spec.replicas in your Infinispan CR.

    For example, create a cr_minimal.yaml file as follows:

    $ cat > cr_minimal.yaml<<EOF
    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
    EOF
  2. Apply your Infinispan CR.

    $ kubectl apply -f cr_minimal.yaml
  3. Watch Infinispan Operator create the Infinispan pods.

    $ kubectl get pods -w
    
    NAME                        READY  STATUS              RESTARTS   AGE
    example-infinispan-0        0/1    ContainerCreating   0          4s
    example-infinispan-1        0/1    ContainerCreating   0          4s
    infinispan-operator-0       1/1    Running             0          3m
    example-infinispan-0        1/1    Running             0          8s
    example-infinispan-1        1/1    Running             0          8s
Next Steps

Try changing the value of replicas: and watching Infinispan Operator scale the cluster up or down.

2.3. Verifying Infinispan clusters

Check that Infinispan pods have successfully formed clusters.

Procedure
  • Retrieve the Infinispan CR for Infinispan Operator.

    $ kubectl get infinispan -o yaml

    The response indicates that Infinispan pods have received clustered views, as in the following example:

    conditions:
      - message: 'View: [example-infinispan-0, example-infinispan-1]'
        status: "True"
        type: wellFormed

For automated scripts, wait for the wellFormed condition instead:

$ kubectl wait --for condition=wellFormed --timeout=240s infinispan/example-infinispan

Alternatively, you can retrieve the cluster view from the logs as follows:

$ kubectl logs example-infinispan-0 | grep ISPN000094

INFO  [org.infinispan.CLUSTER] (MSC service thread 1-2) \
ISPN000094: Received new cluster view for channel infinispan: \
[example-infinispan-0|0] (1) [example-infinispan-0]

INFO  [org.infinispan.CLUSTER] (jgroups-3,example-infinispan-0) \
ISPN000094: Received new cluster view for channel infinispan: \
[example-infinispan-0|1] (2) [example-infinispan-0, example-infinispan-1]

2.4. Stopping and starting Infinispan clusters

Stop and start Infinispan pods in a graceful, ordered fashion to correctly preserve cluster state.

Clusters of Data Grid Service pods must restart with the same number of pods that existed before shutdown. This allows Infinispan to restore the distribution of data across the cluster. After Infinispan Operator fully restarts the cluster you can safely add and remove pods.

Procedure
  1. Change the spec.replicas field to 0 to stop the Infinispan cluster.

    spec:
      replicas: 0
  2. Ensure you have the correct number of pods before you restart the cluster.

    $ kubectl get infinispan example-infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'
  3. Change the spec.replicas field to the same number of pods to restart the Infinispan cluster.

    spec:
      replicas: 6
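
If you prefer the command line to editing the Infinispan CR, a kubectl patch can make the same changes. This is a sketch that assumes the cluster is named example-infinispan:

$ kubectl patch infinispan example-infinispan --type merge -p '{"spec":{"replicas":0}}'
$ kubectl patch infinispan example-infinispan --type merge -p '{"spec":{"replicas":6}}'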

3. Setting up Infinispan services

Use Infinispan Operator to create clusters of either Cache Service or Data Grid Service pods.

If you do not specify a value for the spec.service.type field, Infinispan Operator creates Cache Service pods by default.

You cannot change the spec.service.type field after you create pods. To change the service type, you must delete the existing pods and create new ones.

3.1. Service types

Services are stateful applications, based on the Infinispan Server image, that provide flexible and robust in-memory data storage.

3.1.1. Data Grid Service

Deploy clusters of Data Grid Service pods if you want to:

  • Back up data across global clusters with cross-site replication.

  • Create caches with any valid configuration.

  • Add file-based cache stores to save data in a persistent volume.

  • Query values across caches using the Infinispan Query API.

  • Use advanced Infinispan features and capabilities.

3.1.2. Cache Service

Deploy clusters of Cache Service pods if you want a low-latency data store with minimal configuration.

Cache Service pods provide volatile storage only, which means you lose all data when you modify your Infinispan CR or update the version of your Infinispan cluster. However, if you want to quickly provide applications with high-performance caching without the overhead of configuration, you can use Cache Service pods to:

  • Automatically scale to meet capacity when data storage demands go up or down.

  • Synchronously distribute data to ensure consistency.

  • Replicate each entry in the cache across the cluster.

  • Store cache entries off-heap and use eviction for JVM efficiency.

  • Ensure data consistency with a default partition handling configuration.

The Infinispan team recommends that you deploy Data Grid Service pods instead of Cache Service pods.

Cache Service is planned for removal in the next version of the Infinispan CRD. Data Grid Service remains under active development and will continue to benefit from new features and improved tooling to automate complex operations such as upgrading clusters and migrating data.

3.2. Creating Data Grid Service pods

To use custom cache definitions along with Infinispan capabilities such as cross-site replication, create clusters of Data Grid Service pods.

Procedure
  1. Create an Infinispan CR that sets spec.service.type: DataGrid and configures any other Data Grid Service resources.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
      service:
        type: DataGrid
  2. Apply your Infinispan CR to create the cluster.

3.2.1. Data Grid Service CR

This topic describes the Infinispan CR for Data Grid Service pods.

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
  annotations:
    infinispan.org/monitoring: 'true'
spec:
  replicas: 6
  service:
    type: DataGrid
    container:
      storage: 2Gi
      ephemeralStorage: false
      storageClassName: my-storage-class
    sites:
      local:
        name: azure
        expose:
          type: LoadBalancer
      locations:
      - name: azure
        url: openshift://api.azure.host:6443
        secretName: azure-token
      - name: aws
        clusterName: example-infinispan
        namespace: ispn-namespace
        url: openshift://api.aws.host:6443
        secretName: aws-token
  security:
    endpointSecretName: endpoint-identities
    endpointEncryption:
      type: Secret
      certSecretName: tls-secret
  container:
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"
    cpu: "1000m"
    memory: 1Gi
  logging:
    categories:
      org.infinispan: debug
      org.jgroups: debug
      org.jgroups.protocols.TCP: error
      org.jgroups.protocols.relay.RELAY2: error
  expose:
    type: LoadBalancer
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: example-infinispan
              infinispan_cr: example-infinispan
          topologyKey: "kubernetes.io/hostname"
Field Description

metadata.name

Names your Infinispan cluster.

metadata.annotations.infinispan.org/monitoring

Automatically creates a ServiceMonitor for your cluster.

spec.replicas

Specifies the number of pods in your cluster.

spec.service.type

Configures the type of Infinispan service. A value of DataGrid creates a cluster with Data Grid Service pods.

spec.service.container

Configures the storage resources for Data Grid Service pods.

spec.service.sites

Configures cross-site replication.

spec.security.endpointSecretName

Specifies an authentication secret that contains Infinispan user credentials.

spec.security.endpointEncryption

Specifies TLS certificates and keystores to encrypt client connections.

spec.container

Specifies JVM, CPU, and memory resources for Infinispan pods.

spec.logging

Configures Infinispan logging categories.

spec.expose

Controls how Infinispan endpoints are exposed on the network.

spec.affinity

Configures anti-affinity strategies that guarantee Infinispan availability.

3.3. Creating Cache Service pods

Create Infinispan clusters with Cache Service pods for a volatile, low-latency data store with minimal configuration.

Procedure
  1. Create an Infinispan CR that sets spec.service.type: Cache and configures any other Cache Service resources.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
      service:
        type: Cache
  2. Apply your Infinispan CR to create the cluster.

3.3.1. Cache Service CR

This topic describes the Infinispan CR for Cache Service pods.

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-infinispan
  annotations:
    infinispan.org/monitoring: 'true'
spec:
  replicas: 2
  service:
    type: Cache
    replicationFactor: 2
  autoscale:
    maxMemUsagePercent: 70
    maxReplicas: 5
    minMemUsagePercent: 30
    minReplicas: 2
  security:
    endpointSecretName: endpoint-identities
    endpointEncryption:
      type: Secret
      certSecretName: tls-secret
  container:
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"
    cpu: "2000m"
    memory: 1Gi
  logging:
    categories:
      org.infinispan: trace
      org.jgroups: trace
  expose:
    type: LoadBalancer
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: example-infinispan
              infinispan_cr: example-infinispan
          topologyKey: "kubernetes.io/hostname"
Field Description

metadata.name

Names your Infinispan cluster.

metadata.annotations.infinispan.org/monitoring

Automatically creates a ServiceMonitor for your cluster.

spec.replicas

Specifies the number of pods in your cluster. If you enable autoscaling capabilities, this field specifies the initial number of pods.

spec.service.type

Configures the type of Infinispan service. A value of Cache creates a cluster with Cache Service pods.

spec.service.replicationFactor

Sets the number of copies for each entry across the cluster. The default for Cache Service pods is two, which replicates each cache entry to avoid data loss.

spec.autoscale

Enables and configures automatic scaling.

spec.security.endpointSecretName

Specifies an authentication secret that contains Infinispan user credentials.

spec.security.endpointEncryption

Specifies TLS certificates and keystores to encrypt client connections.

spec.container

Specifies JVM, CPU, and memory resources for Infinispan pods.

spec.logging

Configures Infinispan logging categories.

spec.expose

Controls how Infinispan endpoints are exposed on the network.

spec.affinity

Configures anti-affinity strategies that guarantee Infinispan availability.

3.4. Automatic scaling

Infinispan Operator can monitor the default cache on Cache Service pods to automatically scale clusters up or down by creating or deleting pods based on memory usage.

Automatic scaling is available for clusters of Cache Service pods only. Infinispan Operator does not perform automatic scaling for clusters of Data Grid Service pods.

When you enable automatic scaling, you define memory usage thresholds that let Infinispan Operator determine when it needs to create or delete pods. Infinispan Operator monitors statistics for the default cache and, when memory usage reaches the configured thresholds, scales your clusters up or down.

Maximum threshold

This threshold sets an upper boundary for the amount of memory that pods in your cluster can use before scaling up or performing eviction. When Infinispan Operator detects that any node reaches the maximum amount of memory that you configure, it creates a new node if possible. If Infinispan Operator cannot create a new node then it performs eviction when memory usage reaches 100 percent.

Minimum threshold

This threshold sets a lower boundary for memory usage across your Infinispan cluster. When Infinispan Operator detects that memory usage falls below the minimum, it shuts down pods.

Default cache only

Autoscaling capabilities work with the default cache only. If you plan to add other caches to your cluster, you should not include the autoscale field in your Infinispan CR. In this case you should use eviction to control the size of the data container on each node.

3.4.1. Configuring automatic scaling

If you create clusters with Cache Service pods, you can configure Infinispan Operator to automatically scale clusters.

Procedure
  1. Add the spec.autoscale resource to your Infinispan CR to enable automatic scaling.

    Set a value of true for the autoscale.disabled field to disable automatic scaling.

  2. Configure thresholds for automatic scaling with the following fields:

    Field Description

    spec.autoscale.maxMemUsagePercent

    Specifies a maximum threshold, as a percentage, for memory usage on each node.

    spec.autoscale.maxReplicas

    Specifies the maximum number of Cache Service pods for the cluster.

    spec.autoscale.minMemUsagePercent

    Specifies a minimum threshold, as a percentage, for cluster memory usage.

    spec.autoscale.minReplicas

    Specifies the minimum number of Cache Service pods for the cluster.

    For example, add the following to your Infinispan CR:

    spec:
      service:
        type: Cache
      autoscale:
        disabled: false
        maxMemUsagePercent: 70
        maxReplicas: 5
        minMemUsagePercent: 30
        minReplicas: 2
  3. Apply the changes.

3.5. Allocating storage resources

You can allocate storage for Data Grid Service pods but not Cache Service pods.

By default, Infinispan Operator allocates 1Gi for the persistent volume claim. However, you should adjust the amount of storage available to Data Grid Service pods so that Infinispan can preserve cluster state during shutdown.

If available container storage is less than the amount of available memory, data loss can occur.

Procedure
  1. Allocate storage resources with the spec.service.container.storage field.

  2. Optionally configure the ephemeralStorage and storageClassName fields as required.

    spec:
      service:
        type: DataGrid
        container:
          storage: 2Gi
          ephemeralStorage: false
          storageClassName: my-storage-class
  3. Apply the changes.

Field Description

spec.service.container.storage

Specifies the amount of storage for Data Grid Service pods.

spec.service.container.ephemeralStorage

Defines whether storage is ephemeral or permanent. Set the value to true to use ephemeral storage, which means all data in storage is deleted when clusters shut down or restart. The default value is false, which means storage is permanent.

spec.service.container.storageClassName

Specifies the name of a StorageClass object to use for the persistent volume claim (PVC). If you include this field, you must specify an existing storage class as the value. If you do not include this field, the persistent volume claim uses the storage class that has the storageclass.kubernetes.io/is-default-class annotation set to true.

3.5.1. Persistent volume claims

Infinispan Operator creates a persistent volume claim (PVC) and mounts container storage at:
/opt/infinispan/server/data

Caches

When you create caches, Infinispan permanently stores their configuration so your caches are available after cluster restarts. This applies to both Cache Service and Data Grid Service pods.

Data

Data is always volatile in clusters of Cache Service pods. When you shut down the cluster, you permanently lose the data.

Use a file-based cache store, by adding the <file-store/> element to your Infinispan cache configuration, if you want Data Grid Service pods to persist data during cluster shutdown.
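
For example, a minimal distributed cache configuration with a file-based store might look like the following, where the cache name is illustrative:

<distributed-cache name="mycache">
  <persistence>
    <file-store/>
  </persistence>
</distributed-cache>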

3.6. JVM, CPU, and memory

You can set JVM options in Infinispan CR as well as CPU and memory allocation.

spec:
  container:
    extraJvmOpts: "-XX:NativeMemoryTracking=summary"
    cpu: "1000m"
    memory: 1Gi
Field Description

spec.container.extraJvmOpts

Specifies JVM options.

spec.container.cpu

Allocates host CPU resources to Infinispan pods, measured in CPU units.

spec.container.memory

Allocates host memory to Infinispan pods, measured in bytes.

When Infinispan Operator creates Infinispan clusters, it uses spec.container.cpu and spec.container.memory to:

  • Ensure that Kubernetes has sufficient capacity to run the Infinispan node. By default Infinispan Operator requests 512Mi of memory and 0.5 cpu from the Kubernetes scheduler.

  • Constrain node resource usage. Infinispan Operator sets the values of cpu and memory as resource limits.

3.7. Adjusting log levels

Change levels for different Infinispan logging categories when you need to debug issues. You can also adjust log levels to reduce the number of messages for certain categories to minimize the use of container resources.

Procedure
  1. Configure Infinispan logging with the spec.logging.categories field in your Infinispan CR.

    spec:
      logging:
        categories:
          org.infinispan: debug
          org.jgroups: debug
  2. Apply the changes.

  3. Retrieve logs from Infinispan pods as required.

    $ kubectl logs -f $POD_NAME

3.7.1. Logging reference

Find information about log categories and levels.

Table 1. Log categories
Root category Description Default level

org.infinispan

Infinispan messages

info

org.jgroups

Cluster transport messages

info

Table 2. Log levels
Log level Description

trace

Provides detailed information about the running state of applications. This is the most verbose log level.

debug

Indicates the progress of individual requests or activities.

info

Indicates overall progress of applications, including lifecycle events.

warn

Indicates circumstances that can lead to errors or degrade performance.

error

Indicates error conditions that might prevent operations or activities from being successful but do not prevent applications from running.

Garbage collection (GC) messages

Infinispan Operator does not log GC messages by default. You can direct GC messages to stdout with the following JVM options:

extraJvmOpts: "-Xlog:gc*:stdout:time,level,tags"

3.8. Specifying Infinispan Server images

Specify which Infinispan Server image Infinispan Operator should use to create pods with the spec.image field.

spec:
  image: quay.io/infinispan/server:latest

3.9. Adding labels to Infinispan resources

Attach key/value labels to pods and services that Infinispan Operator creates and manages. These labels help you identify relationships between objects to better organize and monitor Infinispan resources.

Procedure
  1. Open your Infinispan CR for editing.

  2. Declare the labels that you want Infinispan Operator to attach with metadata.annotations fields and define their values with metadata.labels fields.

    1. Specify labels that you want to attach to services with the metadata.annotations.infinispan.org/targetLabels field.

    2. Specify labels that you want to attach to pods with the metadata.annotations.infinispan.org/podTargetLabels field.

    3. Define values for your labels with the metadata.labels fields.

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        annotations:
          infinispan.org/targetLabels: svc-label1, svc-label2
          infinispan.org/podTargetLabels: pod-label1, pod-label2
        labels:
          svc-label1: svc-value1
          svc-label2: svc-value2
          pod-label1: pod-value1
          pod-label2: pod-value2
          # The operator does not attach these labels to resources.
          my-label: my-value
          environment: development
  3. Apply your Infinispan CR.
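
You can then check which labels the operator attached, for example:

$ kubectl get services example-infinispan --show-labels
$ kubectl get pods --show-labels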

3.9.1. Global labels for Infinispan Operator

Global labels are automatically propagated to all Infinispan pods and services.

You can add and modify global labels for Infinispan Operator with the env fields in the Infinispan Operator deployment YAML.

# Defines global labels for services.
- name: INFINISPAN_OPERATOR_TARGET_LABELS
  value: |
    {"svc-label1":"svc-value1",
     "svc-label2":"svc-value2"}
# Defines global labels for pods.
- name: INFINISPAN_OPERATOR_POD_TARGET_LABELS
  value: |
    {"pod-label1":"pod-value1",
     "pod-label2":"pod-value2"}

4. Configuring authentication

Application users need credentials to access Infinispan clusters. You can use default, generated credentials or add your own.

4.1. Default credentials

Infinispan Operator generates base64-encoded default credentials and stores them in authentication secrets with the following names:

Username Secret name Description

developer

example-infinispan-generated-secret

Credentials for the default application user.

operator

example-infinispan-generated-operator-secret

Credentials that Infinispan Operator uses to interact with Infinispan resources.

4.2. Retrieving credentials

Get credentials from authentication secrets to access Infinispan clusters.

Procedure
  • Retrieve credentials from authentication secrets.

    $ kubectl get secret example-infinispan-generated-secret

    Base64-decode credentials.

    $ kubectl get secret example-infinispan-generated-secret \
    -o jsonpath="{.data.identities\.yaml}" | base64 --decode
    
    credentials:
    - username: developer
      password: dIRs5cAAsHIeeRIL

4.3. Adding custom user credentials

Configure access to Infinispan cluster endpoints with custom credentials.

Modifying spec.security.endpointSecretName triggers a cluster restart.

Procedure
  1. Create an identities.yaml file with the credentials that you want to add.

    credentials:
    - username: myfirstusername
      password: changeme-one
    - username: mysecondusername
      password: changeme-two
  2. Create an authentication secret from identities.yaml.

    $ kubectl create secret generic connect-secret --from-file=identities.yaml
  3. Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.

    spec:
      security:
        endpointSecretName: connect-secret

4.4. Changing the operator password

You can change the password for the operator user if you do not want to use the automatically generated password.

Procedure
  • Update the password key in the example-infinispan-generated-operator-secret secret as follows:

    kubectl patch secret example-infinispan-generated-operator-secret -p='{"stringData":{"password": "supersecretoperatorpassword"}}'

    You should update only the password key in the generated-operator-secret secret. When you update the password, Infinispan Operator automatically refreshes other keys in that secret.

4.5. Disabling user authentication

Allow users to access Infinispan clusters and manipulate data without providing credentials.

Do not disable authentication if endpoints are accessible from outside the Kubernetes cluster via spec.expose.type. You should disable authentication for development environments only.

Procedure
  1. Set false as the value for the spec.security.endpointAuthentication field in your Infinispan CR.

    spec:
      security:
        endpointAuthentication: false
  2. Apply the changes.

5. Configuring client certificate authentication

Add client trust stores to your project and configure Infinispan to allow connections only from clients that present valid certificates. This increases the security of your deployment by ensuring that clients are trusted by a public certificate authority (CA).

5.1. Client certificate authentication

Client certificate authentication restricts inbound connections based on the certificates that clients present.

You can configure Infinispan to use trust stores with either of the following strategies:

Validate

To validate client certificates, Infinispan requires a trust store that contains any part of the certificate chain for the signing authority, typically the root CA certificate. Any client that presents a certificate signed by the CA can connect to Infinispan.

If you use the Validate strategy for verifying client certificates, you must also configure clients to provide valid Infinispan credentials if you enable authentication.

Authenticate

Requires a trust store that contains all public client certificates in addition to the root CA certificate. Only clients that present a signed certificate can connect to Infinispan.

If you use the Authenticate strategy for verifying client certificates, you must ensure that certificates contain valid Infinispan credentials as part of the distinguished name (DN).

5.2. Enabling client certificate authentication

To enable client certificate authentication, you configure Infinispan to use trust stores with either the Validate or Authenticate strategy.

Procedure
  1. Set either Validate or Authenticate as the value for the spec.security.endpointEncryption.clientCert field in your Infinispan CR.

    The default value is None.

  2. Specify the secret that contains the client trust store with the spec.security.endpointEncryption.clientCertSecretName field.

    By default Infinispan Operator expects a trust store secret named <cluster-name>-client-cert-secret.

    The secret must be unique to each Infinispan CR instance in the Kubernetes cluster. When you delete the Infinispan CR, Kubernetes also automatically deletes the associated secret.

    spec:
      security:
        endpointEncryption:
          type: Secret
          certSecretName: tls-secret
          clientCert: Validate
          clientCertSecretName: example-infinispan-client-cert-secret
  3. Apply the changes.

Next steps

Provide Infinispan Operator with a trust store that contains all client certificates. Alternatively you can provide certificates in PEM format and let Infinispan generate a client trust store.

5.3. Providing client truststores

If you have a trust store that contains the required certificates you can make it available to Infinispan Operator.

Infinispan supports trust stores in PKCS12 format only.

Procedure
  1. Specify the name of the secret that contains the client trust store as the value of the metadata.name field.

    The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field.

  2. Provide the password for the trust store with the stringData.truststore-password field.

  3. Specify the trust store with the data.truststore.p12 field.

    apiVersion: v1
    kind: Secret
    metadata:
      name: example-infinispan-client-cert-secret
    type: Opaque
    stringData:
        truststore-password: changeme
    data:
        truststore.p12:  "<base64_encoded_PKCS12_trust_store>"
  4. Apply the changes.
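
If you need to build the PKCS12 trust store and encode it for the secret, you can use keytool, which ships with the JDK, and base64. The following is a sketch that assumes your CA certificate is in ca.crt:

$ keytool -importcert -file ca.crt -alias ca -keystore truststore.p12 -storetype PKCS12 -storepass changeme -noprompt
$ base64 -w0 truststore.p12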

5.4. Providing client certificates

Infinispan Operator can generate a trust store from certificates in PEM format.

Procedure
  1. Specify the name of the secret that contains the client trust store as the value of the metadata.name field.

    The name must match the value of the spec.security.endpointEncryption.clientCertSecretName field.

  2. Specify the signing certificate, or CA certificate bundle, as the value of the data.trust.ca field.

  3. If you use the Authenticate strategy to verify client identities, add the certificate for each client that can connect to Infinispan endpoints with the data.trust.cert.<name> field.

    Infinispan Operator uses the <name> value as the alias for the certificate when it generates the trust store.

  4. Optionally provide a password for the trust store with the stringData.truststore-password field.

    If you do not provide one, Infinispan Operator sets "password" as the trust store password.

    apiVersion: v1
    kind: Secret
    metadata:
      name: example-infinispan-client-cert-secret
    type: Opaque
    stringData:
        truststore-password: changeme
    data:
        trust.ca: "<base64_encoded_CA_certificate>"
        trust.cert.client1: "<base64_encoded_client_certificate>"
        trust.cert.client2: "<base64_encoded_client_certificate>"
  5. Apply the changes.

6. Configuring encryption

Encrypt connections between clients and Infinispan pods with Red Hat OpenShift service certificates or custom TLS certificates.

6.1. Encryption with Red Hat OpenShift service certificates

Infinispan Operator automatically generates TLS certificates that are signed by the Red Hat OpenShift service CA. Infinispan Operator then stores the certificates and keys in a secret so that you can retrieve them and use them with remote clients.

If the Red Hat OpenShift service CA is available, Infinispan Operator adds the following spec.security.endpointEncryption configuration to the Infinispan CR:

spec:
  security:
    endpointEncryption:
      type: Service
      certServiceName: service.beta.openshift.io
      certSecretName: example-infinispan-cert-secret
Field Description

spec.security.endpointEncryption.certServiceName

Specifies the service that provides TLS certificates.

spec.security.endpointEncryption.certSecretName

Specifies a secret with a service certificate and key in PEM format. Defaults to <cluster_name>-cert-secret.

Service certificates use the internal DNS name of the Infinispan cluster as the common name (CN), for example:

Subject: CN = example-infinispan.mynamespace.svc

For this reason, service certificates can be fully trusted only inside OpenShift. If you want to encrypt connections with clients running outside OpenShift, you should use custom TLS certificates.

Service certificates are valid for one year and are automatically replaced before they expire.

6.2. Retrieving TLS certificates

Get TLS certificates from encryption secrets to create client trust stores.

Procedure
  • Retrieve tls.crt from encryption secrets as follows:

    $ kubectl get secret example-infinispan-cert-secret \
    -o jsonpath='{.data.tls\.crt}' | base64 --decode > tls.crt
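
You can then inspect the certificate, or import it into a client trust store, with standard tools, for example:

$ openssl x509 -in tls.crt -noout -subject -dates
$ keytool -importcert -file tls.crt -alias infinispan -keystore client-truststore.p12 -storetype PKCS12 -storepass changeme -noprompt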

6.3. Disabling encryption

You can disable encryption so clients do not need TLS certificates to establish connections with Infinispan.

Do not disable encryption if endpoints are accessible from outside the Kubernetes cluster via spec.expose.type. You should disable encryption for development environments only.

Procedure
  1. Set None as the value for the spec.security.endpointEncryption.type field in your Infinispan CR.

    spec:
      security:
        endpointEncryption:
          type: None
  2. Apply the changes.

6.4. Using custom TLS certificates

Use a custom PKCS12 keystore or TLS certificate/key pair to encrypt connections between clients and Infinispan clusters.

Prerequisites
  • Create either a keystore or certificate secret.

    The secret must be unique to each Infinispan CR instance in the Kubernetes cluster. When you delete the Infinispan CR, Kubernetes also automatically deletes the associated secret.

Procedure
  1. Add the encryption secret to your OpenShift namespace, for example:

    $ kubectl apply -f tls_secret.yaml
  2. Specify the encryption secret with the spec.security.endpointEncryption.certSecretName field in your Infinispan CR.

    spec:
      security:
        endpointEncryption:
          type: Secret
          certSecretName: tls-secret
  3. Apply the changes.

6.4.1. Custom encryption secrets

This topic describes resources for custom encryption secrets.

Keystore secrets
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
stringData:
  alias: server
  password: changeme
data:
  keystore.p12:  "MIIKDgIBAzCCCdQGCSqGSIb3DQEHA..."
Field Description

stringData.alias

Specifies an alias for the keystore.

stringData.password

Specifies the keystore password.

data.keystore.p12

Adds a base64-encoded keystore.

Certificate secrets
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: Opaque
data:
  tls.key:  "LS0tLS1CRUdJTiBQUk ..."
  tls.crt: "LS0tLS1CRUdJTiBDRVl ..."
Field Description

data.tls.key

Adds a base64-encoded TLS key.

data.tls.crt

Adds a base64-encoded TLS certificate.

7. Configuring user roles and permissions

Secure access to Infinispan services by configuring role-based access control (RBAC) for users. This requires you to assign roles to users so that they have permission to access caches and Infinispan resources.

7.1. Enabling security authorization

By default authorization is disabled to ensure backwards compatibility with Infinispan CR instances. Complete the following procedure to enable authorization and use role-based access control (RBAC) for Infinispan users.

Procedure
  1. Set true as the value for the spec.security.authorization.enabled field in your Infinispan CR.

    spec:
      security:
        authorization:
          enabled: true
  2. Apply the changes.

7.2. User roles and permissions

Infinispan Operator provides a set of default roles that are associated with different permissions.

Table 3. Default roles and permissions
Role Permissions Description

admin

ALL

Superuser with all permissions including control of the Cache Manager lifecycle.

deployer

ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE

Can create and delete Infinispan resources in addition to application permissions.

application

ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR

Has read and write access to Infinispan resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts.

observer

ALL_READ, MONITOR

Has read access to Infinispan resources in addition to monitor permissions.

monitor

MONITOR

Can view statistics for Infinispan clusters.

Infinispan Operator credentials

Infinispan Operator generates credentials that it uses to authenticate with Infinispan clusters to perform internal operations. By default Infinispan Operator credentials are automatically assigned the admin role when you enable security authorization.

7.3. Assigning roles and permissions to users

Assign roles to users to control whether they are authorized to access Infinispan cluster resources. Roles can have different permission levels, from read-only to unrestricted access.

Users gain roles implicitly from their usernames. For example, a user named "admin" automatically has admin permissions, a user named "deployer" automatically has the deployer role, and so on.

Procedure
  1. Create an identities.yaml file that assigns roles to users.

    credentials:
      - username: admin
        password: changeme
      - username: my-user-1
        password: changeme
        roles:
          - admin
      - username: my-user-2
        password: changeme
        roles:
          - monitor
  2. Create an authentication secret from identities.yaml.

    If necessary, delete the existing secret first.

    $ kubectl delete secret connect-secret --ignore-not-found
    $ kubectl create secret generic connect-secret --from-file=identities.yaml
  3. Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes.

    spec:
      security:
        endpointSecretName: connect-secret

7.4. Adding custom roles and permissions

You can define custom roles with different combinations of permissions.

Procedure
  1. Open your Infinispan CR for editing.

  2. Specify custom roles and their associated permissions with the spec.security.authorization.roles field.

    spec:
      security:
        authorization:
          enabled: true
          roles:
            - name: my-role-1
              permissions:
                - ALL
            - name: my-role-2
              permissions:
                - READ
                - WRITE
  3. Apply the changes.

8. Configuring network access to Infinispan

Expose Infinispan clusters so you can access Infinispan Console, the Infinispan command line interface (CLI), REST API, and Hot Rod endpoint.

8.1. Getting the service for internal connections

By default, Infinispan Operator creates a service that provides access to Infinispan clusters from clients running on Kubernetes.

This internal service has the same name as your Infinispan cluster, for example:

metadata:
  name: example-infinispan
Procedure
  • Check that the internal service is available as follows:

    $ kubectl get services
    
    NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
    example-infinispan ClusterIP   192.0.2.0        <none>        11222/TCP
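
Clients in the same namespace can connect with the service name as the host. For example, a Hot Rod client hotrod-client.properties might contain the following, where the credentials are illustrative:

infinispan.client.hotrod.server_list = example-infinispan:11222
infinispan.client.hotrod.auth_username = developer
infinispan.client.hotrod.auth_password = dIRs5cAAsHIeeRIL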

8.2. Exposing Infinispan through load balancers

Use a load balancer service to make Infinispan clusters available to clients running outside Kubernetes.

To access Infinispan with unencrypted Hot Rod client connections you must use a load balancer service.

Procedure
  1. Include spec.expose in your Infinispan CR.

  2. Specify LoadBalancer as the service type with the spec.expose.type field.

  3. Optionally specify the network port where the service is exposed with the spec.expose.port field. The default port is 7900.

    spec:
      expose:
        type: LoadBalancer
        port: 65535
  4. Apply the changes.

  5. Verify that the -external service is available.

    $ kubectl get services | grep external
    
    NAME                         TYPE            CLUSTER-IP    EXTERNAL-IP   PORT(S)
    example-infinispan-external  LoadBalancer    192.0.2.24    hostname.com  11222/TCP
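
You can retrieve the address that the load balancer assigns with a jsonpath query, for example:

$ kubectl get service example-infinispan-external -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Depending on the platform, the load balancer populates either the hostname or the ip field in the ingress status.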

8.3. Exposing Infinispan through node ports

Use a node port service to expose Infinispan clusters on the network.

Procedure
  1. Include spec.expose in your Infinispan CR.

  2. Specify NodePort as the service type with the spec.expose.type field.

  3. Configure the port where Infinispan is exposed with the spec.expose.nodePort field.

    spec:
      expose:
        type: NodePort
        nodePort: 30000
  4. Apply the changes.

  5. Verify that the -external service is available.

    $ kubectl get services | grep external
    
    NAME                         TYPE            CLUSTER-IP       EXTERNAL-IP   PORT(S)
    example-infinispan-external  NodePort        192.0.2.24       <none>        11222:30000/TCP

8.4. Exposing Infinispan through routes

Use a Kubernetes Ingress or an OpenShift Route with passthrough encryption to make Infinispan clusters available on the network.

Procedure
  1. Include spec.expose in your Infinispan CR.

  2. Specify Route as the service type with the spec.expose.type field.

  3. Optionally add a hostname with the spec.expose.host field.

    spec:
      expose:
        type: Route
        host: www.example.org
  4. Apply the changes.

  5. Verify that the route is available.

    $ kubectl get ingress
    
    NAME                 CLASS    HOSTS   ADDRESS   PORTS   AGE
    example-infinispan   <none>   *                 443     73s
Route ports

When you create a route, it exposes a port on the network that accepts client connections and redirects traffic to Infinispan services that listen on port 11222.

The port where the route is available depends on whether encryption is enabled.

Port Description

80

Encryption is disabled.

443

Encryption is enabled.
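
After the route is available, you can verify client access through it. The following sketch queries the REST API with curl, assuming encryption is enabled and using an illustrative hostname and credentials:

$ curl -k -u developer:dIRs5cAAsHIeeRIL https://www.example.org/rest/v2/caches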

8.5. Network services

Reference information for network services that Infinispan Operator creates and manages.

8.5.1. Internal service

  • Allow Infinispan pods to discover each other and form clusters.

  • Provide access to Infinispan endpoints from clients in the same Kubernetes namespace.

Service Port Protocol Description

<cluster_name>

11222

TCP

Internal access to Infinispan endpoints

<cluster_name>-ping

8888

TCP

Cluster discovery

8.5.2. External service

Provides access to Infinispan endpoints from clients outside Kubernetes or in different namespaces.

Infinispan Operator does not create the external service by default. You must expose your Infinispan cluster, for example with the spec.expose field, to make it available.

Service Port Protocol Description

<cluster_name>-external

11222

TCP

External access to Infinispan endpoints.

8.5.3. Cross-site service

Allows Infinispan to back up data between clusters in different locations.

Service Port Protocol Description

<cluster_name>-site

7900

TCP

JGroups RELAY2 channel for cross-site communication.

9. Monitoring Infinispan services

Infinispan exposes metrics that can be used by Prometheus and Grafana for monitoring and visualizing the cluster state.

This documentation explains how to set up monitoring on OpenShift Container Platform. If you’re working with community Prometheus deployments, you might find these instructions useful as a general guide. However you should refer to the Prometheus documentation for installation and usage instructions.

See the Prometheus Operator documentation.

9.1. Creating a Prometheus service monitor

Infinispan Operator automatically creates a Prometheus ServiceMonitor that scrapes metrics from your Infinispan cluster.

Procedure

Enable monitoring for user-defined projects on OpenShift Container Platform.

When Infinispan Operator detects an Infinispan CR with the monitoring annotation set to true, which is the default, it does the following:

  • Creates a ServiceMonitor named <cluster_name>-monitor.

  • Adds the infinispan.org/monitoring: 'true' annotation to your Infinispan CR metadata, if the value is not already explicitly set:

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
      annotations:
        infinispan.org/monitoring: 'true'

To authenticate with Infinispan, Prometheus uses the operator credentials.

Verification

You can check that Prometheus is scraping Infinispan metrics as follows:

  1. In the OpenShift Web Console, select the Developer perspective and then select Monitoring.

  2. Open the Dashboard tab for the namespace where your Infinispan cluster runs.

  3. Open the Metrics tab and confirm that you can query Infinispan metrics such as:

    vendor_cache_manager_default_cluster_size

9.1.1. Disabling the Prometheus service monitor

You can disable the ServiceMonitor if you do not want Prometheus to scrape metrics for your Infinispan cluster.

Procedure
  1. Set 'false' as the value for the infinispan.org/monitoring annotation in your Infinispan CR.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
      annotations:
        infinispan.org/monitoring: 'false'
  2. Apply the changes.

9.2. Creating Grafana data sources

Create a GrafanaDatasource CR so you can visualize Infinispan metrics in Grafana dashboards.

Prerequisites
  • Have an oc client.

  • Have cluster-admin access to OpenShift Container Platform.

  • Enable monitoring for user-defined projects on OpenShift Container Platform.

  • Install the Grafana Operator from the alpha channel and create a Grafana CR.

Procedure
  1. Create a ServiceAccount that lets Grafana read Infinispan metrics from Prometheus.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: infinispan-monitoring
    1. Apply the ServiceAccount.

      $ oc apply -f service-account.yaml
    2. Grant cluster-monitoring-view permissions to the ServiceAccount.

      $ oc adm policy add-cluster-role-to-user cluster-monitoring-view -z infinispan-monitoring
  2. Create a Grafana data source.

    1. Retrieve the token for the ServiceAccount.

      $ oc serviceaccounts get-token infinispan-monitoring
      
      eyJhbGciOiJSUzI1NiIsImtpZCI6Imc4O...
    2. Define a GrafanaDataSource that includes the token in the spec.datasources.secureJsonData.httpHeaderValue1 field, as in the following example:

      apiVersion: integreatly.org/v1alpha1
      kind: GrafanaDataSource
      metadata:
        name: grafanadatasource
      spec:
        name: datasource.yaml
        datasources:
          - access: proxy
            editable: true
            isDefault: true
            jsonData:
              httpHeaderName1: Authorization
              timeInterval: 5s
              tlsSkipVerify: true
            name: Prometheus
            secureJsonData:
              httpHeaderValue1: >-
                Bearer
                eyJhbGciOiJSUzI1NiIsImtpZCI6Imc4O...
            type: prometheus
            url: 'https://thanos-querier.openshift-monitoring.svc.cluster.local:9091'
  3. Apply the GrafanaDataSource.

    $ oc apply -f grafana-datasource.yaml
Next steps

Enable Grafana dashboards with the Infinispan Operator configuration properties.

9.3. Configuring Infinispan dashboards

Infinispan Operator provides global configuration properties that let you configure Grafana dashboards for Infinispan clusters.

You can modify global configuration properties while Infinispan Operator is running.

Prerequisites
  • Infinispan Operator must watch the namespace where the Grafana Operator is running.

Procedure
  1. Create a ConfigMap named infinispan-operator-config in the Infinispan Operator namespace.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: infinispan-operator-config
    data:
      grafana.dashboard.namespace: example-infinispan
      grafana.dashboard.name: infinispan
      grafana.dashboard.monitoring.key: middleware
  2. Specify the namespace of your Infinispan cluster with the data.grafana.dashboard.namespace property.

    Deleting the value for this property removes the dashboard. Changing the value moves the dashboard to that namespace.

  3. Specify a name for the dashboard with the data.grafana.dashboard.name property.

  4. If necessary, specify a monitoring key with the data.grafana.dashboard.monitoring.key property.

  5. Create infinispan-operator-config or update the configuration.

    $ oc apply -f infinispan-operator-config.yaml
  6. Open the Grafana UI, which is available at:

    $ oc get routes grafana-route -o jsonpath=https://"{.spec.host}"

10. Setting up cross-site replication

Ensure service availability with Infinispan Operator by configuring cross-site replication to back up data between Infinispan clusters.

10.1. Using Infinispan Operator to manage cross-site connections

Infinispan Operator in one data center can discover an Infinispan cluster that Infinispan Operator manages in another data center. This discovery allows Infinispan to automatically form cross-site views and create global clusters.

The following illustration provides an example in which Infinispan Operator manages an Infinispan cluster at a data center in New York City, NYC. At another data center in London, LON, Infinispan Operator also manages an Infinispan cluster.

[Figure: Infinispan Operator managing cross-site replication between Infinispan clusters in NYC and LON]

Infinispan Operator uses the Kubernetes API to establish a secure connection between the OpenShift Container Platform clusters in NYC and LON. Infinispan Operator then creates a cross-site replication service so Infinispan clusters can back up data across locations.

Infinispan Operator in each OpenShift cluster must have network access to the remote Kubernetes API.

When you configure automatic connections, Infinispan clusters do not start running until Infinispan Operator discovers all backup locations in the configuration.

Each Infinispan cluster has one site master node that coordinates all backup requests. Infinispan Operator identifies the site master node so that all traffic through the cross-site replication service goes to the site master.

If the current site master node goes offline then a new node becomes site master. Infinispan Operator automatically finds the new site master node and updates the cross-site replication service to forward backup requests to it.

10.1.1. Kubernetes clusters

Apply cluster roles and then create site access secrets if you run Infinispan Operator on vanilla Kubernetes or minikube.

Applying cluster roles for cross-site replication

During OLM installation, Infinispan Operator sets up cluster roles required for cross-site replication. If you install Infinispan Operator manually, you must complete this procedure to set up those cluster roles.

Procedure
  • Install clusterrole.yaml and clusterrole_binding.yaml as follows:

$ kubectl apply -f deploy/clusterrole.yaml
$ kubectl apply -f deploy/clusterrole_binding.yaml
Creating Kubernetes site access secrets

If you run Infinispan Operator in any Kubernetes deployment, such as Minikube or Kind, you should create secrets that contain the files that allow the Kubernetes clusters to authenticate with each other.

Do one of the following:

  • Retrieve service account tokens from each site and then add them to secrets on each backup location, for example:

    $ kubectl create serviceaccount site-a -n ns-site-a
    $ kubectl create clusterrole xsite-cluster-role --verb=get,list,watch --resource=nodes,services
    $ kubectl create clusterrolebinding xsite-cluster-role-binding --clusterrole=xsite-cluster-role --serviceaccount=ns-site-a:site-a
    $ TOKENNAME=$(kubectl get serviceaccount/site-a -o jsonpath='{.secrets[0].name}' -n ns-site-a)
    $ TOKEN=$(kubectl get secret $TOKENNAME -o jsonpath='{.data.token}' -n ns-site-a | base64 --decode)
    $ kubectl create secret generic site-a-secret -n ns-site-a --from-literal=token=$TOKEN
  • Create secrets on each site that contain ca.crt, client.crt, and client.key from your Kubernetes installation.

    For example, for Minikube do the following on LON:

    $ kubectl create secret generic site-a-secret \
        --from-file=certificate-authority=/opt/minikube/.minikube/ca.crt \
        --from-file=client-certificate=/opt/minikube/.minikube/client.crt \
        --from-file=client-key=/opt/minikube/.minikube/client.key

10.1.2. OpenShift clusters

Create and exchange service account tokens if you run Infinispan Operator on OpenShift.

Creating service account tokens

Generate service account tokens on each OpenShift cluster that acts as a backup location. Clusters use these tokens to authenticate with each other so Infinispan Operator can create a cross-site replication service.

Procedure
  1. Log in to an OpenShift cluster.

  2. Create a service account.

    For example, create a service account at LON:

    $ oc create sa lon
    serviceaccount/lon created
  3. Add the view role to the service account with the following command:

    $ oc policy add-role-to-user view system:serviceaccount:<namespace>:lon
  4. If you use a node port service to expose Infinispan clusters on the network, you must also add the cluster-reader role to the service account:

    $ oc adm policy add-cluster-role-to-user cluster-reader -z <service-account-name> -n <namespace>
  5. Repeat the preceding steps on your other OpenShift clusters.

Exchanging service account tokens

After you create service account tokens on your OpenShift clusters, you add them to secrets on each backup location. For example, at LON you add the service account token for NYC. At NYC you add the service account token for LON.

Prerequisites
  • Get tokens from each service account.

    Use the following command or get the token from the OpenShift Web Console:

    $ oc sa get-token lon
    
    eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
Procedure
  1. Log in to an OpenShift cluster.

  2. Add the service account token for a backup location with the following command:

    $ oc create secret generic <token-name> --from-literal=token=<token>

    For example, log in to the OpenShift cluster at NYC and create a lon-token secret as follows:

    $ oc create secret generic lon-token --from-literal=token=eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9...
  3. Repeat the preceding steps on your other OpenShift clusters.

10.1.3. Configuring Infinispan Operator to handle cross-site connections

Configure Infinispan Operator to establish cross-site views with Infinispan clusters.

Prerequisites
  • Create secrets that contain service account tokens for each backup location.

Procedure
  1. Create an Infinispan CR for each Infinispan cluster.

  2. Specify the name of the local site with spec.service.sites.local.name.

  3. Set the value of the spec.service.sites.local.expose.type field to either NodePort or LoadBalancer.

  4. Optionally configure ports with the following fields:

    • spec.service.sites.local.expose.nodePort if you use NodePort.

    • spec.service.sites.local.expose.port if you use LoadBalancer.

  5. Provide the name, URL, and secret for each Infinispan cluster that acts as a backup location with spec.service.sites.locations.

  6. If Infinispan cluster names or namespaces at the remote site do not match the local site, specify those values with the clusterName and namespace fields.

    The following are example Infinispan CR definitions for LON and NYC:

    • LON

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 3
        service:
          type: DataGrid
          sites:
            local:
              name: LON
              expose:
                type: LoadBalancer
                port: 65535
            locations:
              - name: NYC
                clusterName: <nyc_cluster_name>
                namespace: <nyc_cluster_namespace>
                url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443
                secretName: nyc-token
        logging:
          categories:
            org.jgroups.protocols.TCP: error
            org.jgroups.protocols.relay.RELAY2: error
    • NYC

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: nyc-cluster
      spec:
        replicas: 2
        service:
          type: DataGrid
          sites:
            local:
              name: NYC
              expose:
                type: LoadBalancer
                port: 65535
            locations:
              - name: LON
                clusterName: example-infinispan
                namespace: ispn-namespace
                url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443
                secretName: lon-token
        logging:
          categories:
            org.jgroups.protocols.TCP: error
            org.jgroups.protocols.relay.RELAY2: error

      Be sure to adjust logging categories in your Infinispan CR to decrease log levels for the JGroups TCP and RELAY2 protocols. This prevents a large number of log files from using container storage.

      spec:
        logging:
          categories:
            org.jgroups.protocols.TCP: error
            org.jgroups.protocols.relay.RELAY2: error
  7. Configure your Infinispan CRs with any other Data Grid Service resources and then apply the changes.

  8. Verify that Infinispan clusters form a cross-site view.

    1. Retrieve the Infinispan CR.

      $ kubectl get infinispan -o yaml
    2. Check for the type: CrossSiteViewFormed condition.
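
      The condition appears in the status section of the Infinispan CR. The following is an illustrative sketch only; the exact message text can vary between versions:

      status:
        conditions:
          - type: CrossSiteViewFormed
            status: "True"
            message: 'Cross-Site view: LON,NYC'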

Next steps

If your clusters have formed a cross-site view, you can start adding backup locations to caches.

10.1.4. Resources for managed cross-site connections

This topic describes resources for cross-site connections that Infinispan Operator manages.

spec:
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        expose:
          type: LoadBalancer
      locations:
      - name: NYC
        clusterName: <nyc_cluster_name>
        namespace: <nyc_cluster_namespace>
        url: openshift://api.site-b.devcluster.openshift.com:6443
        secretName: nyc-token
Field Description

service.type: DataGrid

Infinispan supports cross-site replication with Data Grid Service clusters only.

service.sites.local.name

Names the local site where an Infinispan cluster runs.

service.sites.local.expose.type

Specifies the network service for cross-site replication. Infinispan clusters use this service to communicate and perform backup operations. You can set the value to NodePort or LoadBalancer.

service.sites.local.expose.nodePort

Specifies a static port within the default range of 30000 to 32767 if you expose Infinispan through a NodePort service. If you do not specify a port, the platform selects an available one.

service.sites.local.expose.port

Specifies the network port for the service if you expose Infinispan through a LoadBalancer. The default port is 7900.

service.sites.locations

Provides connection information for all backup locations.

service.sites.locations.name

Specifies the name of a backup location. This value must match the spec.service.sites.local.name value in the Infinispan CR at that backup location.

service.sites.locations.url

Specifies the URL of the backup location.

Use kubernetes:// if the backup location is a Kubernetes instance.

Use openshift:// if the backup location is an OpenShift cluster. Specify the URL of the Kubernetes API for that cluster.

Use infinispan+xsite:// if the backup location has a static hostname and port.

service.sites.locations.secretName

Specifies the access secret for a site. This secret contains different authentication objects, depending on your Kubernetes environment.

service.sites.locations.clusterName

Specifies the cluster name at the backup location if it is different from the cluster name at the local site.

service.sites.locations.namespace

Specifies the namespace of the Infinispan cluster at the backup location if it does not match the namespace at the local site.

10.2. Manually connecting Infinispan clusters

You can specify static network connection details to perform cross-site replication with Infinispan clusters running outside Kubernetes. Manual cross-site connections are necessary in any scenario where access to the Kubernetes API is not available outside the Kubernetes cluster where Infinispan runs.

You can use both automatic and manual connections for Infinispan clusters in the same Infinispan CR. However, you must ensure that Infinispan clusters establish connections in the same way at each site.

Prerequisites

Manually connecting Infinispan clusters to form cross-site views requires predictable network locations for Infinispan services.

You need to know the network locations before they are created, which requires you to:

  • Have the host names and ports for each Infinispan cluster that you plan to configure as a backup location.

  • Have the host name of the <cluster-name>-site service for any remote Infinispan cluster that is running on Kubernetes.
    You must use the <cluster-name>-site service to form a cross-site view between a cluster that Infinispan Operator manages and any other cluster.
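
For example, the following sketch retrieves the hostname that a LoadBalancer assigns to the <cluster-name>-site service, assuming a cluster named example-infinispan. Depending on your provider, the address might appear under .ip rather than .hostname:

$ kubectl get service example-infinispan-site -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'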

Procedure
  1. Create an Infinispan CR for each Infinispan cluster.

  2. Specify the name of the local site with spec.service.sites.local.name.

  3. Set the value of the spec.service.sites.local.expose.type field to either NodePort or LoadBalancer.

  4. Optionally configure ports with the following fields:

    • spec.service.sites.local.expose.nodePort if you use NodePort.

    • spec.service.sites.local.expose.port if you use LoadBalancer.

  5. Provide the name and static URL for each Infinispan cluster that acts as a backup location with spec.service.sites.locations, for example:

    • LON

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 3
        service:
          type: DataGrid
          sites:
            local:
              name: LON
              expose:
                type: LoadBalancer
                port: 65535
            locations:
              - name: NYC
                url: infinispan+xsite://infinispan-nyc.myhost.com:7900
        logging:
          categories:
            org.jgroups.protocols.TCP: error
            org.jgroups.protocols.relay.RELAY2: error
    • NYC

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 2
        service:
          type: DataGrid
          sites:
            local:
              name: NYC
              expose:
                type: LoadBalancer
                port: 65535
            locations:
              - name: LON
                url: infinispan+xsite://infinispan-lon.myhost.com
        logging:
          categories:
            org.jgroups.protocols.TCP: error
            org.jgroups.protocols.relay.RELAY2: error

      Be sure to adjust logging categories in your Infinispan CR to decrease log levels for the JGroups TCP and RELAY2 protocols. This prevents a large number of log files from using container storage.

      spec:
        logging:
          categories:
            org.jgroups.protocols.TCP: error
            org.jgroups.protocols.relay.RELAY2: error
  6. Configure your Infinispan CRs with any other Data Grid Service resources and then apply the changes.

  7. Verify that Infinispan clusters form a cross-site view.

    1. Retrieve the Infinispan CR.

      $ kubectl get infinispan -o yaml
    2. Check for the type: CrossSiteViewFormed condition.

Next steps

If your clusters have formed a cross-site view, you can start adding backup locations to caches.

10.2.1. Resources for manual cross-site connections

This topic describes resources for cross-site connections that you maintain manually.

spec:
  service:
    type: DataGrid
    sites:
      local:
        name: LON
        expose:
          type: LoadBalancer
          port: 65535
      locations:
      - name: NYC
        url: infinispan+xsite://infinispan-nyc.myhost.com:7900
Field Description

service.type: DataGrid

Infinispan supports cross-site replication with Data Grid Service clusters only.

service.sites.local.name

Names the local site where an Infinispan cluster runs.

service.sites.local.expose.type

Specifies the network service for cross-site replication. Infinispan clusters use this service to communicate and perform backup operations. You can set the value to NodePort or LoadBalancer.

service.sites.local.expose.nodePort

Specifies a static port within the default range of 30000 to 32767 if you expose Infinispan through a NodePort service. If you do not specify a port, the platform selects an available one.

service.sites.local.expose.port

Specifies the network port for the service if you expose Infinispan through a LoadBalancer. The default port is 7900.

service.sites.locations

Provides connection information for all backup locations.

service.sites.locations.name

Specifies the name of a backup location. This value must match the spec.service.sites.local.name value in the Infinispan CR at that backup location.

service.sites.locations.url

Specifies the static URL for the backup location in the format of infinispan+xsite://<hostname>:<port>. The default port is 7900.

10.3. Configuring sites in the same Kubernetes cluster

For evaluation and demonstration purposes, you can configure Infinispan to back up between nodes in the same Kubernetes cluster.

Procedure
  1. Create an Infinispan CR for each Infinispan cluster.

  2. Specify the name of the local site with spec.service.sites.local.name.

  3. Set ClusterIP as the value of the spec.service.sites.local.expose.type field.

  4. Provide the name of the Infinispan cluster that acts as a backup location with spec.service.sites.locations.clusterName.

  5. If both Infinispan clusters have the same name, specify the namespace of the backup location with spec.service.sites.locations.namespace.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-clustera
    spec:
      replicas: 1
      expose:
        type: LoadBalancer
      service:
        type: DataGrid
        sites:
          local:
            name: SiteA
            expose:
              type: ClusterIP
          locations:
            - name: SiteB
              clusterName: example-clusterb
              namespace: cluster-namespace
  6. Configure your Infinispan CRs with any other Data Grid Service resources and then apply the changes.

  7. Verify that Infinispan clusters form a cross-site view.

    1. Retrieve the Infinispan CR.

      $ kubectl get infinispan -o yaml
    2. Check for the type: CrossSiteViewFormed condition.
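
The Infinispan CR at the backup cluster mirrors this configuration. The following is a sketch for example-clusterb, assuming the names in the preceding example:

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: example-clusterb
spec:
  replicas: 1
  expose:
    type: LoadBalancer
  service:
    type: DataGrid
    sites:
      local:
        name: SiteB
        expose:
          type: ClusterIP
      locations:
        - name: SiteA
          clusterName: example-clustera
          namespace: cluster-namespace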

11. Guaranteeing availability with anti-affinity

Kubernetes includes anti-affinity capabilities that protect workloads from single points of failure.

11.1. Anti-affinity strategies

Each Infinispan node in a cluster runs in a pod, and Kubernetes schedules each pod on a node in the Kubernetes cluster. Each Kubernetes node runs on a physical host system. Anti-affinity works by distributing Infinispan nodes across Kubernetes nodes, ensuring that your Infinispan clusters remain available even if hardware failures occur.

Infinispan Operator offers two anti-affinity strategies:

kubernetes.io/hostname

Infinispan replica pods are scheduled on different Kubernetes nodes.

topology.kubernetes.io/zone

Infinispan replica pods are scheduled across multiple zones.

Fault tolerance

Anti-affinity strategies guarantee cluster availability in different ways.

The following equations apply only if the number of Kubernetes nodes or zones is greater than the number of Infinispan nodes.

Scheduling pods on different Kubernetes nodes

Provides tolerance of x node failures for the following types of cache:

  • Replicated: x = spec.replicas - 1

  • Distributed: x = num_owners - 1

Scheduling pods across multiple zones

Provides tolerance of x zone failures when x zones exist for the following types of cache:

  • Replicated: x = spec.replicas - 1

  • Distributed: x = num_owners - 1

spec.replicas

Defines the number of pods in each Infinispan cluster.

num_owners

Is the cache configuration attribute that defines the number of replicas for each entry in the cache. For example, a distributed cache with num_owners=2 can tolerate the loss of one Kubernetes node, or one zone, without losing data.

11.2. Configuring anti-affinity

Specify where Kubernetes schedules pods for your Infinispan clusters to ensure availability.

Procedure
  1. Add the spec.affinity block to your Infinispan CR.

  2. Configure anti-affinity strategies as necessary.

  3. Apply your Infinispan CR.

11.2.1. Anti-affinity strategy configurations

Configure anti-affinity strategies in your Infinispan CR to control where Kubernetes schedules Infinispan replica pods.

Topology keys Description

topologyKey: "topology.kubernetes.io/zone"

Schedules Infinispan replica pods across multiple zones.

topologyKey: "kubernetes.io/hostname"

Schedules Infinispan replica pods on different Kubernetes nodes.

Schedule pods on different Kubernetes nodes

The following is the anti-affinity strategy that Infinispan Operator uses if you do not configure the spec.affinity field in your Infinispan CR:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: <cluster_name>
              infinispan_cr: <cluster_name>
          topologyKey: "kubernetes.io/hostname"
Requiring different nodes

In the following example, Kubernetes does not schedule Infinispan pods if different nodes are not available:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: infinispan-pod
            clusterName: <cluster_name>
            infinispan_cr: <cluster_name>
        topologyKey: "kubernetes.io/hostname"

To ensure that you can schedule Infinispan replica pods on different Kubernetes nodes, the number of Kubernetes nodes available must be greater than the value of spec.replicas.

Schedule pods across multiple Kubernetes zones

The following example prefers multiple zones when scheduling pods but schedules Infinispan replica pods on different Kubernetes nodes if it is not possible to schedule across zones:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: <cluster_name>
              infinispan_cr: <cluster_name>
          topologyKey: "topology.kubernetes.io/zone"
      - weight: 90
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: infinispan-pod
              clusterName: <cluster_name>
              infinispan_cr: <cluster_name>
          topologyKey: "kubernetes.io/hostname"
Requiring multiple zones

The following example uses the zone strategy only when scheduling Infinispan replica pods:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: infinispan-pod
            clusterName: <cluster_name>
            infinispan_cr: <cluster_name>
        topologyKey: "topology.kubernetes.io/zone"

12. Creating caches with Infinispan Operator

Use Cache CRs to add cache configuration with Infinispan Operator and control how Infinispan stores your data.

The Cache CR is not yet functionally complete. The capability to create caches with Infinispan Operator is still under development and not recommended for production environments or critical workloads.

12.1. Infinispan caches

Cache configuration defines the characteristics and features of the data store and must be valid with the Infinispan schema. Infinispan recommends creating standalone files in XML or JSON format that define your cache configuration. You should separate Infinispan configuration from application code for easier validation and to avoid the situation where you need to maintain XML snippets in Java or some other client language.

To create caches with Infinispan clusters running on Kubernetes, you should:

  • Use Cache CR as the mechanism for creating caches through the Kubernetes front end.

  • Use Batch CR to create multiple caches at a time from standalone configuration files.

  • Access Infinispan Console and create caches in XML or JSON format.

You can use Hot Rod or HTTP clients but Infinispan recommends Cache CR or Batch CR unless your specific use case requires programmatic remote cache creation.

12.2. Cache CRs

Find out details for configuring Infinispan caches with Cache CR.

When using Cache CRs, the following rules apply:

  • Cache CRs apply to Data Grid Service pods only.

  • You can create a single cache for each Cache CR.

  • If your Cache CR contains both a template and an XML configuration, Infinispan Operator uses the template.

  • You cannot edit caches through the OpenShift Web Console. If you edit caches there, the changes appear in the user interface but do not take effect on the Infinispan cluster. To change a cache configuration, you must first delete the cache through the Infinispan Console or CLI and then re-create it.

  • Deleting Cache CRs in the OpenShift Web Console does not remove caches from Infinispan clusters. You must delete caches through the console or CLI.

In previous versions, you needed to add credentials to a secret so that Infinispan Operator could access your cluster when creating caches.

That is no longer necessary. Infinispan Operator uses the operator user and corresponding password to perform cache operations.

12.3. Creating caches from XML

Complete the following steps to create caches on Data Grid Service clusters using valid infinispan.xml configuration.

Procedure
  1. Create a Cache CR that contains an XML cache configuration.

    1. Specify a name for the Cache CR with the metadata.name field.

    2. Specify the target Infinispan cluster with the spec.clusterName field.

    3. Name your cache with the spec.name field.

      The name attribute in the XML configuration is ignored. Only the spec.name field applies to the resulting cache.

    4. Add an XML cache configuration with the spec.template field.

      apiVersion: infinispan.org/v2alpha1
      kind: Cache
      metadata:
        name: mycachedefinition
      spec:
        clusterName: example-infinispan
        name: mycache
        template: <distributed-cache name="mycache" mode="SYNC"><persistence><file-store/></persistence></distributed-cache>
  2. Apply the Cache CR, for example:

    $ kubectl apply -f mycache.yaml
    cache.infinispan.org/mycachedefinition created
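
To verify the result, you can inspect the Cache CR after Infinispan Operator reconciles it. The following sketch assumes the CRD plural name caches and the names used above:

$ kubectl get caches.infinispan.org mycachedefinition -o yaml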

12.4. Creating caches from templates

Complete the following steps to create caches on Data Grid Service clusters using cache templates.

Prerequisites
  • Identify the cache template you want to use for your cache.
    You can find a list of available templates in Infinispan Console.

Procedure
  1. Create a Cache CR that specifies the name of a template to use.

    1. Specify a name for the Cache CR with the metadata.name field.

    2. Specify the target Infinispan cluster with the spec.clusterName field.

    3. Name your cache with the spec.name field.

    4. Specify a cache template with the spec.template field.

      The following example creates a cache named "mycache" from the org.infinispan.DIST_SYNC cache template:

      apiVersion: infinispan.org/v2alpha1
      kind: Cache
      metadata:
        name: mycachedefinition
      spec:
        clusterName: example-infinispan
        name: mycache
        templateName: org.infinispan.DIST_SYNC
  2. Apply the Cache CR, for example:

    $ kubectl apply -f mycache.yaml
    cache.infinispan.org/mycachedefinition created

12.5. Adding backup locations to caches

When you configure Infinispan clusters to perform cross-site replication, you can add backup locations to your cache configurations.

Procedure
  1. Create cache configurations that name remote sites as backup locations.

    Infinispan replicates data based on cache names. For this reason, site names in your cache configurations must match site names, spec.service.sites.local.name, in your Infinispan CRs.

  2. Configure backup locations to go offline automatically with the take-offline element.

    1. Set the amount of time, in milliseconds, before backup locations go offline with the min-wait attribute.

  3. Define any other valid cache configuration.

  4. Add backup locations to the named cache on all sites in the global cluster.

    For example, if you add LON as a backup for NYC you should add NYC as a backup for LON.

The following configuration examples show backup locations for caches:

  • NYC

    <distributed-cache name="customers">
      <encoding media-type="application/x-protostream"/>
      <backups>
        <backup site="LON" strategy="SYNC">
          <take-offline min-wait="120000"/>
        </backup>
      </backups>
    </distributed-cache>
  • LON

    <replicated-cache name="customers">
      <encoding media-type="application/x-protostream"/>
      <backups>
        <backup site="NYC" strategy="ASYNC" >
          <take-offline min-wait="120000"/>
        </backup>
      </backups>
    </replicated-cache>

12.5.1. Performance considerations with taking backup locations offline

Backup locations can automatically go offline when remote sites become unavailable. This prevents pods from attempting to replicate data to offline backup locations, which can degrade cluster performance because each failed replication attempt results in an error.

You can configure how long to wait before backup locations go offline. A good rule of thumb is one or two minutes. However, you should test different wait periods and evaluate their performance impacts to determine the correct value for your deployment.

For instance, when OpenShift terminates the site master pod, that backup location becomes unavailable for a short period of time until Infinispan Operator elects a new site master. In this case, if the minimum wait time is not long enough, the backup location goes offline. You then need to bring it back online and perform state transfer operations to ensure the data is in sync.

Likewise, if the minimum wait time is too long, node CPU usage increases from failed backup attempts, which can lead to performance degradation.

12.6. Adding persistent cache stores

You can add persistent cache stores to Data Grid Service pods to save data to the persistent volume.

Infinispan creates a Single File cache store, a .dat file, in the /opt/infinispan/server/data directory.

Procedure
  • Add the <file-store/> element to the persistence configuration in your Infinispan cache, as in the following example:

    <distributed-cache name="persistent-cache" mode="SYNC">
      <encoding media-type="application/x-protostream"/>
      <persistence>
        <file-store/>
      </persistence>
    </distributed-cache>

13. Running batch operations

Infinispan Operator provides a Batch CR that lets you create Infinispan resources in bulk. Batch CR uses the Infinispan command line interface (CLI) in batch mode to carry out sequences of operations.

Modifying a Batch CR instance has no effect. Batch operations are "one-time" events that modify Infinispan resources. To update .spec fields for the CR, or when a batch operation fails, you must create a new instance of the Batch CR.

13.1. Running inline batch operations

Include your batch operations directly in a Batch CR if they do not require separate configuration artifacts.

Procedure
  1. Create a Batch CR.

    1. Specify the name of the Infinispan cluster where you want the batch operations to run as the value of the spec.cluster field.

    2. Add each CLI command to run, one per line, in the spec.config field.

      apiVersion: infinispan.org/v2alpha1
      kind: Batch
      metadata:
        name: mybatch
      spec:
        cluster: example-infinispan
        config: |
          create cache --template=org.infinispan.DIST_SYNC mycache
          put --cache=mycache hello world
          put --cache=mycache hola mundo
  2. Apply your Batch CR.

    $ kubectl apply -f mybatch.yaml
  3. Check the status.Phase field in the Batch CR to verify the operations completed successfully.
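
    For example, the following sketch reads the phase with a JSONPath query. It assumes the Batch CRD plural name batches and that the field is serialized in lowercase as status.phase:

    $ kubectl get batches.infinispan.org mybatch -o jsonpath='{.status.phase}'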

13.2. Creating ConfigMaps for batch operations

Create a ConfigMap so that additional files, such as Infinispan cache configuration, are available for batch operations.

Prerequisites

For demonstration purposes, you should add some configuration artifacts to your host filesystem before you start the procedure:

  • Create a /tmp/mybatch directory where you can add some files.

    $ mkdir -p /tmp/mybatch
  • Create an Infinispan cache configuration.

    $ cat > /tmp/mybatch/mycache.xml<<EOF
    <distributed-cache name="mycache" mode="SYNC">
      <encoding media-type="application/x-protostream"/>
      <memory max-count="1000000" when-full="REMOVE"/>
    </distributed-cache>
    EOF
Procedure
  1. Create a batch file that contains all commands you want to run. A sketch for writing this file to disk follows this procedure.

    For example, the following batch file creates a cache named "mycache" and adds two entries to it:

    create cache mycache --file=/etc/batch/mycache.xml
    put --cache=mycache hello world
    put --cache=mycache hola mundo

    The ConfigMap is mounted in Infinispan pods at /etc/batch. You must prepend all --file= directives in your batch operations with that path.

  2. Ensure all configuration artifacts that your batch operations require are in the same directory as the batch file.

    $ ls /tmp/mybatch
    
    batch
    mycache.xml
  3. Create a ConfigMap from the directory.

    $ kubectl create configmap mybatch-config-map --from-file=/tmp/mybatch
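
    For step 1, the batch file itself must exist in the same directory before you create the ConfigMap. The following sketch writes it with the same heredoc convention as the prerequisites; the file name batch matches the directory listing in step 2:

    $ cat > /tmp/mybatch/batch<<EOF
    create cache mycache --file=/etc/batch/mycache.xml
    put --cache=mycache hello world
    put --cache=mycache hola mundo
    EOF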

13.3. Running batch operations with ConfigMaps

Run batch operations that include configuration artifacts.

Prerequisites
  • Create a ConfigMap that contains any files your batch operations require.

Procedure
  1. Create a Batch CR that specifies the name of an Infinispan cluster as the value of the spec.cluster field.

  2. Set the name of the ConfigMap that contains your batch file and configuration artifacts with the spec.configMap field.

    $ cat > mybatch.yaml<<EOF
    apiVersion: infinispan.org/v2alpha1
    kind: Batch
    metadata:
      name: mybatch
    spec:
      cluster: example-infinispan
      configMap: mybatch-config-map
    EOF
  3. Apply your Batch CR.

    $ kubectl apply -f mybatch.yaml
  4. Check the status.Phase field in the Batch CR to verify the operations completed successfully.

13.4. Batch status messages

Verify and troubleshoot batch operations with the status.Phase field in the Batch CR.

Phase Description

Succeeded

All batch operations have completed successfully.

Initializing

Batch operations are queued and resources are initializing.

Initialized

Batch operations are ready to start.

Running

Batch operations are in progress.

Failed

One or more batch operations were not successful.

Failed operations

Batch operations are not atomic. If a command in a batch script fails, it does not affect the other operations or cause them to roll back.

If your batch operations have any server or syntax errors, you can view log messages in the Batch CR in the status.Reason field.

13.5. Example batch operations

Use these example batch operations as starting points for creating and modifying Infinispan resources with the Batch CR.

You can pass configuration files to Infinispan Operator only via a ConfigMap.

The ConfigMap is mounted in Infinispan pods at /etc/batch so you must prepend all --file= directives with that path.

13.5.1. Caches

  • Create multiple caches from configuration files.

echo "creating caches..."
create cache sessions --file=/etc/batch/infinispan-prod-sessions.xml
create cache tokens --file=/etc/batch/infinispan-prod-tokens.xml
create cache people --file=/etc/batch/infinispan-prod-people.xml
create cache books --file=/etc/batch/infinispan-prod-books.xml
create cache authors --file=/etc/batch/infinispan-prod-authors.xml
echo "list caches in the cluster"
ls caches
  • Create a template from a file and then create caches from the template.

echo "creating caches..."
create cache mytemplate --file=/etc/batch/mycache.xml
create cache sessions --template=mytemplate
create cache tokens --template=mytemplate
echo "list caches in the cluster"
ls caches

13.5.2. Counters

Use the Batch CR to create multiple counters that can increment and decrement to record the count of objects.

You can use counters to generate identifiers, act as rate limiters, or track the number of times a resource is accessed.

echo "creating counters..."
create counter --concurrency-level=1 --initial-value=5 --storage=PERSISTENT --type=weak mycounter1
create counter --initial-value=3 --storage=PERSISTENT --type=strong mycounter2
create counter --initial-value=13 --storage=PERSISTENT --type=strong --upper-bound=100 mycounter3
echo "list counters in the cluster"
ls counters

13.5.3. Protobuf schema

Register Protobuf schema to query values in caches. Protobuf schema (.proto files) provide metadata about custom entities and control field indexing.

echo "creating schema..."
schema --upload=person.proto person.proto
schema --upload=book.proto book.proto
schema --upload=author.proto author.proto
echo "list Protobuf schema"
ls schemas

13.5.4. Tasks

Upload tasks that implement org.infinispan.tasks.ServerTask or scripts that are compatible with the javax.script scripting API.

echo "creating tasks..."
task upload --file=/etc/batch/myfirstscript.js myfirstscript
task upload --file=/etc/batch/mysecondscript.js mysecondscript
task upload --file=/etc/batch/mythirdscript.js mythirdscript
echo "list tasks"
ls tasks

14. Backing up and restoring Infinispan clusters

Infinispan Operator lets you back up and restore Infinispan cluster state for disaster recovery and to migrate Infinispan resources between clusters.

14.1. Backup and Restore CRs

Backup and Restore CRs save in-memory data at runtime so you can easily recreate Infinispan clusters.

Applying a Backup or Restore CR creates a new pod that joins the Infinispan cluster as a zero-capacity member, which means it does not require cluster rebalancing or state transfer to join.

For backup operations, the pod iterates over cache entries and other resources and creates an archive, a .zip file, in the /opt/infinispan/backups directory on the persistent volume (PV).

Performing backups does not significantly impact performance because the other pods in the Infinispan cluster only need to respond to the backup pod as it iterates over cache entries.

For restore operations, the pod retrieves Infinispan resources from the archive on the PV and applies them to the Infinispan cluster.

When either the backup or restore operation completes, the pod leaves the cluster and is terminated.

Reconciliation

Infinispan Operator does not reconcile Backup and Restore CRs, which means that backup and restore operations are "one-time" events.

Modifying an existing Backup or Restore CR instance does not perform an operation or have any effect. If you want to update .spec fields, you must create a new instance of the Backup or Restore CR.

14.2. Backing up Infinispan clusters

Create a backup file that stores Infinispan cluster state to a persistent volume.

Prerequisites
  • Create an Infinispan CR with spec.service.type: DataGrid.

  • Ensure there are no active client connections to the Infinispan cluster.

    Infinispan backups do not provide snapshot isolation and data modifications are not written to the archive after the cache is backed up.
    To archive the exact state of the cluster, you should always disconnect any clients before you back it up.

Procedure
  1. Name the Backup CR with the metadata.name field.

  2. Specify the Infinispan cluster to backup with the spec.cluster field.

  3. Configure the persistent volume claim (PVC) that adds the backup archive to the persistent volume (PV) with the spec.volume.storage and spec.volume.storageClassName fields.

    apiVersion: infinispan.org/v2alpha1
    kind: Backup
    metadata:
      name: my-backup
    spec:
      cluster: source-cluster
      volume:
        storage: 1Gi
        storageClassName: my-storage-class
  4. Optionally include spec.resources fields to specify which Infinispan resources you want to back up.

    If you do not include any spec.resources fields, the Backup CR creates an archive that contains all Infinispan resources. If you do specify spec.resources fields, the Backup CR creates an archive that contains those resources only.

    spec:
      ...
      resources:
        templates:
          - distributed-sync-prod
          - distributed-sync-dev
        caches:
          - cache-one
          - cache-two
        counters:
          - counter-name
        protoSchemas:
          - authors.proto
          - books.proto
        tasks:
          - wordStream.js

    You can also use the * wildcard character as in the following example:

    spec:
      ...
      resources:
        caches:
          - "*"
        protoSchemas:
          - "*"
  5. Apply your Backup CR.

    $ kubectl apply -f my-backup.yaml
Verification
  1. Check that the status.phase field has a status of Succeeded in the Backup CR and that Infinispan logs have the following message:

    ISPN005044: Backup file created 'my-backup.zip'
  2. Run the following command to check that the backup is successfully created:

    $ kubectl describe Backup my-backup -n <namespace>

14.3. Restoring Infinispan clusters

Restore Infinispan cluster state from a backup archive.

Prerequisites
  • Create a Backup CR on a source cluster.

  • Create a target Infinispan cluster of Data Grid Service pods.

    If you restore an existing cache, the operation overwrites the data in the cache but not the cache configuration.

    For example, you back up a distributed cache named mycache on the source cluster. You then restore mycache on a target cluster where it already exists as a replicated cache. In this case, the data from the source cluster is restored and mycache continues to have a replicated configuration on the target cluster.

  • Ensure there are no active client connections to the target Infinispan cluster you want to restore.

    Cache entries that you restore from a backup can overwrite more recent cache entries.
    For example, a client performs a cache.put(k=2) operation and you then restore a backup that contains k=1.

Procedure
  1. Name the Restore CR with the metadata.name field.

  2. Specify a Backup CR to use with the spec.backup field.

  3. Specify the Infinispan cluster to restore with the spec.cluster field.

    apiVersion: infinispan.org/v2alpha1
    kind: Restore
    metadata:
      name: my-restore
    spec:
      backup: my-backup
      cluster: target-cluster
  4. Optionally add the spec.resources field to restore specific resources only.

    spec:
      ...
      resources:
        templates:
          - distributed-sync-prod
          - distributed-sync-dev
        caches:
          - cache-one
          - cache-two
        counters:
          - counter-name
        protoSchemas:
          - authors.proto
          - books.proto
        tasks:
          - wordStream.js
  5. Apply your Restore CR.

    $ kubectl apply -f my-restore.yaml
Verification
  • Check that the status.phase field has a status of Succeeded in the Restore CR and that Infinispan logs have the following message:

    ISPN005045: Restore 'my-backup' complete

You should then open the Infinispan Console or establish a CLI connection to verify data and Infinispan resources are restored as expected.

14.4. Backup and restore status

Backup and Restore CRs include a status.phase field that provides the status for each phase of the operation.

Status Description

Initializing

The system has accepted the request and the controller is preparing the underlying resources to create the pod.

Initialized

The controller has prepared all underlying resources successfully.

Running

The pod is created and the operation is in progress on the Infinispan cluster.

Succeeded

The operation has completed successfully on the Infinispan cluster and the pod is terminated.

Failed

The operation did not successfully complete and the pod is terminated.

Unknown

The controller cannot obtain the status of the pod or determine the state of the operation. This condition typically indicates a temporary communication error with the pod.

14.4.1. Handling failed backup and restore operations

If the status.phase field of the Backup or Restore CR is Failed, you should examine pod logs to determine the root cause before you attempt the operation again.

Procedure
  1. Examine the logs for the pod that performed the failed operation.

    Pods are terminated but remain available until you delete the Backup or Restore CR.

    $ kubectl logs <backup|restore_pod_name>
  2. Resolve any error conditions or other causes of failure as indicated by the pod logs.

  3. Create a new instance of the Backup or Restore CR and attempt the operation again.

15. Deploying custom code to Infinispan

Add custom code, such as scripts and event listeners, to your Infinispan clusters.

Before you can deploy custom code to Infinispan clusters, you need to make it available. To do this you can copy artifacts from a persistent volume (PV), download artifacts from an HTTP or FTP server, or use both methods.

15.1. Copying code artifacts to Infinispan clusters

Add your artifacts to a persistent volume (PV) and then copy them to Infinispan pods.

This procedure explains how to use a temporary pod that mounts a persistent volume claim (PVC) that:

  • Lets you add code artifacts to the PV (perform a write operation).

  • Allows Infinispan pods to load code artifacts from the PV (perform a read operation).

To perform these read and write operations, you need certain PV access modes. However, support for different PVC access modes is platform dependent.

It is beyond the scope of this document to provide instructions for creating PVCs with different platforms. For simplicity, the following procedure shows a PVC with the ReadWriteMany access mode.

In some cases only the ReadOnlyMany or ReadWriteOnce access modes are available. You can use a combination of those access modes by reclaiming and reusing PVCs with the same spec.volumeName.

Using ReadWriteOnce access mode results in all Infinispan pods in a cluster being scheduled on the same Kubernetes node.

Procedure
  1. Change to the namespace for your Infinispan cluster.

    $ kubectl config set-context --current --namespace=ispn-namespace
  2. Create a PVC for your custom code artifacts, for example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: datagrid-libs
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Mi
  3. Apply your PVC.

    $ kubectl apply -f datagrid-libs.yaml
  4. Create a pod that mounts the PVC, for example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: datagrid-libs-pod
    spec:
      securityContext:
        fsGroup: 2000
      volumes:
        - name: lib-pv-storage
          persistentVolumeClaim:
            claimName: datagrid-libs
      containers:
        - name: lib-pv-container
          image: quay.io/infinispan/server:12.1
          volumeMounts:
            - mountPath: /tmp/libs
              name: lib-pv-storage
  5. Add the pod to the Infinispan namespace and wait for it to be ready.

    $ kubectl apply -f datagrid-libs-pod.yaml
    $ kubectl wait --for=condition=ready --timeout=2m pod/datagrid-libs-pod
  6. Copy your code artifacts to the pod so that they are loaded into the PVC.

    For example, to copy code artifacts from a local libs directory, do the following:

    $ kubectl cp --no-preserve=true libs datagrid-libs-pod:/tmp/
  7. Delete the pod.

    $ kubectl delete pod datagrid-libs-pod

  8. Specify the persistent volume claim with spec.dependencies.volumeClaimName in your Infinispan CR and then apply the changes.

    apiVersion: infinispan.org/v1
    kind: Infinispan
    metadata:
      name: example-infinispan
    spec:
      replicas: 2
      dependencies:
        volumeClaimName: datagrid-libs
      service:
        type: DataGrid

If you update your custom code on the persistent volume, you must restart the Infinispan cluster so it can load the changes.
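
For example, one way to restart is to scale spec.replicas down to 0, wait for all pods to terminate, and then scale back up. This is a sketch and assumes a cluster named example-infinispan that normally runs two replicas:

$ kubectl patch infinispan example-infinispan --type merge -p '{"spec":{"replicas":0}}'
$ kubectl patch infinispan example-infinispan --type merge -p '{"spec":{"replicas":2}}'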

15.2. Downloading code artifacts

Add your artifacts to an HTTP or FTP server so that Infinispan Operator downloads them to the /opt/infinispan/server/lib directory on each Infinispan node.

When downloading files, Infinispan Operator can automatically detect the file type. Infinispan Operator also extracts archived files, such as zip or tgz, to the filesystem after the download completes.

Each time Infinispan Operator creates an Infinispan node, it downloads the artifacts to the node. The download also occurs when Infinispan Operator recreates pods after terminating them.

Prerequisites
  • Host your code artifacts on an HTTP or FTP server.

Procedure
  1. Add the spec.dependencies.artifacts field to your Infinispan CR.

    1. Specify the location of the file to download via HTTP or FTP as the value of the spec.dependencies.artifacts.url field.

    2. Optionally specify a checksum to verify the integrity of the download with the spec.dependencies.artifacts.hash field.

      The hash field requires a value in the format <algorithm>:<checksum>, where <algorithm> is one of sha1|sha224|sha256|sha384|sha512|md5.

    3. Set the file type, if necessary, with the spec.dependencies.artifacts.type field.

      You should explicitly set the file type if it is not included in the URL or if the file type is different from the extension in the URL.

      If you set type: file, Infinispan Operator downloads the file as-is without extracting it to the filesystem.

      apiVersion: infinispan.org/v1
      kind: Infinispan
      metadata:
        name: example-infinispan
      spec:
        replicas: 2
        dependencies:
          artifacts:
            - url: http://example.com:8080/path
              hash: sha256:596408848b56b5a23096baa110cd8b633c9a9aef2edd6b38943ade5b4edcd686
              type: zip
        service:
          type: DataGrid
  2. Apply the changes.

16. Sending cloud events from Infinispan clusters

Configure Infinispan as a Knative source by sending CloudEvents to Apache Kafka topics.

16.1. Cloud events

You can send CloudEvents from Infinispan clusters when entries in caches are created, updated, removed, or expired.

Infinispan sends structured events to Kafka in JSON format, as in the following example:

{
    "specversion": "1.0",
    "source": "/infinispan/<cluster_name>/<cache_name>",
    "type": "org.infinispan.entry.created",
    "time": "<timestamp>",
    "subject": "<key-name>",
    "id": "key-name:CommandInvocation:node-name:0",
    "data": {
       "property": "value"
    }
}
Field Description

type

Prefixes events for Infinispan cache entries with org.infinispan.entry.

data

Entry value.

subject

Entry key, converted to string.

id

Generated identifier for the event.

16.2. Enabling cloud events

Configure Infinispan to send CloudEvents.

Prerequisites
  • Set up a Kafka cluster that listens for Infinispan topics.

Procedure
  1. Add spec.cloudEvents to your Infinispan CR.

    1. Configure the number of acknowledgements with the spec.cloudEvents.acks field. Values are "0", "1", or "all".

    2. List Kafka servers to which Infinispan sends events with the spec.cloudEvents.bootstrapServers field.

    3. Specify the Kafka topic for Infinispan events with the spec.cloudEvents.cacheEntriesTopic field.

      spec:
        cloudEvents:
          acks: "1"
          bootstrapServers: my-cluster-kafka-bootstrap_1.<namespace_1>.svc:9092,my-cluster-kafka-bootstrap_2.<namespace_2>.svc:9092
          cacheEntriesTopic: target-topic
  2. Apply your changes.

17. Establishing remote client connections

Connect to Infinispan clusters from the Infinispan Console, Command Line Interface (CLI), and remote clients.

17.1. Client connection details

Before you can connect to Infinispan, you need to retrieve the following pieces of information:

  • Service hostname

  • Port

  • Authentication credentials, if required

  • TLS certificate, if you use encryption

Service hostnames

The service hostname depends on how you expose Infinispan on the network or if your clients are running on Kubernetes.

For clients running on Kubernetes, you can use the name of the internal service that Infinispan Operator creates.

For clients running outside Kubernetes, the service hostname is the external hostname or IP address of the load balancer if you use a load balancer service. For a node port service, the service hostname is the host name of a Kubernetes node. For a route, the service hostname is either a custom hostname or a system-defined hostname.

Ports

Client connections on Kubernetes and through load balancers use port 11222.

Node port services use a port in the default range of 30000 to 32767. Routes use either port 80 (unencrypted) or 443 (encrypted).
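
For example, the following sketch reads the port that Kubernetes assigned to a node port service. The service name example-infinispan-external is an assumption based on a cluster named example-infinispan:

$ kubectl get service example-infinispan-external -o jsonpath='{.spec.ports[0].nodePort}'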

17.2. Infinispan caches

Cache configuration defines the characteristics and features of the data store and must be valid with the Infinispan schema. Infinispan recommends creating standalone files in XML or JSON format that define your cache configuration. You should separate Infinispan configuration from application code for easier validation and to avoid the situation where you need to maintain XML snippets in Java or some other client language.

To create caches with Infinispan clusters running on Kubernetes, you should:

  • Use Cache CR as the mechanism for creating caches through the Kubernetes front end.

  • Use Batch CR to create multiple caches at a time from standalone configuration files.

  • Access Infinispan Console and create caches in XML or JSON format.

You can use Hot Rod or HTTP clients but Infinispan recommends Cache CR or Batch CR unless your specific use case requires programmatic remote cache creation.

17.3. Connecting the Infinispan CLI

Use the command line interface (CLI) to connect to your Infinispan cluster and perform administrative operations.

Prerequisites
  • Download a CLI distribution so you can connect to Infinispan clusters on OpenShift.

The CLI is available as part of the server distribution. Alternatively, you can use the CLI container image available from infinispan.org/download.

It is possible to open a remote shell to an Infinispan node and access the CLI.

$ kubectl exec -it example-infinispan-0 -- /bin/bash

However, using the CLI in this way consumes memory allocated to the container, which can lead to out of memory exceptions.

Procedure
  1. Create a CLI connection to your Infinispan cluster.

    $ bin/cli.sh -c https://$SERVICE_HOSTNAME:$PORT --trustall

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.

  2. Enter your Infinispan credentials when prompted.

  3. Perform CLI operations as required, for example:

    1. List caches configured on the cluster with the ls command.

      [//containers/default]> ls caches
      mycache
    2. View cache configuration with the describe command.

      [//containers/default]> describe caches/mycache

17.4. Accessing Infinispan Console

Access the console to create caches, perform administrative operations, and monitor your Infinispan clusters.

Prerequisites
  • Expose Infinispan on the network so you can access the console through a browser.
    For example, configure a load balancer service or create a route.

Procedure
  • Access the console from any browser at $SERVICE_HOSTNAME:$PORT.

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.

17.5. Hot Rod clients

Hot Rod is a binary TCP protocol that Infinispan provides for high-performance data transfer capabilities with remote clients.

Client intelligence

Client intelligence refers to mechanisms the Hot Rod protocol provides so that clients can locate and send requests to Infinispan pods.

Hot Rod clients running on Kubernetes can access internal IP addresses for Infinispan pods so you can use any client intelligence. The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.

Hot Rod clients running outside Kubernetes must use BASIC intelligence.

17.5.1. Hot Rod client configuration API

You can programmatically configure Hot Rod client connections with the ConfigurationBuilder interface.

$SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Infinispan cluster. You should replace these variables with the actual hostname and port for your environment.

On Kubernetes

Hot Rod clients running on Kubernetes can use the following configuration:

import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
import org.infinispan.client.hotrod.impl.ConfigurationProperties;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port(ConfigurationProperties.DEFAULT_HOTROD_PORT)
             .security().authentication()
               .username("username")
               .password("changeme")
               .realm("default")
               .saslQop(SaslQop.AUTH)
               .saslMechanism("SCRAM-SHA-512")
             .ssl()
               .sniHostName("$SERVICE_HOSTNAME")
               .trustStoreFileName("/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt")
               .trustStoreType("pem");
Outside Kubernetes

Hot Rod clients running outside Kubernetes can use the following configuration:

import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
               .host("$SERVICE_HOSTNAME")
               .port("$PORT")
             .security().authentication()
               .username("username")
               .password("changeme")
               .realm("default")
               .saslQop(SaslQop.AUTH)
               .saslMechanism("SCRAM-SHA-512")
             .ssl()
               .sniHostName("$SERVICE_HOSTNAME")
               //Create a client trust store with tls.crt from your project.
               .trustStoreFileName("/path/to/truststore.pkcs12")
               .trustStorePassword("trust_store_password")
               .trustStoreType("PCKS12");
      builder.clientIntelligence(ClientIntelligence.BASIC);

17.5.2. Hot Rod client properties

You can configure Hot Rod client connections with the hotrod-client.properties file on the application classpath.

$SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Infinispan cluster. You should replace these variables with the actual hostname and port for your environment.

On Kubernetes

Hot Rod clients running on Kubernetes can use the following properties:

# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
infinispan.client.hotrod.trust_store_file_name=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
infinispan.client.hotrod.trust_store_type=pem
Outside Kubernetes

Hot Rod clients running outside Kubernetes can use the following properties:

# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT

# Client intelligence
infinispan.client.hotrod.client_intelligence=BASIC

# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512

# Encryption
infinispan.client.hotrod.sni_host_name=$SERVICE_HOSTNAME
# Create a client trust store with tls.crt from your project.
infinispan.client.hotrod.trust_store_file_name=/path/to/truststore.pkcs12
infinispan.client.hotrod.trust_store_password=trust_store_password
infinispan.client.hotrod.trust_store_type=PKCS12

17.5.3. Configuring Hot Rod clients for certificate authentication

If you enable client certificate authentication, clients must present valid certificates when negotiating connections with Infinispan.

Validate strategy

If you use the Validate strategy, you must configure clients with a keystore so they can present signed certificates. You must also configure clients with Infinispan credentials and any suitable authentication mechanism.

Authenticate strategy

If you use the Authenticate strategy, you must configure clients with a keystore that contains signed certificates and valid Infinispan credentials as part of the distinguished name (DN). Hot Rod clients must also use the EXTERNAL authentication mechanism.

If you enable security authorization, you should assign the Common Name (CN) from the client certificate a role with the appropriate permissions.

The following example shows a Hot Rod client configuration for client certificate authentication with the Authenticate strategy:

import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.security()
             .authentication()
               .saslMechanism("EXTERNAL")
             .ssl()
               .keyStoreFileName("/path/to/keystore")
               .keyStorePassword("keystorepassword".toCharArray())
               .keyStoreType("PCKS12");

17.5.4. Creating caches from Hot Rod clients

You can remotely create caches on Infinispan clusters running on Kubernetes with Hot Rod clients. However, Infinispan recommends that you create caches using Infinispan Console, the CLI, or with Cache CRs instead of with Hot Rod clients.

Programmatically creating caches

The following example shows how to add cache configurations to the ConfigurationBuilder and then create them with the RemoteCacheManager:

import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
...

      // "builder" is the ConfigurationBuilder from the preceding configuration examples.
      builder.remoteCache("my-cache")
             .templateName(DefaultTemplate.DIST_SYNC);
      builder.remoteCache("another-cache")
             .configuration("<infinispan><cache-container><distributed-cache name=\"another-cache\"><encoding media-type=\"application/x-protostream\"/></distributed-cache></cache-container></infinispan>");
      try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
        // Get a remote cache that does not exist.
        // Rather than return null, create the cache from a template.
        RemoteCache<String, String> cache = cacheManager.getCache("my-cache");
        // Store a value.
        cache.put("hello", "world");
        // Retrieve the value and print it.
        System.out.printf("key = %s\n", cache.get("hello"));
      }

This example shows how to create a cache named CacheWithXMLConfiguration, using the XMLStringConfiguration class to pass the cache configuration as XML:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.commons.configuration.XMLStringConfiguration;
...

// The manager field is an existing RemoteCacheManager instance.
private void createCacheWithXMLConfiguration() {
    String cacheName = "CacheWithXMLConfiguration";
    String xml = String.format("<distributed-cache name=\"%s\">" +
                                  "<encoding media-type=\"application/x-protostream\"/>" +
                                  "<locking isolation=\"READ_COMMITTED\"/>" +
                                  "<transaction mode=\"NON_XA\"/>" +
                                  "<expiration lifespan=\"60000\" interval=\"20000\"/>" +
                                "</distributed-cache>"
                                , cacheName);
    manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml));
    System.out.println("Cache with configuration exists or is created.");
}
Using Hot Rod client properties

When you call cacheManager.getCache() for named caches that do not exist, Infinispan creates them from the Hot Rod client properties instead of returning null.

Add cache configuration to hotrod-client.properties as in the following example:

# Add cache configuration
infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC
infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\"another-cache\"/></cache-container></infinispan>
infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml
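
If you prefer to load the properties explicitly, the following sketch passes them to the ConfigurationBuilder; the file path is a placeholder, and RemoteCacheManager also picks up hotrod-client.properties from the classpath automatically:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
...

Properties props = new Properties();
// Placeholder path; point this at your hotrod-client.properties file.
try (InputStream in = Files.newInputStream(Paths.get("/path/to/hotrod-client.properties"))) {
    props.load(in);
}
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.withProperties(props);
try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
    // getCache() creates "my-cache" from the properties if it does not exist.
    cacheManager.getCache("my-cache");
}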

17.6. Accessing the REST API

Infinispan provides a RESTful interface that you can interact with using HTTP clients.

Prerequisites
  • Expose Infinispan on the network so you can access the REST API.
    For example, configure a load balancer service or create a route.

Procedure
  • Access the REST API with any HTTP client at $SERVICE_HOSTNAME:$PORT/rest/v2.

    Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
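
For example, the following sketch uses the JDK HTTP client to list cache names through the REST API; the hostname and port are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
...

HttpClient client = HttpClient.newHttpClient();
// Replace the placeholder with the hostname and port where Infinispan is available.
HttpRequest request = HttpRequest.newBuilder()
      .uri(URI.create("http://$SERVICE_HOSTNAME:$PORT/rest/v2/caches"))
      .build();
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
// The response body is a JSON array of cache names.
System.out.println(response.body());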

17.7. Adding caches to Cache Service pods

Cache Service pods include a default cache configuration with recommended settings. This default cache lets you start using Infinispan without the need to create caches.

Because the default cache provides recommended settings, you should create caches only as copies of the default. If you want multiple custom caches, you should create Data Grid Service pods instead of Cache Service pods.

Procedure
  • Access the Infinispan Console and provide a copy of the default configuration in XML or JSON format.

  • Use the Infinispan CLI to create a copy of the default cache as follows:

    [//containers/default]> create cache --template=default mycache

17.7.1. Default cache configuration

Cache Service pods use the following default cache configuration:

<distributed-cache name="default"
                   mode="SYNC"
                   owners="2">
  <memory storage="OFF_HEAP"
          max-size="<maximum_size_in_bytes>"
          when-full="REMOVE" />
  <partition-handling when-split="ALLOW_READ_WRITES"
                      merge-policy="REMOVE_ALL"/>
</distributed-cache>

Default caches:

  • Use synchronous distribution to store data across the cluster.

  • Create two replicas of each entry on the cluster.

  • Store cache entries as bytes in native memory (off-heap).

  • Define the maximum size for the data container in bytes. Infinispan Operator calculates the maximum size when it creates pods.

  • Evict cache entries to control the size of the data container. Instead of removing entries, you can enable automatic scaling so that Infinispan Operator adds pods when memory usage increases.

  • Use a conflict resolution strategy that allows read and write operations for cache entries, even if segment owners are in different partitions.

  • Specify a merge policy that removes entries from the cache when Infinispan detects conflicts.
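
As a brief illustration, a Hot Rod client can use this default cache without creating anything; the server address is a placeholder:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
...

ConfigurationBuilder builder = new ConfigurationBuilder();
// Placeholder address; use the hostname and port where Infinispan is available.
builder.addServer().host("$SERVICE_HOSTNAME").port(11222);
try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
    // Cache Service pods expose the cache named "default" out of the box.
    RemoteCache<String, String> cache = cacheManager.getCache("default");
    cache.put("hello", "world");
}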