Create Infinispan clusters with a Helm chart that lets you specify values for build and deployment configuration.
1. Deploying Infinispan clusters as Helm chart releases
Build, configure, and deploy Infinispan clusters with Helm. Infinispan provides a Helm chart that packages resources for running Infinispan clusters on Kubernetes.
Install the Infinispan chart to create a Helm release, which instantiates an Infinispan cluster in your Kubernetes project.
1.1. Installing the Infinispan chart through the OpenShift console
Use the OpenShift Web Console to install the Infinispan chart from the Red Hat developer catalog. Installing the chart creates a Helm release that deploys an Infinispan cluster.
- Have access to OpenShift.
- Log in to the OpenShift Web Console.
- Select the Developer perspective.
- Open the Add view and then select Helm Chart to browse the Red Hat developer catalog.
- Locate and select the Infinispan chart.
- Specify a name for the chart and select a version.
- Define values in the following sections of the Infinispan chart:
  - Images configures the container images to use when creating pods for your Infinispan cluster.
  - Deploy configures your Infinispan cluster.
  To find descriptions for each value, select the YAML view option and access the schema. Edit the YAML configuration to customize your Infinispan chart.
- Select Install.
- Select the Helm view in the Developer perspective.
- Select the Helm release you created to view details, resources, and other information.
1.2. Installing the Infinispan chart on the command line
Use the command line to install the Infinispan chart on Kubernetes and instantiate an Infinispan cluster. Installing the chart creates a Helm release that deploys an Infinispan cluster.
- Install the helm client.
- Add the OpenShift Helm Charts repository.
- Have access to an OpenShift or Kubernetes cluster.
- Have a kubectl or oc client.
- Create a values file that configures your Infinispan cluster.
For example, the following values file creates a cluster with two nodes:
$ cat > infinispan-values.yaml<<EOF
#Build configuration
images:
  server: quay.io/infinispan/server:latest
  initContainer: registry.access.redhat.com/ubi8-micro
#Deployment configuration
deploy:
  #Add a user with full security authorization.
  security:
    batch: "user create admin -p changeme"
  #Create a cluster with two pods.
  replicas: 2
  #Specify the internal Kubernetes cluster domain.
  clusterDomain: cluster.local
EOF
You can find descriptions and values for each field in the Infinispan chart README.
- Install the Infinispan chart and specify your values file.
$ helm install infinispan openshift-helm-charts/infinispan-infinispan --values infinispan-values.yaml
- Watch the pods to ensure all nodes in the Infinispan cluster are created successfully.
$ kubectl get pods -w
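You can also check the status of the Helm release with the helm client, for example:
$ helm status infinispan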
1.3. Upgrading Infinispan Helm releases
Modify your Infinispan cluster configuration at runtime by upgrading Helm releases.
- Deploy the Infinispan chart.
- Have a helm client.
- Have a kubectl or oc client.
- Modify the values file for your Infinispan deployment as appropriate, for example to change the number of nodes (see the example after this procedure).
- Use the helm client to apply your changes, for example:
$ helm upgrade infinispan openshift-helm-charts/infinispan-infinispan --values infinispan-values.yaml
- Watch the pods rebuild to ensure all changes are applied to your Infinispan cluster successfully.
$ kubectl get pods -w
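For example, to scale the cluster from two nodes to three, change the deploy.replicas value in the infinispan-values.yaml file from the install example and then apply it with the helm upgrade command shown above:
deploy:
  #Create a cluster with three pods.
  replicas: 3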
1.4. Uninstalling Infinispan Helm releases
Uninstall a release of the Infinispan chart to remove pods and other deployment artifacts.
This procedure shows you how to uninstall an Infinispan deployment on the command line but you can use the OpenShift Web Console instead. Refer to the OpenShift documentation for specific instructions.
- Deploy the Infinispan chart.
- Have a helm client.
- Have a kubectl or oc client.
- List the installed Infinispan Helm releases.
$ helm list
- Use the helm client to uninstall a release and remove the Infinispan cluster:
$ helm uninstall <helm_release_name>
- Use the kubectl client to remove the generated secret.
$ kubectl delete secret <helm_release_name>-generated-secret
1.5. Deployment configuration values
Deployment configuration values let you customize Infinispan clusters.
You can also find field and value descriptions in the Infinispan chart README.
Field | Description | Default value |
---|---|---|
deploy.clusterDomain | Specifies the internal Kubernetes cluster domain. | cluster.local |
deploy.replicas | Specifies the number of nodes in your Infinispan cluster, with a pod created for each node. | 1 |
deploy.container.extraJvmOpts | Passes JVM options to Infinispan Server. | No default value. |
deploy.container.libraries | Libraries to be downloaded before server startup. Specify multiple, space-separated artifacts represented as URLs or as Maven coordinates. Archive artifacts in .tar, .tar.gz, or .zip formats are extracted. | No default value. |
deploy.container.storage.ephemeral | Defines whether storage is ephemeral or permanent. | The default value is false, which means storage is permanent. |
deploy.container.storage.size | Defines how much storage is allocated to each Infinispan pod. | 1Gi |
deploy.container.storage.storageClassName | Specifies the name of a StorageClass object to use for the persistent volume claim. | No default value. By default, the persistent volume claim uses the storage class that has the storageclass.kubernetes.io/is-default-class annotation. |
deploy.container.resources.limits.cpu | Defines the CPU limit, in CPU units, for each Infinispan pod. | 500m |
deploy.container.resources.limits.memory | Defines the maximum amount of memory, in bytes, for each Infinispan pod. | 512Mi |
deploy.container.resources.requests.cpu | Specifies the maximum CPU requests, in CPU units, for each Infinispan pod. | 500m |
deploy.container.resources.requests.memory | Specifies the maximum memory requests, in bytes, for each Infinispan pod. | 512Mi |
deploy.security.secretName | Specifies the name of a secret that creates credentials and configures security authorization. | No default value. If you create a custom security secret then deploy.security.batch does not take effect. |
deploy.security.batch | Provides a batch file for the Infinispan command line interface (CLI) to create credentials and configure security authorization at startup. | No default value. |
deploy.expose.type | Specifies the service that exposes Hot Rod and REST endpoints on the network and provides access to your Infinispan cluster, including the Infinispan Console. | Route |
deploy.expose.nodePort | Specifies a network port for node port services within the default range of 30000 to 32767. | 0. If you do not specify a port, the platform selects an available one. |
deploy.expose.host | Optionally specifies the hostname where the Route is exposed. | No default value. |
deploy.expose.annotations | Adds annotations to the service that exposes Infinispan on the network. | No default value. |
deploy.logging.categories | Configures Infinispan cluster log categories and levels. | No default value. |
deploy.podLabels | Adds labels to each Infinispan pod that you create. | No default value. |
deploy.svcLabels | Adds labels to each service that you create. | No default value. |
deploy.resourceLabels | Adds labels to all Infinispan resources including pods and services. | No default value. |
deploy.makeDataDirWritable | Allows write access to the data directory for each Infinispan Server node. | false |
deploy.securityContext | Configures the securityContext used by the StatefulSet pods. | {} |
deploy.monitoring.enabled | Enables or disables monitoring using a ServiceMonitor. | |
deploy.nameOverride | Specifies a name for all Infinispan cluster resources. | Helm Chart release name. |
deploy.nodeAffinity | Defines the nodeAffinity policy used by the cluster’s StatefulSet. | |
deploy.podAffinity | Defines the podAffinity policy used by the cluster’s StatefulSet. | |
deploy.podAntiAffinity | Defines the podAntiAffinity policy used by the cluster’s StatefulSet. | |
deploy.infinispan | Infinispan Server configuration. | Infinispan provides default server configuration. For more information about configuring server instances, see Infinispan Server configuration values. |
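For example, a values file fragment that combines several of the fields in this table (a sketch; the hostname and credentials are illustrative):
deploy:
  #Create a cluster with three pods.
  replicas: 3
  #Specify the internal Kubernetes cluster domain.
  clusterDomain: cluster.local
  security:
    #Add a user with full security authorization.
    batch: "user create admin -p changeme"
  expose:
    #Expose Infinispan through an ingress with a custom hostname.
    type: Route
    host: infinispan.example.com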
2. Configuring Infinispan Servers
Apply custom Infinispan Server configuration to your deployments.
2.1. Customizing Infinispan Server configuration
Apply custom deploy.infinispan values to Infinispan clusters to configure the Cache Manager and underlying server mechanisms such as security realms or Hot Rod and REST endpoints.
You must always provide a complete Infinispan Server configuration when you modify deploy.infinispan values.
Do not modify or remove the default "metrics" configuration if you want to use monitoring capabilities for your Infinispan cluster.
Modify Infinispan Server configuration as required:
- Specify configuration values for the Cache Manager with deploy.infinispan.cacheContainer fields.
  For example, you can create caches at startup with any Infinispan configuration or add cache templates and use them to create caches on demand.
- Configure security authorization to control user roles and permissions with the deploy.infinispan.cacheContainer.security.authorization field.
- Select one of the default JGroups stacks or configure cluster transport with the deploy.infinispan.cacheContainer.transport fields.
- Configure Infinispan Server endpoints with the deploy.infinispan.server.endpoints fields.
- Configure Infinispan Server network interfaces and ports with the deploy.infinispan.server.interfaces and deploy.infinispan.server.socketBindings fields.
- Configure Infinispan Server security mechanisms with the deploy.infinispan.server.security fields.
2.2. Infinispan Server configuration values
Infinispan Server configuration values let you customize the Cache Manager and modify server instances that run in Kubernetes pods.
deploy:
infinispan:
cacheContainer:
# [USER] Add cache, template, and counter configuration.
name: default
# [USER] Specify `security: null` to disable security authorization.
security:
authorization: {}
transport:
cluster: ${infinispan.cluster.name:cluster}
node-name: ${infinispan.node.name:}
stack: kubernetes
server:
endpoints:
# [USER] Hot Rod and REST endpoints.
- securityRealm: default
socketBinding: default
# [METRICS] Metrics endpoint for cluster monitoring capabilities.
- connectors:
rest:
restConnector:
authentication:
mechanisms: BASIC
securityRealm: metrics
socketBinding: metrics
interfaces:
- inetAddress:
value: ${infinispan.bind.address:127.0.0.1}
name: public
security:
credentialStores:
- clearTextCredential:
clearText: secret
name: credentials
path: credentials.pfx
securityRealms:
# [USER] Security realm for the Hot Rod and REST endpoints.
- name: default
# [USER] Comment or remove this properties realm to disable authentication.
propertiesRealm:
groupProperties:
path: groups.properties
groupsAttribute: Roles
userProperties:
path: users.properties
# [METRICS] Security realm for the metrics endpoint.
- name: metrics
propertiesRealm:
groupProperties:
path: metrics-groups.properties
relativeTo: infinispan.server.config.path
groupsAttribute: Roles
userProperties:
path: metrics-users.properties
plainText: true
relativeTo: infinispan.server.config.path
socketBindings:
defaultInterface: public
portOffset: ${infinispan.socket.binding.port-offset:0}
socketBinding:
# [USER] Socket binding for the Hot Rod and REST endpoints.
- name: default
port: 11222
# [METRICS] Socket binding for the metrics endpoint.
- name: metrics
port: 11223
deploy:
infinispan:
cacheContainer:
distributedCache:
name: "mycache"
mode: "SYNC"
owners: "2"
segments: "256"
capacityFactor: "1.0"
statistics: "true"
encoding:
mediaType: "application/x-protostream"
expiration:
lifespan: "5000"
maxIdle: "1000"
memory:
maxCount: "1000000"
whenFull: "REMOVE"
partitionHandling:
whenSplit: "ALLOW_READ_WRITES"
mergePolicy: "PREFERRED_NON_NULL"
#Provide additional Cache Manager configuration.
server:
#Provide configuration for server instances.
deploy:
infinispan:
cacheContainer:
distributedCacheConfiguration:
name: "my-dist-template"
mode: "SYNC"
statistics: "true"
encoding:
mediaType: "application/x-protostream"
expiration:
lifespan: "5000"
maxIdle: "1000"
memory:
maxCount: "1000000"
whenFull: "REMOVE"
#Provide additional Cache Manager configuration.
server:
#Provide configuration for server instances.
deploy:
infinispan:
cacheContainer:
transport:
#Specifies the name of a default JGroups stack.
stack: kubernetes
#Provide additional Cache Manager configuration.
server:
#Provide configuration for server instances.
3. Configuring authentication and authorization
Control access to Infinispan clusters by adding credentials and assigning roles with different permissions.
3.1. Default credentials
Infinispan adds default credentials in a <helm_release_name>-generated-secret secret.
Username | Description |
---|---|
developer | User that has the admin role. |
monitor | Internal user that has the monitor role. |
3.1.1. Retrieving credentials
Get Infinispan credentials from authentication secrets.
- Install the Infinispan Helm chart.
- Have a kubectl or oc client.
- Retrieve default credentials from the <helm_release_name>-generated-secret or custom credentials from another secret with the following command:
$ kubectl get secret <helm_release_name>-generated-secret \
  -o jsonpath="{.data.identities-batch}" | base64 --decode
3.2. Adding custom user credentials or credentials store
Create Infinispan user credentials and assign roles that grant security authorization for cluster access.
- Create credentials by specifying the user create command in the deploy.security.batch field.
User with implicit authorization
deploy:
  security:
    batch: 'user create admin -p changeme'
User with a specific role
deploy:
  security:
    batch: 'user create personone -p changeme -g deployer'
3.2.1. User roles and permissions
Infinispan uses role-based access control to authorize users for access to cluster resources and data. For additional security, you should grant Infinispan users appropriate roles when you add credentials.
Role | Permissions | Description |
---|---|---|
admin | ALL | Superuser with all permissions including control of the Cache Manager lifecycle. |
deployer | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE | Can create and delete Infinispan resources in addition to application permissions. |
application | ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR | Has read and write access to Infinispan resources in addition to observer permissions. |
observer | ALL_READ, MONITOR | Has read access to Infinispan resources in addition to monitor permissions. |
monitor | MONITOR | Can view statistics for Infinispan clusters. |
3.2.2. Adding a credential store
Create a credential store to avoid exposing passwords in clear text in the server configuration ConfigMap. See Enabling TLS encryption for a use case.
- Create a credential store by specifying a credentials add command in the deploy.security.batch field.
Add a password to a store
deploy:
  security:
    batch: 'credentials add keystore -c password -p secret --path="credentials.pfx"'
- Add the credential store to the server configuration.
Configure a credential store
deploy:
  infinispan:
    server:
      security:
        credentialStores:
          - name: credentials
            path: credentials.pfx
            clearTextCredential:
              clearText: "secret"
3.2.3. Adding multiple credentials with authentication secrets
Add multiple credentials to Infinispan clusters with authentication secrets.
- Have a kubectl or oc client.
- Create an identities-batch file that contains the commands to add your credentials.
apiVersion: v1
kind: Secret
metadata:
  name: connect-secret
type: Opaque
stringData:
  # The "monitor" user authenticates with the Prometheus ServiceMonitor.
  username: monitor
  # The password for the "monitor" user.
  password: password
  # The key must be 'identities-batch'.
  # The content is "user create" commands for the Infinispan CLI.
  identities-batch: |-
    user create user1 -p changeme -g admin
    user create user2 -p changeme -g deployer
    user create monitor -p password --users-file metrics-users.properties --groups-file metrics-groups.properties
    credentials add keystore -c password -p secret --path="credentials.pfx"
- Create an authentication secret from your identities-batch file.
$ kubectl apply -f identities-batch.yaml
- Specify the authentication secret in the deploy.security.secretName field.
deploy:
  security:
    authentication: true
    secretName: 'connect-secret'
- Install or upgrade your Infinispan Helm release.
3.3. Disabling authentication
Allow users to access Infinispan clusters and manipulate data without providing credentials.
Do not disable authentication if endpoints are accessible from outside the Kubernetes cluster. You should disable authentication for development environments only.
- Remove the propertiesRealm fields from the "default" security realm, as in the sketch after this procedure.
- Install or upgrade your Infinispan Helm release.
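For reference, a minimal sketch of the "default" security realm with authentication disabled, based on the default server configuration shown earlier (all other server values stay unchanged):
deploy:
  infinispan:
    server:
      security:
        securityRealms:
          # Authentication is disabled because this realm no longer defines a propertiesRealm.
          - name: default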
3.4. Disabling security authorization
Allow Infinispan users to perform any operation regardless of their role.
- Set null as the value for the deploy.infinispan.cacheContainer.security field.
  Use the --set deploy.infinispan.cacheContainer.security=null argument with the helm client, or set the value in a values file as shown after this procedure.
- Install or upgrade your Infinispan Helm release.
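The equivalent setting in a values file is a one-line change (a sketch; all other deploy.infinispan values stay as in your configuration):
deploy:
  infinispan:
    cacheContainer:
      #Disable security authorization for the Cache Manager.
      security: null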
4. Configuring encryption
Configure encryption for your Infinispan clusters.
4.1. Enabling TLS encryption
Encryption can be independently enabled for endpoint and cluster transport.
- A secret containing a certificate or a keystore. Endpoint and cluster transport should use different secrets. See the example after this procedure for one way to create these secrets.
- A credential store containing any password needed to access the keystore. See Adding a credential store.
- Set the secret name in the deploy configuration.
Provide the name of the secret containing the keystore:
deploy:
  ssl:
    endpointSecretName: "tls-secret"
    transportSecretName: "tls-transport-secret"
- Enable cluster transport TLS.
deploy:
  infinispan:
    cacheContainer:
      transport:
        urn:infinispan:server:15.0:securityRealm: "cluster-transport" (1)
    server:
      security:
        securityRealms:
          - name: cluster-transport
            serverIdentities:
              ssl:
                keystore: (2)
                  alias: "server"
                  path: "/etc/encrypt/transport/cert.p12"
                  credentialReference: (3)
                    store: credentials
                    alias: keystore
                truststore: (4)
                  path: "/etc/encrypt/transport/cert.p12"
                  credentialReference: (3)
                    store: credentials
                    alias: truststore
1 Configures the transport stack to use the specified security realm to provide cluster encryption.
2 Configures the keystore path in the transport realm. The secret is mounted at /etc/encrypt/transport.
3 Alias and password must be provided in case the secret contains a keystore.
4 Configures the truststore with the same keystore, allowing the nodes to authenticate each other.
- Enable endpoint TLS.
deploy:
  infinispan:
    server:
      security:
        securityRealms:
          - name: default
            serverIdentities:
              ssl:
                keystore:
                  path: "/etc/encrypt/endpoint/keystore.p12" (1)
                  alias: "server" (2)
                  credentialReference:
                    store: credentials (3)
                    alias: keystore (3)
1 Configures the keystore path in the endpoint realm. The secret is mounted at /etc/encrypt/endpoint.
2 Alias must be provided in case the secret contains a keystore.
3 Any password must be provided via the credential store.
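As an illustration, you can create the prerequisite secrets from keystore files with kubectl. This is a sketch: the secret names match the deploy.ssl example above, and the keystore.p12 and cert.p12 file names match the paths used in the security realm configuration.
$ kubectl create secret generic tls-secret --from-file=keystore.p12=./keystore.p12
$ kubectl create secret generic tls-transport-secret --from-file=cert.p12=./cert.p12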
5. Configuring network access
Configure network access for your Infinispan deployment and find out about internal network services.
5.1. Exposing Infinispan clusters on the network
Make Infinispan clusters available on the network so you can access Infinispan Console as well as REST and Hot Rod endpoints. By default, the Infinispan chart exposes deployments through a Route but you can configure it to expose clusters via Load Balancer or Node Port. You can also configure the Infinispan chart so that deployments are not exposed on the network and only available internally to the Kubernetes cluster.
- Specify one of the following for the deploy.expose.type field:
Option | Description |
---|---|
Route | Exposes Infinispan through an ingress. This is the default value. |
LoadBalancer | Exposes Infinispan through a load balancer service. |
NodePort | Exposes Infinispan through a node port service. |
"" (empty value) | Disables exposing Infinispan on the network. |
- Optionally specify a hostname with the deploy.expose.host field if you expose Infinispan through an ingress.
- Optionally specify a port with the deploy.expose.nodePort field if you expose Infinispan through a node port service (see the example after this procedure).
- Install or upgrade your Infinispan Helm release.
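For example, a values file fragment that exposes the cluster through a node port service on a fixed port (a sketch; the port value is illustrative and must fall within the 30000 to 32767 range):
deploy:
  expose:
    #Expose Infinispan through a node port service.
    type: NodePort
    #Pin the node port instead of letting the platform select one.
    nodePort: 30222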
5.2. Retrieving network service details
Get network service details so you can connect to Infinispan clusters.
- Expose your Infinispan cluster on the network.
- Have a kubectl or oc client.
Use one of the following commands to retrieve network service details:
- If you expose Infinispan through an ingress:
$ kubectl get ingress
- If you expose Infinispan through a load balancer or node port service:
$ kubectl get services
5.3. Network services
The Infinispan chart creates default network services for internal access.
Service | Port | Protocol | Description |
---|---|---|---|
<helm_release_name> | 11222 | TCP | Provides access to Infinispan Hot Rod and REST endpoints. |
<helm_release_name> | 11223 | TCP | Provides access to Infinispan metrics. |
<helm_release_name>-ping | 8888 | TCP | Allows Infinispan pods to discover each other and form clusters. |
You can retrieve details about internal network services as follows:
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
infinispan ClusterIP 192.0.2.0 <none> 11222/TCP,11223/TCP
infinispan-ping ClusterIP None <none> 8888/TCP
6. Connecting to Infinispan clusters
After you configure and deploy Infinispan clusters you can establish remote connections through the Infinispan Console, command line interface (CLI), Hot Rod client, or REST API.
6.1. Accessing Infinispan Console
Access the console to create caches, perform administrative operations, and monitor your Infinispan clusters.
- Expose your Infinispan cluster on the network.
- Retrieve network service details.
- Access Infinispan Console from any browser at $SERVICE_HOSTNAME:$PORT.
  Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
6.2. Connecting with the command line interface (CLI)
Use the Infinispan CLI to connect to clusters and create caches, manipulate data, and perform administrative operations.
- Expose your Infinispan cluster on the network.
- Retrieve network service details.
- Download the native Infinispan CLI distribution from infinispan-quarkus releases.
- Extract the .zip archive for the native Infinispan CLI distribution to your host filesystem.
- Start the Infinispan CLI with the network service as the value for the -c argument, for example:
$ infinispan-cli -c http://cluster-name-myroute.hostname.net/
- Enter your Infinispan credentials when prompted.
- Perform CLI operations as required.
  Press the tab key or use the --help argument to view available options and help text.
- Use the quit command to exit the CLI.
6.3. Connecting Hot Rod clients running on Kubernetes
Access remote caches with Hot Rod clients running on the same Kubernetes cluster as your Infinispan cluster.
- Retrieve network service details.
- Specify the internal network service details for your Infinispan cluster in the client configuration.
  In the following configuration examples, $SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Infinispan cluster.
- Specify your credentials so the client can authenticate with Infinispan.
- Configure client intelligence, if required.
  Hot Rod clients running on Kubernetes can use any client intelligence because they can access internal IP addresses for Infinispan pods.
  The default intelligence, HASH_DISTRIBUTION_AWARE, is recommended because it allows clients to route requests to primary owners, which improves performance.
Programmatic configuration
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
import org.infinispan.client.hotrod.impl.ConfigurationProperties;
...
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
.host("$SERVICE_HOSTNAME")
.port(ConfigurationProperties.DEFAULT_HOTROD_PORT)
.security().authentication()
.username("username")
.password("changeme")
.realm("default")
.saslQop(SaslQop.AUTH)
.saslMechanism("SCRAM-SHA-512");
Hot Rod client properties
# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT
# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
6.3.1. Obtaining IP addresses for all Infinispan pods
You can retrieve a list of all IP addresses for running Infinispan pods.
Connecting Hot Rod clients running on Kubernetes is the recommended approach because it ensures the initial connection goes to one of the available pods.
Obtain the IP addresses for running Infinispan pods in one of the following ways:
- Using the Kubernetes API:
  Access ${APISERVER}/api/v1/namespaces/<chart-namespace>/endpoints/<helm-release-name> to retrieve the endpoints Kubernetes resource associated with the <helm-release-name> service.
- Using the Kubernetes DNS service:
  Query the DNS service for the name <helm-release-name>-ping to obtain IPs for all the nodes in a cluster (see the example after this list).
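For instance, with a kubectl client you can read the same endpoints resource, or resolve the ping service with a DNS lookup from a pod inside the cluster (a sketch; nslookup must be available in the pod image):
$ kubectl get endpoints <helm-release-name> -o jsonpath='{.subsets[*].addresses[*].ip}'
$ nslookup <helm-release-name>-ping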
6.4. Connecting Hot Rod clients running outside Kubernetes
Access remote caches with Hot Rod clients running externally to the Kubernetes cluster where you deploy your Infinispan cluster.
- Expose your Infinispan cluster on the network.
- Retrieve network service details.
- Specify the network service details for your Infinispan cluster in the client configuration.
  In the following configuration examples, $SERVICE_HOSTNAME:$PORT denotes the hostname and port that allows access to your Infinispan cluster.
- Specify your credentials so the client can authenticate with Infinispan.
- Configure clients to use BASIC intelligence.
Programmatic configuration
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.configuration.SaslQop;
...
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer()
.host("$SERVICE_HOSTNAME")
.port("$PORT")
.security().authentication()
.username("username")
.password("changeme")
.realm("default")
.saslQop(SaslQop.AUTH)
.saslMechanism("SCRAM-SHA-512");
builder.clientIntelligence(ClientIntelligence.BASIC);
Hot Rod client properties
# Connection
infinispan.client.hotrod.server_list=$SERVICE_HOSTNAME:$PORT
# Client intelligence
infinispan.client.hotrod.client_intelligence=BASIC
# Authentication
infinispan.client.hotrod.use_auth=true
infinispan.client.hotrod.auth_username=developer
infinispan.client.hotrod.auth_password=$PASSWORD
infinispan.client.hotrod.auth_server_name=$CLUSTER_NAME
infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth
infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512
6.5. Accessing the REST API
Infinispan provides a RESTful interface that you can interact with using HTTP clients.
- Expose your Infinispan cluster on the network.
- Retrieve network service details.
- Access the REST API with any HTTP client at $SERVICE_HOSTNAME:$PORT/rest/v2 (see the example after this procedure).
  Replace $SERVICE_HOSTNAME:$PORT with the hostname and port where Infinispan is available on the network.
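For example, a sketch that lists the caches in the cluster with curl, authenticating as the default developer user (the /rest/v2/caches resource returns cache names; replace $PASSWORD with the password you retrieved from the generated secret):
$ curl -u developer:$PASSWORD http://$SERVICE_HOSTNAME:$PORT/rest/v2/caches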