The Infinispan Server container image runs in a dedicated Java Virtual Machine (JVM) and provides client access to remote caches through Hot Rod, REST, Memcached or RESP (Redis) endpoints. Infinispan Server speeds time to deployment by separating caches from application logic and offers built-in capabilities for monitoring and administration.

1. Getting started with the Infinispan Server Container Image

1.1. Infinispan Server Container Image

Infinispan Server as a container image requires a container manager, such as Docker or Podman.

1.1.1. Container registries

The Infinispan Server container image is available at the following registries:

Docker Hub: https://hub.docker.com/r/infinispan/server
Quay.io: https://quay.io/repository/infinispan/server

1.1.2. Container execution

Start an instance of Infinispan Server by executing the following command:

Docker
docker run -p 11222:11222 --name infinispan infinispan/server
Podman
podman run -p 11222:11222 --net=host --name infinispan infinispan/server

When using Podman, the --net=host option must be passed when not running as root (i.e. without sudo).

By default, the image has authentication enabled on all exposed endpoints. When executing the above command, the image automatically generates a username/password pair with the admin role, prints the values to stdout, and then starts the Infinispan Server with the authenticated endpoints exposed on port 11222. You must therefore use the printed credentials when accessing the exposed endpoints from clients.
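For example, assuming the printed credentials were admin / changeme (substitute the actual generated values), access to the REST endpoint can be verified with curl:

```shell
# Hypothetical credentials; replace with the pair printed to stdout at startup.
# The REST endpoint uses DIGEST authentication by default.
curl --digest -u admin:changeme http://localhost:11222/rest/v2/caches
```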

It is also possible to provide an administrator username/password combination via environment variables:

Docker
docker run -p 11222:11222 -e USER="admin" -e PASS="changeme" --name infinispan infinispan/server
Podman
podman run -p 11222:11222 -e USER="admin" -e PASS="changeme" --net=host --name infinispan infinispan/server

We recommend using the auto-generated credentials or the USER and PASS env variables for initial development only. Providing authentication and authorization configuration via an Identities Batch file allows for much greater control.

1.1.3. Hot Rod Clients

When connecting a Hot Rod client to the image, the following SASL properties must be configured on your client (with the username and password properties changed as required):

infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
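Put together with a server address, a minimal hotrod-client.properties for a local container might look like the following (the address and credentials are assumptions matching the examples above):

```properties
infinispan.client.hotrod.server_list=127.0.0.1:11222
infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5
```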

1.1.4. Identities Batch

User identities and roles can be defined by providing a CLI batch file via the IDENTITIES_BATCH env variable. All the CLI commands defined in this file are executed before the server is started, therefore only offline commands can be used; otherwise the container will fail to start. For example, including create cache …​ in the batch would fail because it requires a connection to a running Infinispan Server.

Infinispan provides implicit roles for some users. Refer to the Infinispan security documentation to learn more about implicit roles and authorization.

Below is an example identities batch CLI file, identities.batch, that defines four users and their roles:

user create "Alan Shearer" -p "striker9" -g admin
user create "observer" -p "secret1"
user create "deployer" -p "secret2"
user create "Rigoberta Baldini" -p "secret3" -g monitor

To run the image using a local identities.batch, execute:

Docker
docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --name infinispan infinispan/server
Podman
podman run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --net=host --name infinispan infinispan/server
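To verify that the users were created, you can open an interactive CLI session inside the running container (the container name and CLI path below match the image defaults used in the examples above):

```shell
# Open the Infinispan CLI inside the running container.
docker exec -it infinispan /opt/infinispan/bin/cli.sh
# Inside the session, "user ls" lists the users defined in the batch.
```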

1.1.5. Server Configuration

The Infinispan image passes all container arguments to the created server, therefore it is possible to configure the server in the same manner as a non-containerised deployment.

The example below shows how to mount the user's current working directory into the container in order to run the Infinispan image with the local configuration file my-infinispan-config.xml.

Docker
docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --name infinispan infinispan/server -c /user-config/my-infinispan-config.xml
Podman
podman run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 --net=host --name infinispan infinispan/server -c /user-config/my-infinispan-config.xml
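As a sketch, my-infinispan-config.xml might define caches like the following (the cache name and mode are illustrative; a complete server configuration also needs the interfaces, socket bindings and endpoints found in the image's default infinispan.xml):

```xml
<infinispan>
    <cache-container name="default">
        <!-- Illustrative distributed cache definition -->
        <distributed-cache name="mycache" mode="SYNC"/>
    </cache-container>
</infinispan>
```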
Kubernetes/OpenShift Clustering

When running in a managed environment such as Kubernetes, multicast cannot be used for initial node discovery, so the JGroups DNS_PING protocol must be used to discover cluster members. To enable this, provide the jgroups.dns.query property and configure the kubernetes stack.

To utilise the kubernetes stack with DNS_PING, execute the following:

Docker
docker run -v $(pwd):/user-config --name infinispan infinispan/server --bind-address=0.0.0.0  -Dinfinispan.cluster.stack=kubernetes -Djgroups.dns.query="infinispan-dns-ping.myproject.svc.cluster.local"
Podman
podman run -v $(pwd):/user-config --name infinispan infinispan/server --bind-address=0.0.0.0  -Dinfinispan.cluster.stack=kubernetes -Djgroups.dns.query="infinispan-dns-ping.myproject.svc.cluster.local"
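DNS_PING resolves cluster members through a headless Kubernetes Service whose name matches the DNS query. A minimal sketch for the query used above might look like this (the selector label and port are assumptions for your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: infinispan-dns-ping
  namespace: myproject
spec:
  clusterIP: None               # headless: DNS returns the pod IPs directly
  publishNotReadyAddresses: true
  selector:
    app: infinispan
  ports:
    - name: tcp
      port: 7800
      protocol: TCP
```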
Java Properties

It is possible to provide additional Java properties and JVM options to the server images via the JAVA_OPTIONS env variable. For example, to quickly configure CORS without providing a server.yaml file, do the following:

Docker
docker run -e JAVA_OPTIONS="-Dinfinispan.cors.enableAll=https://host.domain:port" --name infinispan infinispan/server
Podman
podman run -e JAVA_OPTIONS="-Dinfinispan.cors.enableAll=https://host.domain:port" --net=host --name infinispan infinispan/server
Using JAVA_OPTIONS will append the options to those determined by the server launch script, such as those that configure the JVM memory sizing. You can completely override these options by setting the JAVA_OPTS env variable.
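For example, assuming you want to pin the heap yourself rather than accept the launch-script defaults (the sizing values here are illustrative):

```shell
# JAVA_OPTS replaces the launch script's defaults entirely,
# so all required JVM options must be supplied here.
docker run -e JAVA_OPTS="-Xms512m -Xmx2g" -p 11222:11222 --name infinispan infinispan/server
```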
Deploying artifacts to the server lib directory

Deploy artifacts to the server lib directory using the SERVER_LIBS env variable. For example, to add the PostgreSQL JDBC driver to the server:

Docker
docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1" --name infinispan infinispan/server
Podman
podman run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1" --name infinispan infinispan/server

The SERVER_LIBS variable supports multiple, space-separated artifacts represented as URLs or as Maven coordinates. Archive artifacts in .tar, .tar.gz or .zip formats will be extracted. Refer to the CLI install command help to learn about all possible arguments and options.
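For example, to install the PostgreSQL driver together with an archive of additional libraries (the archive URL is a placeholder):

```shell
# Space-separated list mixing Maven coordinates and a URL;
# .tar, .tar.gz and .zip archives are extracted automatically.
docker run \
  -e SERVER_LIBS="org.postgresql:postgresql:42.3.1 https://example.com/extra-libs.tar.gz" \
  --name infinispan infinispan/server
```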

1.1.6. Kubernetes

Liveness and Readiness Probes

It is recommended to utilise Infinispan’s REST endpoint in order to determine if the server is ready/live. To do this, you can utilise the Kubernetes httpGet probes as follows:

livenessProbe:
  httpGet:
    path: /rest/v2/cache-managers/default/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /rest/v2/cache-managers/default/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
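The same endpoint can be exercised manually to verify that a server is live; the health status endpoint is intended for probes and responds without authentication (assuming the server is published on localhost:11222):

```shell
# Returns a plain-text status such as HEALTHY when the server is up.
curl http://localhost:11222/rest/v2/cache-managers/default/health/status
```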

2. Troubleshooting Infinispan container images

2.1. Image Configuration

The image scripts that are used to configure and launch the executables can be debugged by setting the environment variable DEBUG=true as follows:

Docker
docker run -e DEBUG=true infinispan/<image-name>
Podman
podman run -e DEBUG=true infinispan/<image-name>

2.2. Infinispan Server

It’s also possible to debug the Infinispan Server in the image by setting the DEBUG_PORT environment variable as follows:

Docker
docker run -e DEBUG_PORT="*:8787" -p 8787:8787 infinispan/server
Podman
podman run -e DEBUG_PORT="*:8787" -p 8787:8787 infinispan/server
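With the debug port published, any JDWP-capable debugger on the host can attach, for example jdb from the JDK:

```shell
jdb -attach localhost:8787
```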

2.3. Image Tools

In order to keep the image size as small as possible, we include a minimal set of tools and utilities to help debug issues with the container.

Text editor: vi
Get the PID of the Java process: ps -fC java
Get socket/file information: lsof
List all open files excluding network sockets: lsof | grep -Ev 'IP|IPv4|IPv6|TCP|UDP|sock|FIFO|STRM|unix'
List all TCP sockets: lsof -i TCP
List all UDP sockets: lsof -i UDP
Network configuration: ip
Show unicast routes: ip route
Show multicast routes: ip maddress