The Infinispan Server container image runs in a dedicated Java Virtual Machine (JVM) and provides client access to remote caches through Hot Rod, REST, Memcached or RESP (Redis) endpoints. Infinispan Server speeds time to deployment by separating caches from application logic and offers built-in capabilities for monitoring and administration.

1. Getting started with the Infinispan Server Container Image

To get started with Infinispan Server container image on your local machine simply execute:

Docker

docker run -p 11222:11222 infinispan/server

Podman

podman run --net=host infinispan/server

When using [podman](https://podman.io/), the --net=host option must be passed when not executing as sudo.

By default, the image has authentication enabled on all exposed endpoints. When executing the above command, the image automatically generates a username/password combination with the admin role, prints the values to stdout, and then starts the Infinispan Server with the authenticated Hot Rod and REST endpoints exposed on port 11222. It is therefore necessary to use the printed credentials when accessing the exposed endpoints via clients.

It is also possible to provide an administrator username/password combination via environment variables:

docker run -p 11222:11222 -e USER="admin" -e PASS="changeme" infinispan/server

We recommend utilising the auto-generated credentials or USER & PASS env variables for initial development only. Providing authentication and authorization configuration via an [Identities Batch file](#identities-batch) allows for much greater control.
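To check that the credentials work, you can query the REST endpoint, for example with curl. A quick sketch, assuming the server is running locally with the USER and PASS values shown above:

```shell
# List the caches on the server; authentication is required on all endpoints
curl -u admin:changeme http://localhost:11222/rest/v2/caches
```

A successful response returns a JSON array of cache names; a 401 status indicates the credentials are wrong.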

1.1. Hot Rod Clients

When connecting a Hot Rod client to the image, the following SASL properties must be configured on your client (with the username and password properties changed as required):

infinispan.client.hotrod.auth_username=admin
infinispan.client.hotrod.auth_password=changeme
infinispan.client.hotrod.sasl_mechanism=DIGEST-MD5

1.2. Identities Batch

User identities and roles can be defined by providing a CLI batch file via the IDENTITIES_BATCH env variable. All the CLI commands defined in this file are executed before the server starts, therefore it is only possible to execute offline commands; otherwise the container will fail to start. For example, including create cache …​ in the batch would fail because it requires a connection to a running Infinispan server.

Infinispan provides implicit roles for some users.

[TIP] Check the Infinispan [documentation](https://infinispan.org/docs/stable/titles/configuring/configuring.html#default-user-roles_security-authorization) to learn more about implicit roles and authorization.

Below is an example identities batch CLI file, identities.batch, that defines four users and their roles:

user create "Alan Shearer" -p "striker9" -g admin
user create "observer" -p "secret1"
user create "deployer" -p "secret2"
user create "Rigoberta Baldini" -p "secret3" -g monitor

To run the image using a local identities.batch, execute:

docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 infinispan/server

1.3. Server Configuration

The Infinispan image passes all container arguments to the created server, therefore it is possible to configure the server in the same manner as a non-containerised deployment.

The following shows how a [docker volume](https://docs.docker.com/storage/volumes/) can be mounted in order to run the Infinispan image with the local configuration file my-infinispan-config.xml located in the user's current working directory.

docker run -v $(pwd):/user-config -e IDENTITIES_BATCH="/user-config/identities.batch" -p 11222:11222 infinispan/server -c /user-config/my-infinispan-config.xml

1.3.1. Kubernetes/OpenShift Clustering

When running in a managed environment such as Kubernetes, it is not possible to utilise multicasting for initial node discovery, therefore we must utilise the JGroups [DNS_PING](http://jgroups.org/manual4/index.html#_dns_ping) protocol to discover cluster members. To enable this, we must provide the jgroups.dns.query property and configure the kubernetes stack.

To utilise the kubernetes stack with DNS_PING, execute the following:

docker run -v $(pwd):/user-config infinispan/server --bind-address=0.0.0.0  -Dinfinispan.cluster.stack=kubernetes -Djgroups.dns.query="infinispan-dns-ping.myproject.svc.cluster.local"
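For DNS_PING to resolve cluster members, the query name must correspond to a headless Kubernetes Service that selects the Infinispan pods. A minimal sketch, assuming pods labelled app: infinispan in the myproject namespace (the label and port name are illustrative, not mandated by Infinispan):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: infinispan-dns-ping
  namespace: myproject
spec:
  clusterIP: None        # headless: DNS resolves directly to the pod IPs
  selector:
    app: infinispan      # must match the labels on the Infinispan pods
  ports:
    - name: ping
      port: 8888
      protocol: TCP
```

The fully qualified name of this Service, infinispan-dns-ping.myproject.svc.cluster.local, is the value passed to jgroups.dns.query in the command above.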

1.3.2. Java Properties

It is possible to provide additional Java properties and JVM options to the server image via the JAVA_OPTIONS env variable. For example, to quickly configure CORS without providing a server.yaml file, do the following:

docker run -e JAVA_OPTIONS="-Dinfinispan.cors.enableAll=https://host.domain:port" infinispan/server

1.3.3. Deploying artifacts to the server lib directory

Deploy artifacts to the server lib directory using the SERVER_LIBS env variable. For example, to add the PostgreSQL JDBC driver to the server:

docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1" infinispan/server

The SERVER_LIBS variable supports multiple, space-separated artifacts represented as URLs or as Maven coordinates. Archive artifacts in .tar, .tar.gz or .zip formats will be extracted. Refer to the [CLI](https://infinispan.org/docs/stable/titles/cli/cli.html#install1) install command help to learn about all possible arguments and options.
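For example, a Maven coordinate and a URL can be combined in one invocation (the URL below is purely illustrative, not a real artifact location):

```shell
# Install the PostgreSQL driver from Maven Central and extract a custom archive
docker run -e SERVER_LIBS="org.postgresql:postgresql:42.3.1 https://example.com/artifacts/my-libs.tar.gz" infinispan/server
```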

1.4. Kubernetes

1.4.1. Liveness and Readiness Probes

It is recommended to utilise Infinispan’s REST endpoint in order to determine if the server is ready/live. To do this, you can utilise the Kubernetes [httpGet probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) as follows:

livenessProbe:
  httpGet:
    path: /rest/v2/container/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10
readinessProbe:
  httpGet:
    path: /rest/v2/container/health/status
    port: 11222
  failureThreshold: 5
  initialDelaySeconds: 10
  successThreshold: 1
  timeoutSeconds: 10

2. Troubleshooting Infinispan container images

2.1. Image Configuration

The image scripts that are used to configure and launch the executables can be debugged by setting the environment variable DEBUG=true as follows:

 docker run -e DEBUG=true infinispan/<image-name>

2.2. Infinispan Server

It’s also possible to debug the Infinispan Server in the image by setting the DEBUG_PORT environment variable as follows:

docker run -e DEBUG_PORT="*:8787" -p 8787:8787 infinispan/server
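With the debug port exposed, a debugger can then be attached from the host, for example with jdb from the JDK (any JDWP-capable IDE debugger works the same way):

```shell
# Attach to the JDWP socket the container exposes on port 8787
jdb -attach localhost:8787
```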

2.3. Image Tools

In order to keep the image size as small as possible, we include a minimal set of tools and utilities to help debug issues with the container.

| Task | Command |
|------|---------|
| Text editor | vi |
| Get the PID of the Java process | ps -fC java |
| Get socket/file information | lsof |
| List all open files excluding network sockets | lsof \| grep -Ev "IPv4\|IPv6\|TCP\|UDP\|sock\|FIFO\|STRM\|unix" |
| List all TCP sockets | lsof -i TCP |
| List all UDP sockets | lsof -i UDP |
| Network configuration | ip |
| Show unicast routes | ip route |
| Show multicast routes | ip maddress |
