Tuesday, 16 June 2020

Infinispan Native Server Image

Starting with Infinispan 11, it’s now possible to create a natively compiled version of the Infinispan server.

TL;DR

We have a new image that contains a natively compiled Infinispan server and has a footprint of only 286 MB. Try it now:

docker run -p 11222:11222 quay.io/infinispan/server-native:11.0

Infinispan Quarkus Extensions

Quarkus provides built-in support for generating native executables, providing several abstractions to improve the development experience of creating native binaries. Building upon the new server, the Infinispan team has created Quarkus extensions for both embedded and server use-cases. These extensions allow a native binary version of the server to be compiled and run by simply executing:

mvn clean install -Dnative
./server-runner/target/infinispan-quarkus-server-runner-11.0.0.Final-runner \
    -Dquarkus.infinispan-server.config-file=infinispan.xml \
    -Dquarkus.infinispan-server.config-path=server/conf \
    -Dquarkus.infinispan-server.data-path=data \
    -Dquarkus.infinispan-server.server-path=/opt/infinispan &

Native Server Image

For many developers, manually compiling your own Infinispan native binary is not desirable, therefore we provide the infinispan/server-native image that uses a native server binary. The advantage of this over our JVM-based infinispan/server image is that we can now provide a much smaller image, 286 vs 468 MB, as we no longer need to include an OpenJDK JVM in the image.

The server-native image is configured in exactly the same way as the JVM-based infinispan/server image. We can run an authenticated Infinispan server with a single user with the following command:

docker run -p 11222:11222 -e USER="user" -e PASS="pass" quay.io/infinispan/server-native:11.0

From the output below, you can see the Quarkus banner as well as various io.quarkus log entries indicating which extensions are being used.

################################################################################
#                                                                              #
# IDENTITIES_PATH not specified                                                #
# Generating Identities yaml using USER and PASS env vars.                     #
################################################################################
2020-06-16 09:27:39,638 INFO  [io.quarkus] (main) config-generator 2.0.0.Final native (powered by Quarkus 1.5.0.Final) started in 0.069s.
2020-06-16 09:27:39,643 INFO  [io.quarkus] (main) Profile prod activated.
2020-06-16 09:27:39,643 INFO  [io.quarkus] (main) Installed features: [cdi, qute]
2020-06-16 09:27:39,671 INFO  [io.quarkus] (main) config-generator stopped in 0.001s
2020-06-16 09:27:40,306 INFO  [ListenerBean] (main) The application is starting...
2020-06-16 09:27:40,481 INFO  [org.inf.CONTAINER] (main) ISPN000128: Infinispan version: Infinispan 'Corona Extra' 11.0.0.Final
2020-06-16 09:27:40,489 INFO  [org.inf.CLUSTER] (main) ISPN000078: Starting JGroups channel infinispan with stack image-tcp
2020-06-16 09:27:45,560 INFO  [org.inf.CLUSTER] (main) ISPN000094: Received new cluster view for channel infinispan: [82914efa63fe-12913|0] (1) [82914efa63fe-12913]
2020-06-16 09:27:45,562 INFO  [org.inf.CLUSTER] (main) ISPN000079: Channel infinispan local address is 82914efa63fe-12913, physical addresses are [10.0.2.100:7800]
2020-06-16 09:27:45,566 INFO  [org.inf.CONTAINER] (main) ISPN000390: Persisted state, version=11.0.0.Final timestamp=2020-06-16T09:27:45.563303Z
2020-06-16 09:27:45,584 INFO  [org.inf.CONTAINER] (main) ISPN000104: Using EmbeddedTransactionManager
2020-06-16 09:27:45,617 INFO  [org.inf.SERVER] (ForkJoinPool.commonPool-worker-3) ISPN080018: Protocol HotRod (internal)
2020-06-16 09:27:45,618 INFO  [org.inf.SERVER] (main) ISPN080018: Protocol REST (internal)
2020-06-16 09:27:45,629 INFO  [org.inf.SERVER] (main) ISPN080004: Protocol SINGLE_PORT listening on 10.0.2.100:11222
2020-06-16 09:27:45,629 INFO  [org.inf.SERVER] (main) ISPN080034: Server '82914efa63fe-12913' listening on http://10.0.2.100:11222
2020-06-16 09:27:45,629 INFO  [org.inf.SERVER] (main) ISPN080001: Infinispan Server 11.0.0.Final started in 5457ms
2020-06-16 09:27:45,629 INFO  [io.quarkus] (main) infinispan-quarkus-server-runner 11.0.0.Final native (powered by Quarkus 1.5.0.Final) started in 5.618s.
2020-06-16 09:27:45,629 INFO  [io.quarkus] (main) Profile prod activated.
2020-06-16 09:27:45,629 INFO  [io.quarkus] (main) Installed features: [cdi, infinispan-embedded, infinispan-server]

Further Reading

For more detailed information about how to use the infinispan/server and infinispan/server-native images, please consult the official documentation.

Get it, Use it, Ask us!

The Quarkus extension and the server-native image are currently provided as a tech preview, so please try them out and let us know if you run into any issues.

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Ryan Emerson on 2020-06-16
Tags: docker native quarkus

Monday, 02 December 2019

Infinispan's new image

Infinispan 10 introduced a new server, which does not utilise the same launch commands and configuration as the legacy 9.4 WildFly-based server. Therefore, we decided that this was an excellent opportunity to rewrite our container image from scratch to better suit the capabilities of the new server and to provide all the functionality required by the Infinispan Operator.

This post focuses on the server image’s architecture. Future blog posts will focus on more advanced configurations, as well as example usage and deployment scenarios such as deploying a cluster using Kubernetes StatefulSets.

Show me the code!

The source code for the Infinispan image has a new home. The image can be found at https://github.com/infinispan/infinispan-images. Currently this repository only contains the server image; however, our intention is for this to also be the home for all future Infinispan-related images.

Where’s the Dockerfile?

The most noticeable change when looking at the repository is that there is no Dockerfile in the source tree. This is because we decided to utilise the open-source tool CEKit to build our images. CEKit is an image creation tool that allows container images to be created using multiple build engines (e.g. Docker, Buildah, Podman) with a single configuration. Installation instructions can be found here, but the basic command to create a Docker-based image is as follows.

cekit build docker

CEKit leverages .yaml files for all configuration, as opposed to a Dockerfile, as this allows image properties to be overridden at build time. For example, with CEKit it’s possible to override the server artifact version without modifying any files; instead, the following is passed as a build parameter.

cekit build --overrides '{"artifacts": [{"name": "server.zip", "path": "infinispan-server-10.0.0-SNAPSHOT.zip"}]}' docker

More detailed instructions about how to build the server image from source can be found in the image’s documentation.

Ok so where can I get a pre-built image?

Previously the Infinispan images were deployed exclusively under the jboss namespace at jboss/infinispan-server; however, this repository has now been deprecated and will eventually be removed.

Instead, all Infinispan images will now be released under the infinispan namespace and are hosted at both Quay.io and Docker Hub, as quay.io/infinispan/server and infinispan/server.

Getting Started

To get started with the Infinispan server on your local machine, simply execute:

docker run -p 11222:11222 infinispan/server

By default the image has authentication enabled on all exposed endpoints. When executing the above command, the image automatically generates a username/password combination, prints the values to stdout and then starts the Infinispan server with the authenticated Hot Rod and REST endpoints exposed on port 11222. Therefore, it’s necessary to utilise the printed credentials when attempting to access the exposed endpoints via clients.

It’s also possible to provide a username/password combination via environment variables like so:

docker run -p 11222:11222 -e USER="Titus Bramble" -e PASS="Shambles" infinispan/server

Connecting via Hotrod

Using the credentials passed in the command above, it is now possible to connect via the Hot Rod client using the following hotrod-client.properties file. Note that the following SASL properties must be configured on your client, with the username and password properties changed as required, otherwise the connection will fail:

infinispan.remote.auth-realm=default
infinispan.remote.auth-server-name=infinispan
infinispan.remote.auth-username=Titus Bramble
infinispan.remote.auth-password=Shambles

Connecting via REST

The REST endpoint is configured to utilise the DIGEST protocol, therefore it’s necessary for the HTTP client to authenticate requests. For example, the names of all caches can be retrieved via the following curl command:

 curl --digest -u "Titus Bramble:Shambles" http://localhost:11222/rest/v2/cache

Further Reading

For more detailed information about how to use the image, please consult the official documentation.

In the next blog post we will look at how the server can be configured for more advanced use-cases by supplying configuration and identity .yaml files.

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Ryan Emerson on 2019-12-02
Tags: docker

Tuesday, 21 March 2017

Docker image security changes


In the latest 9.0.0.CR3 version, the Infinispan REST endpoint is secured by default, and in order to facilitate remote access, the Docker image has some changes related to security.

The image now creates a default user login upon start; this user can be changed via environment variables if desired:
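docker run -it -e APP_USER="user" -e APP_PASS="changeme" jboss/infinispan-server
# the variable names APP_USER/APP_PASS are an assumption; check the image docs for the exact names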

You can check if the settings are in place by manipulating data via REST. Trying to do a curl without credentials should lead to a 401 response:
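curl -i http://localhost:8080/rest/default/key
# expect: HTTP/1.1 401 Unauthorized (the cache name and port are illustrative for the 9.x server)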

So make sure to always include the credentials from now on when interacting with the REST endpoint! If using curl, this is the syntax:
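curl --digest -u user:changeme http://localhost:8080/rest/default/key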

And that’s all for this post. To find out more about the Infinispan Docker image, check the documentation, give it a try and let us know if you have any issues or suggestions!

Posted by Gustavo on 2017-03-21
Tags: docker security server rest

Monday, 05 December 2016

Composing the Infinispan Docker image

In the previous post we showed how to manipulate the Infinispan Docker container configuration at both runtime and boot time.

Before diving into multi-host Docker usage, in this post we’ll explore how to create multi-container Docker applications involving Infinispan with the help of Docker Compose.

For this we’ll look at a typical scenario of an Infinispan server backed by an Oracle database as a cache store.

All the code for this sample can be found on GitHub.


Infinispan with Oracle JDBC cache store


In order to have a cache persisted to Oracle, we need to do some configuration: configure the driver in the server, create the data source associated with the driver, and configure the cache itself with JDBC persistence.

Let’s take a look at each of those steps:

Obtaining and configuring the driver

The driver (ojdbc6.jar) should be downloaded and placed in the 'driver' folder of the sample project.

The module.xml declaration used to make it available on the server is as follows:
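<!-- a minimal sketch; the module name com.oracle and its dependencies are assumptions -->
<module xmlns="urn:jboss:module:1.3" name="com.oracle">
    <resources>
        <resource-root path="ojdbc6.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>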

Configuring the Data source

The data source is configured in the "datasource" element of the server configuration file as shown below:
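<!-- a sketch: the JNDI name, credentials and the 'oracle' host alias are assumptions;
     the host matches the Compose service name introduced later -->
<datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS" enabled="true">
    <connection-url>jdbc:oracle:thin:@oracle:1521:XE</connection-url>
    <driver>oracle</driver>
    <security>
        <user-name>system</user-name>
        <password>oracle</password>
    </security>
</datasource>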

and inside the "datasource/drivers" element, we need to declare the driver:
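<!-- the module attribute must match the module.xml above -->
<drivers>
    <driver name="oracle" module="com.oracle">
        <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    </driver>
</drivers>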

Creating the cache

The last piece is to define a cache with the proper JDBC Store:
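<!-- a sketch against the 8.x server schema; the cache, table and column names are illustrative -->
<local-cache name="oracleCache">
    <string-keyed-jdbc-store datasource="java:jboss/datasources/OracleDS" passivation="false">
        <string-keyed-table prefix="ISPN">
            <id-column name="id" type="VARCHAR(255)"/>
            <data-column name="datum" type="BLOB"/>
            <timestamp-column name="ts" type="BIGINT"/>
        </string-keyed-table>
    </string-keyed-jdbc-store>
</local-cache>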

Putting it all together

At this point, without Docker, we’d need to download and install Oracle following the specific instructions for our OS, then download the Infinispan Server, edit the configuration files, copy over the driver jar, and figure out how to launch the database and the server, taking care not to have any port conflicts.

If it sounds like too much work, it’s because it really is. Wouldn’t it be nice to have all of this wired together and launched with a single command line? Let’s take a look at the Docker way next.

Enter Docker Compose

Docker Compose is a tool, part of the Docker stack, that facilitates the configuration, execution and management of related Docker containers.

By describing the application aspects in a single yaml file, it allows centralized control of the containers, including custom configuration and parameters, and it also allows runtime interactions with each of the exposed services.

Composing Infinispan

Our Docker Compose file to assemble the application is given below:
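# a sketch: the config name 'extra/oracleConfig' and the two host folders are
# assumptions, chosen to match the description below
version: "2"
services:
  oracle:
    image: wnameless/oracle-xe-11g
    environment:
      - ORACLE_ALLOW_REMOTE=true
  infinispan:
    image: jboss/infinispan-server:8.2.5.Final
    command: extra/oracleConfig
    volumes:
      - ./driver:/opt/jboss/infinispan-server/modules/system/layers/base/com/oracle/main
      - ./config:/opt/jboss/infinispan-server/standalone/configuration/extra
    ports:
      - "8080:8080"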

It contains two services:

  • one called oracle, which uses the wnameless/oracle-xe-11g Docker image, with an environment variable to allow remote connections.

  • another called infinispan, which uses version 8.2.5.Final of the Infinispan Server image. It is launched with a custom command pointing to the changed configuration file, and it also mounts two volumes in the container: one for the driver and its module.xml, and another for the folder holding our server xml configuration.

Launching

To start the application, just execute
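docker-compose up -d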

To inspect the status of the containers:
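docker-compose ps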

To follow the Infinispan server logs, use:
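docker-compose logs -f infinispan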

Infinispan usually starts faster than the database, and since the server waits until the database is ready (more on that later), keep an eye on the log output for "Infinispan Server 8.2.5.Final (WildFly Core 2.0.10.Final) started". After that, both Infinispan and Oracle are properly initialized.

Testing it

Let’s insert a value using Infinispan’s REST endpoint and verify it was saved to the Oracle database:
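curl -X PUT -d "value" http://localhost:8080/rest/oracleCache/key
# the cache name matches the sketch above; the 8.x REST endpoint listens on port 8080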

To check the Oracle database, we can attach to the container and use SQL*Plus:
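docker exec -it <oracle_container> sqlplus system/oracle
# the wnameless/oracle-xe-11g image ships with the system/oracle credentials by default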

Other operations

It’s also possible to increase and decrease the number of containers for each of the services:
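docker-compose scale infinispan=3
# scales the infinispan service to three containers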

A thing or two about startup order


When dealing with dependent containers in Docker-based environments, it’s highly recommended to make connection acquisition between the parties robust enough that one dependency not being fully initialized doesn’t cause the whole application to fail at startup.

Although Compose does have a depends_on instruction, it simply starts the containers in the declared order; it has no means to detect when a certain container is fully initialized and ready to serve requests before launching a dependent one.

One may be tempted to simply write some glue script to detect if a certain port is open, but that does not work in practice: the network socket may be open, but the background service could still be in a transient initialization state.

The recommended solution is to make whoever depends on a service retry periodically until the dependency is ready. In the Infinispan + Oracle case, we specifically configured the data source with retries to avoid failing at once if the database is not ready.

When starting the application via Compose, you’ll notice that Infinispan prints some WARN messages with connection exceptions until Oracle is available: don’t panic, this is expected!

Conclusion

Docker Compose is a powerful and easy-to-use tool to launch applications involving multiple containers: in this post it allowed us to start Infinispan plus Oracle with custom configurations with a single command. It’s also a handy tool to have during the development and testing phases of a project, especially when using/evaluating Infinispan with its many possible integrations.

Be sure to check other examples of using Docker Compose involving Infinispan: the Infinispan+Spark Twitter demo, and the Infinispan+Apache Flink demo.

Posted by Gustavo on 2016-12-05
Tags: compose jdbc docker persistence server modules oracle cache store

Friday, 28 October 2016

Infinispan Docker image: custom configuration

In the previous post we introduced the improved Docker image for Infinispan and showed how to run it with different parameters in order to create standalone, clustered and domain mode servers.

This post will show how to address more advanced configuration changes than swapping the JGroups stack, covering cases like creating extra caches or using a pre-existing configuration file.


Runtime configuration changes

Since the Infinispan server is based on WildFly, it also supports the Command Line Interface (CLI) to change configurations at runtime.

Let’s consider the example of a custom indexed cache with Infinispan storage. In order to configure it, we need 4 caches: one cache to hold our data, called testCache, and three other caches to hold the indexes: LuceneIndexesMetadata, LuceneIndexesData and LuceneIndexesLocking.

This is normally achieved by adding this piece of configuration to the server xml:
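<!-- a sketch, assuming the 8.x server schema -->
<distributed-cache name="testCache">
    <indexing index="LOCAL">
        <property name="default.directory_provider">infinispan</property>
    </indexing>
</distributed-cache>
<replicated-cache name="LuceneIndexesMetadata"/>
<replicated-cache name="LuceneIndexesData"/>
<replicated-cache name="LuceneIndexesLocking"/>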

This is equivalent to the following script:
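# a rough sketch: the datagrid-infinispan resource paths are an assumption;
# adjust them to your server's management model
batch
/subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesMetadata:add(mode=SYNC)
/subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesData:add(mode=SYNC)
/subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesLocking:add(mode=SYNC)
/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=testCache:add(mode=SYNC)
run-batch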

To apply it to the server, save the script to a file, and run:
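docker exec -it CONTAINER /opt/jboss/infinispan-server/bin/ispn-cli.sh -c --file=/tmp/caches.cli
# the script path is illustrative; copy the script into the container first, e.g. with docker cp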

where CONTAINER is the id of the running container.

Everything that is applied using the CLI is automatically persisted in the server, and to check what the script produced, use the following command to dump the config to a local file called config.xml:
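docker exec CONTAINER cat /opt/jboss/infinispan-server/standalone/configuration/clustered.xml > config.xml
# assumes the server was started with the clustered profile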

Check the file config.xml: it should contain all four caches created via the CLI.


Using an existing configuration file

Most of the time changing configuration at runtime is sufficient, but it may be desirable to run the server with an existing xml, or to change configurations that cannot be applied without a restart. For those cases, the easiest option is to mount a volume in the Docker container and start the container with the provided configuration.

This can be achieved with Docker’s volume support. Consider an xml file called customConfig.xml located on a local folder /home/user/config. The following command:
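docker run -it -v /home/user/config:/opt/jboss/infinispan-server/standalone/configuration/extra jboss/infinispan-server extra/customConfig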

will create a volume inside the container at the /opt/jboss/infinispan-server/standalone/configuration/extra/ directory, with the contents of the local folder /home/user/config.

The container is then launched with the entrypoint extra/customConfig, which means it will use a configuration named customConfig located under the extra folder, relative to the usual configuration location /opt/jboss/infinispan-server/standalone/configuration.


Conclusion

And that’s all about custom configuration using the Infinispan Docker image.

Stay tuned for the next post where we’ll dive into multi-host clusters with the Infinispan Docker image.

Posted by Gustavo on 2016-10-28
Tags: docker server configuration cli

Friday, 13 March 2015

Infinispan on Openshift v3

Openshift v3 is the open source next generation of PaaS, where applications run in Docker containers and are orchestrated/controlled/scheduled by Kubernetes.

In this post I’ll show how to create an Infinispan cluster on Openshift v3 and resize it with a snap of a finger.

Installing Openshift v3


Openshift v3 has not been released yet, so I’m going to use the code from origin. There are many ways to install Openshift v3, but for simplicity I’ll run a full multi-node cluster locally on top of VirtualBox VMs using the provided Vagrant scripts.

Let’s start by checking out and building the sources:
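git clone https://github.com/openshift/origin
cd origin
make clean build   # the build target is a sketch; it may differ depending on the checkout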

To boot Openshift, it’s a simple matter of starting up the desired number of nodes:
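# a sketch: the variable names follow origin's Vagrantfile of the time and are worth double-checking
export OPENSHIFT_DEV_CLUSTER=true
export OPENSHIFT_NUM_MINIONS=2
vagrant up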

Grab a beer while the cluster is being provisioned; after a while you should be able to see 3 instances running:
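vagrant status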


Creating the Infinispan template

The following template defines a 2-node Infinispan cluster communicating via TCP, with discovery done using the JGroups gossip router:
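# a condensed sketch (modern YAML shown for readability; the original template used
# the v1beta JSON format of the time; the gossip router image name is an assumption)
apiVersion: v1
kind: Service
metadata:
  name: jgroups-gossip-service
spec:
  ports:
    - port: 11000
  selector:
    name: jgroups-gossip-pod
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: jgroups-gossip-controller
spec:
  replicas: 1
  selector:
    name: jgroups-gossip-pod
  template:
    metadata:
      labels:
        name: jgroups-gossip-pod
    spec:
      containers:
        - name: jgroups-gossip-router
          image: jboss/jgroups-gossip   # assumption
          ports:
            - containerPort: 11000
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: infinispan-controller
spec:
  replicas: 2
  selector:
    name: infinispan-pod
  template:
    metadata:
      labels:
        name: infinispan-pod
    spec:
      containers:
        - name: infinispan-server
          image: jboss/infinispan-server
          args: ["clustered.xml"]   # started with the clustered profile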

There are a few different components declared in this template:

  • A service with id jgroups-gossip-service that will expose a JGroups gossip router service on port 11000, backed by the JGroups gossip container.

  • A ReplicationController with id jgroups-gossip-controller. Replication Controllers are used to ensure that, at any moment, there is a certain number of replicas of a pod (a group of related docker containers) running. If for some reason a node crashes, the ReplicationController will instantiate a new pod elsewhere, keeping the service endpoint address unchanged.

  • Another ReplicationController with id infinispan-controller. This controller will start 2 replicas of the infinispan-pod. As with the jgroups-pod, the infinispan-pod has only one container defined: the infinispan-server container (based on jboss/infinispan-server), which is started with the 'clustered.xml' profile and configured with the 'jgroups-gossip-service' address. By defining the gossip router as a service, Openshift guarantees that environment variables such as JGROUPS_GOSSIP_SERVICE_SERVICE_HOST are available to other pods (consumers).

Applying the template

To apply the template via the command line:
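osc process -f infinispan-template.json | osc create -f -
# 'osc' was the OpenShift client of the time (later renamed 'oc'); the file name is illustrative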

Grab another beer; it can take a while, since in this case the docker images need to be fetched on each of the minions from the public registry. In the meantime, to inspect the pods, along with their containers and statuses:
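osc get pods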

Resizing the cluster

Changing the number of pods (and thus the number of nodes in the Infinispan cluster) is a simple matter of manipulating the number of replicas in the ReplicationController. To increase the number of nodes to 4:
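osc resize --replicas=4 rc infinispan-controller
# 'resize' was the command of the era; in current clients this is 'scale'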

This should take only a few seconds, since the docker images are already present in all the minions.

And this concludes the post, be sure to check other cool features of Openshift in the project documentation and try out other samples.

Posted by Gustavo on 2015-03-13
Tags: docker openshift kubernetes paas server jgroups vagrant
