Tuesday, 21 March 2017
In the latest 9.0.0.CR3 version, the Infinispan REST endpoint is secured by default, and to facilitate remote access, the Docker image includes some security-related changes.
The image now creates a default user login upon start; this user can be changed via environment variables if desired:
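A minimal sketch of how this could look — the variable names (APP_USER/APP_PASS for the application realm, MGMT_USER/MGMT_PASS for the management realm) are assumptions, so check the image documentation for your version:

```shell
# Variable names are assumptions; consult the image docs for the exact ones
docker run -e APP_USER=user -e APP_PASS=changeme jboss/infinispan-server
```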
You can check if the settings are in place by manipulating data via REST. Trying to do a curl without credentials should lead to a 401 response:
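For example, assuming the container is reachable at 172.17.0.2 and a cache named default is exposed over REST:

```shell
# No credentials supplied: the secured endpoint should reject the request
curl -i http://172.17.0.2:8080/rest/default/foo
# HTTP/1.1 401 Unauthorized
```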
So make sure to always include the credentials from now on when interacting with the REST endpoint! If using curl, this is the syntax:
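A hedged example, assuming the default DIGEST authentication mechanism; the address and credentials are illustrative:

```shell
curl -i --digest -u user:changeme -X PUT -d 'some-value' http://172.17.0.2:8080/rest/default/foo
```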
And that’s all for this post. To find out more about the Infinispan Docker image, check the documentation, give it a try and let us know if you have any issues or suggestions!
Tags: docker security server rest
Monday, 05 December 2016
In the previous post we showed how to manipulate the Infinispan Docker container configuration at both runtime and boot time.
Before diving into multi-host Docker usage, in this post we’ll explore how to create multi-container Docker applications involving Infinispan with the help of Docker Compose.
For this we’ll look at a typical scenario of an Infinispan server backed by an Oracle database as a cache store.
All the code for this sample can be found on github.
In order to have a cache with persistence with Oracle, we need to do some configuration: configure the driver in the server, create the data source associated with the driver, and configure the cache itself with JDBC persistence.
Let’s take a look at each of those steps:
The driver (ojdbc6.jar) should be downloaded and placed in the 'driver' folder of the sample project.
The module.xml declaration used to make it available on the server is as follows:
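A sketch of such a module.xml, assuming the module is named com.oracle (the name must match the driver declaration in the server configuration):

```xml
<module xmlns="urn:jboss:module:1.3" name="com.oracle">
    <resources>
        <resource-root path="ojdbc6.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```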
The data source is configured in the "datasource" element of the server configuration file as shown below:
and inside the "datasource/drivers" element, we need to declare the driver:
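A sketch of both elements, assuming the wnameless/oracle-xe-11g image defaults (SID XE, user system/oracle) and a Compose service named oracle; the JNDI name and module name are illustrative:

```xml
<datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS" enabled="true">
    <connection-url>jdbc:oracle:thin:@oracle:1521:XE</connection-url>
    <driver>oracle</driver>
    <security>
        <user-name>system</user-name>
        <password>oracle</password>
    </security>
</datasource>
```

and, inside the drivers element:

```xml
<driver name="oracle" module="com.oracle">
    <driver-class>oracle.jdbc.OracleDriver</driver-class>
</driver>
```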
Without Docker, we’d now have to download and install Oracle following OS-specific instructions, download the Infinispan Server, edit the configuration files, copy over the driver jar, and figure out how to launch the database and the server, taking care to avoid port conflicts.
If this sounds like too much work, that’s because it really is. Wouldn’t it be nice to have all of this wired together and launched with a single command? Let’s take a look at the Docker way next.
Docker Compose is a tool in the Docker stack that facilitates the configuration, execution and management of related Docker containers.
By describing the application’s components in a single YAML file, it allows centralized control of the containers, including custom configuration and parameters, and it also allows runtime interaction with each of the exposed services.
Our Docker Compose file to assemble the application is given below:
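A sketch of what such a docker-compose.yml could look like; the command argument and the volume target paths are assumptions based on the description below, so adjust them to your image version:

```yaml
version: '2'
services:
  oracle:
    image: wnameless/oracle-xe-11g
    environment:
      - ORACLE_ALLOW_REMOTE=true
  infinispan:
    image: jboss/infinispan-server:8.2.5.Final
    command: custom/clustered-jdbc
    volumes:
      - ./driver:/opt/jboss/infinispan-server/modules/system/layers/base/com/oracle/main
      - ./config:/opt/jboss/infinispan-server/standalone/configuration/custom
```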
It contains two services:
one called *oracle* that uses the wnameless/oracle-xe-11g Docker image, with an environment variable set to allow remote connections;
another called *infinispan* that uses version 8.2.5.Final of the Infinispan Server image. It is launched with a custom command pointing to the changed configuration file, and it mounts two volumes in the container: one for the driver and its module.xml, and another for the folder holding our server XML configuration.
To start the application, just execute
To inspect the status of the containers:
To follow the Infinispan server logs, use:
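The three commands above, in order (the service name matches the Compose file):

```shell
docker-compose up -d                  # start the whole application in the background
docker-compose ps                     # inspect the status of the containers
docker-compose logs -f infinispan     # follow the Infinispan server logs
```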
Infinispan usually starts faster than the database, and since the server waits until the database is ready (more on that later), keep an eye on the log output for "Infinispan Server 8.2.5.Final (WildFly Core 2.0.10.Final) started". After that, both Infinispan and Oracle are properly initialized.
Let’s insert a value using the Infinispan REST endpoint and verify it was saved to the Oracle database:
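A hedged example, assuming REST port 8080 is published locally and the cache is named testCache:

```shell
curl -X PUT -H 'Content-Type: text/plain' -d 'some-value' http://localhost:8080/rest/testCache/key1
```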
To check the Oracle database, we can attach to the container and use Sqlplus:
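For instance — the container name and the store table name below are illustrative (Compose derives container names from the project folder, and the JDBC store creates its own table):

```shell
docker exec -it oracledemo_oracle_1 sqlplus system/oracle@XE
# then, inside sqlplus, inspect the store table, e.g.
#   SELECT count(*) FROM "ISPN_ENTRY_testCache";
```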
When dealing with dependent containers in Docker-based environments, it’s highly recommended to make connection establishment between the parties robust enough that a dependency that is not fully initialized doesn’t cause the whole application to fail at startup.
Although Compose does have a depends_on instruction, it simply starts the containers in the declared order; it has no means to detect when a container is fully initialized and ready to serve requests before launching a dependent one.
One may be tempted to simply write some glue script to detect if a certain port is open, but that does not work in practice: the network socket may be opened, but the background service could still be in transient initialization state.
The recommended solution is to make whoever depends on a service retry periodically until the dependency is ready. In the Infinispan + Oracle case, we specifically configured the data source with retries to avoid failing immediately if the database is not ready:
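A sketch using the standard datasource timeout settings; the retry counts are illustrative:

```xml
<datasource jndi-name="java:jboss/datasources/OracleDS" pool-name="OracleDS">
    <connection-url>jdbc:oracle:thin:@oracle:1521:XE</connection-url>
    <driver>oracle</driver>
    <timeout>
        <!-- keep retrying the connection while Oracle boots -->
        <allocation-retry>100</allocation-retry>
        <allocation-retry-wait-millis>3000</allocation-retry-wait-millis>
    </timeout>
</datasource>
```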
When starting the application via Compose, you’ll notice that Infinispan prints some WARN messages with connection exceptions until Oracle becomes available: don’t panic, this is expected!
Docker Compose is a powerful and easy-to-use tool for launching applications involving multiple containers: in this post it allowed us to start Infinispan plus Oracle with custom configurations using a single command. It’s also a handy tool to have during the development and testing phases of a project, especially when using or evaluating Infinispan with its many possible integrations.
Tags: compose jdbc docker persistence server modules oracle cache store
Friday, 28 October 2016
In the previous post we introduced the improved Docker image for Infinispan and showed how to run it with different parameters in order to create standalone, clustered and domain mode servers.
This post will show how to address more advanced configuration changes than swapping the JGroups stack, covering cases like creating extra caches or using a pre-existent configuration file.
Since the Infinispan server is based on Wildfly, it also supports the Command Line Interface (CLI) to change configurations at runtime.
Let’s consider an example of a custom indexed cache with Infinispan storage. To configure it, we need four caches: one cache to hold our data, called testCache, and three other caches to hold the indexes: LuceneIndexesMetadata, LuceneIndexesData and LuceneIndexesLocking.
This is normally achieved by adding this piece of configuration to the server xml:
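A sketch of the cache declarations inside the cache-container element; the indexing attributes are illustrative and may differ between versions:

```xml
<distributed-cache name="testCache">
    <indexing index="ALL">
        <property name="default.directory_provider">infinispan</property>
    </indexing>
</distributed-cache>
<replicated-cache name="LuceneIndexesMetadata"/>
<distributed-cache name="LuceneIndexesData"/>
<replicated-cache name="LuceneIndexesLocking"/>
```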
This is equivalent to the following script:
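A sketch of what the CLI script could look like; the resource paths follow the datagrid-infinispan subsystem, but exact attribute names may differ between versions:

```
batch
/subsystem=datagrid-infinispan/cache-container=clustered/configurations=CONFIGURATIONS/distributed-cache-configuration=testCache:add(indexing=ALL)
/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=testCache:add(configuration=testCache)
/subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesMetadata:add()
/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=LuceneIndexesData:add()
/subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesLocking:add()
run-batch
```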
To apply it to the server, save the script to a file, and run:
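For example, copying the script into the container and feeding it to the bundled CLI (the script file name is illustrative):

```shell
docker cp caches.cli CONTAINER:/tmp/caches.cli
docker exec -it CONTAINER /opt/jboss/infinispan-server/bin/ispn-cli.sh -c --file=/tmp/caches.cli
```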
where CONTAINER is the id of the running container.
Everything that is applied using the CLI is automatically persisted in the server. To check what the script produced, dump the configuration to a local file called config.xml:
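One way to do it, assuming the server was started with the default clustered configuration file:

```shell
docker cp CONTAINER:/opt/jboss/infinispan-server/standalone/configuration/clustered.xml config.xml
```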
Check the file config.xml: it should contain all four caches created via the CLI.
Most of the time, changing the configuration at runtime is sufficient, but it may be desirable to run the server with an existing XML file, or to change settings that cannot be applied without a restart. For those cases, the easiest option is to mount a volume in the Docker container and start the container with the provided configuration.
This can be achieved with Docker’s volume support. Consider an xml file called customConfig.xml located on a local folder /home/user/config. The following command:
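The command in question could look like this, matching the paths described below:

```shell
docker run -it -v /home/user/config:/opt/jboss/infinispan-server/standalone/configuration/extra jboss/infinispan-server extra/customConfig
```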
will create a volume inside the container at the /opt/jboss/infinispan-server/standalone/configuration/extra/ directory, with the contents of the local folder /home/user/config.
The container is then launched with the entrypoint extra/customConfig, which means it will use a configuration named customConfig located in the extra folder, relative to the usual configuration directory /opt/jboss/infinispan-server/standalone/configuration.
Tags: docker server configuration cli
Wednesday, 20 July 2016
The Infinispan Docker image has been improved, making it easier to run Infinispan Servers in clustered, domain and standalone modes, with different protocol stacks.
In this blog post we’ll show a few usage scenarios and how to combine it with the jgroups-gossip image to create Infinispan Server clusters in docker based environments.
==== Getting started
By default the container runs in clustered mode, and to start a node simply execute:
Bringing up a second container will cause it to form a cluster. The membership can be verified by running a command directly in the newly launched container:
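A possible sequence — the container names are mine, and the membership check via the CLI is an assumption (reading the current JGroups view; the resource path may vary per version):

```shell
docker run -d --name ispn-1 jboss/infinispan-server     # first node
docker run -d --name ispn-2 jboss/infinispan-server     # second node, forms a cluster
# read the current JGroups view from the first node
docker exec -it ispn-1 /opt/jboss/infinispan-server/bin/ispn-cli.sh -c \
  "/subsystem=datagrid-jgroups/channel=clustered:read-attribute(name=view)"
```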
==== Using a different JGroups stack
The command above creates a cluster with the default JGroups stack (UDP), but it’s possible to pick another one provided it’s supported by the server. For example, to use TCP:
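For example — the image passes extra arguments through to the underlying server script, and the property name below is an assumption:

```shell
docker run -it jboss/infinispan-server clustered -Djboss.default.jgroups.stack=tcp
```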
==== Running on cloud environments
We recently dockerized the JGroups Gossip Router to be used as an alternative discovery mechanism in environments where multicast is not enabled, such as cloud environments.
Employing a gossip router enables discovery via TCP, with the router acting as a registry: each member registers itself in this registry upon start and discovers the other members through it.
The gossip router container can be launched by:
Take note of the address the router binds to, as it’s needed by the Infinispan nodes. The address can be easily obtained with:
Finally we can now launch our cluster specifying the tcp-gossip stack with the location of the gossip router:
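Putting the three steps together — the gossip image name and the system property names are assumptions, and 12001 is the JGroups gossip router default port:

```shell
# 1. launch the gossip router (image name assumed)
docker run -d --name gossip jboss/jgroups-gossip
# 2. obtain the address the router is bound to
GOSSIP_HOST=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' gossip)
# 3. start Infinispan nodes pointing at the router
docker run -d jboss/infinispan-server clustered \
  -Djboss.default.jgroups.stack=tcp-gossip \
  -Djgroups.gossip.initial_hosts=${GOSSIP_HOST}[12001]
```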
==== Launching Standalone mode
Passing an extra parameter allows running a server in standalone (non-clustered) mode:
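For instance (the parameter value is assumed to match the standalone profile name):

```shell
docker run -it jboss/infinispan-server standalone
```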
==== Server Management Console in Domain mode
Domain mode is a special case of clustered mode (and currently a requirement for using the Server Management Console) that involves launching a domain controller process plus one or more host controller processes. The domain controller does not hold data; it is a centralized management process that can replicate configuration and provision servers on the host controllers.
Running a domain controller is easily achievable with a parameter:
Once the domain controller is running, it’s possible to start one or more host controllers. In the default configuration, each host controller has two Infinispan server instances:
The command line interface can be used to verify the hosts managed in the domain:
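The whole flow could look like this — the parameter names and the address are assumptions, while the host listing uses the standard management operation:

```shell
# start the domain controller
docker run -d --name=dc jboss/infinispan-server domain-controller
# start a host controller, pointing it at the domain controller's address
docker run -d jboss/infinispan-server host-controller -Djboss.domain.master.address=172.17.0.2
# list the hosts managed in the domain
docker exec -it dc /opt/jboss/infinispan-server/bin/ispn-cli.sh -c ":read-children-names(child-type=host)"
```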
It should output all the host names that are part of the domain, including the master (domain controller):
To get access to the Management console, use credentials admin/admin and go to port 9990 of the domain controller, for example: http://172.17.0.2:9990/
The image is built on Dockerhub shortly after each Infinispan release (stable and unstable), and the improvements presented in this post are available for Infinispan 9.0.0.Alpha3 and Infinispan 8.2.3.Final. As a reminder, make sure to pick the right version when launching containers:
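For example:

```shell
docker run -it jboss/infinispan-server:9.0.0.Alpha3   # unstable series
docker run -it jboss/infinispan-server:8.2.3.Final    # stable series
```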
The image was created to be flexible and easy to use, but if something is not working for you or if you have any suggestions to improve it, please report it at https://github.com/jboss-dockerfiles/infinispan/issues/
Tags: docker console domain mode server jgroups
Wednesday, 09 December 2015
The connector allows the Infinispan Server to become a data source for Apache Spark, for both batch jobs and stream processing, including read and write.
In this release, the highlight is the addition of two new operators to the RDD that support filtering using native capabilities of Infinispan. The first one is filterByQuery:
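A sketch of its usage — the entity class, the remote cache and the RDD names are illustrative, not from the post:

```scala
// Build a DSL query against the remote cache and push the filtering to the server
val query = Search.getQueryFactory(remoteCache)
  .from(classOf[User])
  .having("age").gte(30)
  .build()

val adultsRDD = infinispanRDD.filterByQuery[User](query)
```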
The second operator was introduced to replace the previous configuration-based filter factory name, and was extended to support arbitrary parameters:
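A hedged sketch, assuming the operator is called filterByCustom and takes the deployed factory name followed by its parameters:

```scala
// "my-filter-factory" must be deployed on the server; parameters are passed through
val filteredRDD = infinispanRDD.filterByCustom[String]("my-filter-factory", "param1", 42)
```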
The connector has also been updated to be compatible with Spark 1.5.2 and Infinispan 8.1.0.Final.
For more details, including the full list of changes and download info, please visit the Connectors Download section. The project’s GitHub contains up-to-date info on how to get started with the connector; also make sure to try the included Docker-based demo. To report any issue or to request new features, use the new dedicated issue tracker. We’d love to get your feedback!
Tags: spark server
Friday, 16 October 2015
One of the questions we get asked a lot is: when will I be able to run Map/Reduce and DistExec jobs over HotRod?
I’m happy to say: now!
Here’s an example of a very simple script:
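For instance, a multiplication script: the first-line comment carries the execution metadata, and multiplicand/multiplier arrive as named-parameter bindings:

```javascript
// mode=local,language=javascript
multiplicand * multiplier
```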
The mode property instructs the execution engine where we want to run the script: local for running the script on the node that is handling the request and distributed for running the script wrapped by a distributed executor. Bear in mind that you can certainly use clustered operations in local mode.
Scripts can also take named parameters which will "appear" as bindings in the execution scope.
Invoking it from a Java HotRod client would look like this:
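A sketch of the client side, assuming the script was stored on the server as multiplication.js:

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import java.util.HashMap;
import java.util.Map;

RemoteCacheManager cacheManager = new RemoteCacheManager();
RemoteCache<String, Integer> cache = cacheManager.getCache();

Map<String, Object> params = new HashMap<>();
params.put("multiplicand", 10);
params.put("multiplier", 20);

// runs the named script on the server and returns its result
Object result = cache.execute("multiplication.js", params);
```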
Server-side scripts will evolve quite a bit in Infinispan 8.1, where we will add support for the broader concept of server-side tasks. These will include both scripts and deployable code that can be invoked in the same way, all managed and configured by the upcoming changes in the Infinispan Server console.
Monday, 21 September 2015
The version 0.1 of the Infinispan Hadoop connector has just been made available!
The connector will host several integrations with Hadoop-related projects, and in this first release it supports turning the Infinispan server into a Hadoop-compliant data source by providing an implementation of InputFormat and OutputFormat.
A Hadoop InputFormat is a specification of how a certain data source can be partitioned and how to read data from each of the partitions. Conversely, OutputFormat is used to write.
Looking closely at the Hadoop’s InputFormat interface, we can see two methods:
List<InputSplit> getSplits(JobContext context);
RecordReader<K,V> createRecordReader(InputSplit split, TaskAttemptContext context);
The first method essentially defines a data partitioner, calculating one or more InputSplits that contain information about a certain partition of the data. Given an InputSplit, one can obtain a RecordReader to iterate over the data. These two operations allow for parallelization of data processing across multiple nodes, and that’s how Hadoop map reduce achieves high throughput over large datasets.
In Infinispan terms, each partition is a set of segments on a certain server, and a record reader is a remote iterator over those segments. The default partitioner shipped with the connector will create as many partitions as servers in the cluster, and each partition will contain the segments that are associated with that specific server.
==== Not only map reduce
Although the InfinispanInputFormat and InfinispanOutputFormat can be used to run traditional Hadoop map reduce jobs over Infinispan data, the connector is not coupled to the Hadoop map reduce runtime. It is possible to leverage it to integrate Infinispan with other tools that, besides supporting the Hadoop I/O interfaces, are able to read and write data more efficiently. One of those tools is Apache Flink, which has a dataflow engine capable of doing both batch and stream data processing, superseding the classic two-stage map reduce approach.
==== Apache Flink example
Apache Flink supports Hadoop’s InputFormat as a data source to execute batch jobs, so integrating with Infinispan is straightforward:
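A sketch using Flink’s Hadoop compatibility wrapper — the entity type and the configuration property key are assumptions, and the connector’s actual keys may differ:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.mapreduce.Job;
import org.infinispan.hadoop.InfinispanInputFormat;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

Job job = Job.getInstance();
// point the InputFormat at the Infinispan server (property key assumed)
job.getConfiguration().set("infinispan.client.hotrod.server_list", "server:11222");

// wrap the connector's InputFormat so Flink can consume it
DataSet<Tuple2<Integer, WebPage>> data = env.createInput(
    new HadoopInputFormat<>(new InfinispanInputFormat<Integer, WebPage>(),
                            Integer.class, WebPage.class, job));
```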
Please refer to the complete sample, which has Docker images for both Apache Flink and the Infinispan server, plus detailed instructions on how to execute and customise the job.
More details about the connector, maven coordinates, configuration options, sources and samples can be found at the project repository.
Tags: yarn hadoop server flink
Friday, 13 March 2015
Openshift v3 has not been released yet, so I’m going to use the code from origin. There are many ways to install Openshift v3, but for simplicity, I’ll run a full multinode cluster locally on top of VirtualBox VMs using the provided Vagrant scripts.
Let’s start by checking out and building the sources:
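For example:

```shell
git clone https://github.com/openshift/origin
cd origin
```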
To boot Openshift, it’s a simple matter of starting up the desired number of nodes:
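The environment variable names below are what the origin Vagrant setup used at the time (an assumption worth double-checking); two minions plus a master gives the three instances mentioned next:

```shell
export OPENSHIFT_DEV_CLUSTER=true
export OPENSHIFT_NUM_MINIONS=2
vagrant up
```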
Grab a beer while the cluster is being provisioned; after a while you should be able to see 3 instances running:
The following template defines a 2 node Infinispan cluster communicating via TCP, and discovery done using the JGroups gossip router:
There are a few different components declared in this template:
A service with id jgroups-gossip-service that exposes a JGroups gossip router service on port 11000, backed by the JGroups gossip container.
A ReplicationController with id jgroups-gossip-controller. Replication Controllers are used to ensure that, at any moment, there will be a certain number of replicas of a pod (a group of related docker containers) running. If for some reason a node crashes, the ReplicationController will instantiate a new pod elsewhere, keeping the service endpoint address unchanged.
Another ReplicationController with id infinispan-controller. This controller will start 2 replicas of the infinispan-pod. As with the jgroups-pod, the infinispan-pod has only one container defined: the infinispan-server container (based on jboss/infinispan-server), which is started with the 'clustered.xml' profile and configured with the 'jgroups-gossip-service' address. By defining the gossip router as a service, Openshift guarantees that environment variables such as JGROUPS_GOSSIP_SERVICE_SERVICE_HOST are available to other pods (consumers).
To apply the template via cmd line:
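With the osc client of that era (the template file name is illustrative):

```shell
osc create -f infinispan-template.json
```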
Grab another beer, it can take a while since in this case the docker images need to be fetched on each of the minions from the public registry. In the meantime, to inspect the pods, along with their containers and statuses:
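Using the client again:

```shell
osc get pods
```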
Changing the number of pods (and thus the number of nodes in the Infinispan cluster) is a simple matter of manipulating the number of replicas in the Replication Controller. To increase the number of nodes to 4:
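With the osc client of the time this was a resize of the replication controller (the subcommand later became oc scale):

```shell
osc resize --replicas=4 rc infinispan-controller
```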
This should take only a few seconds, since the docker images are already present in all the minions.
Tags: docker openshift kubernetes paas server jgroups vagrant
Wednesday, 28 April 2010
The HTML 5 WebSocket Interface seems like a nice way of exposing an Infinispan Cache to web clients that are WebSocket enabled.
I just committed a first cut of the new Infinispan WebSocket Server to Subversion. It supports:
put/get/remove operations on your Infinispan Cache.
a notify/unnotify mechanism through which your web page can manage Cache entry update notifications, pushed to the browser.
Take a look at:
Tags: websockets server
Tuesday, 06 April 2010
We’ve just released Infinispan 4.1.0.Alpha2 with even more new functionality for the community to play with. Over the past few weeks we’ve been going back and forth on the Infinispan development list discussing Infinispan’s binary client-server protocol, Hot Rod, and in 4.1.0.Alpha2 we’re proud to present the first versions of the Hot Rod server and Java client implementations. Please visit this wiki to find out how to use Hot Rod’s Java client and server. Please note that certain functionality, such as clients receiving topology and hashing information, has not yet been implemented.
In addition, Infinispan 4.1.0.Alpha2 is the first release to feature the new LIRS eviction policy and the new eviction design that batches updates, which in combination should provide users with more efficient and accurate eviction functionality.
Another cool feature added in this release is GridFileSystem: a new, experimental API that exposes an Infinispan-backed data grid as a file system. Specifically, the API works as an extension to the JDK’s File, InputStream and OutputStream classes. You can read more on GridFileSystem here.
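A small sketch of the API — the cache names are illustrative:

```java
import org.infinispan.Cache;
import org.infinispan.io.GridFile;
import org.infinispan.io.GridFilesystem;
import java.io.OutputStream;

// two caches back the file system: one for chunks of data, one for metadata
Cache<String, byte[]> data = cacheManager.getCache("data");
Cache<String, GridFile.Metadata> metadata = cacheManager.getCache("metadata");
GridFilesystem fs = new GridFilesystem(data, metadata);

// write a file into the grid through the familiar stream API
OutputStream out = fs.getOutput("/docs/hello.txt");
out.write("hello grid".getBytes());
out.close();
```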
Finally, you can find the API docs for 4.1.0.Alpha2 here and again, please consider this an unstable release that is meant to gather feedback on the Hot Rod client/server modules and the new eviction design.
Cheers, Galder & Mircea
Tags: hotrod server