Wednesday, 22 July 2020

Anchored keys - scaling up a cluster without transferring values

Background

For background, the preferred way to scale up the storage capacity of an Infinispan cluster is to use distributed caches. A distributed cache stores each key/value pair on num-owners nodes, and each node can compute the location of a key (aka the key owners) directly.

Infinispan achieves this by statically mapping cache keys to num-segments consistent hash segments, and then dynamically mapping segments to nodes based on the cache’s topology (roughly the current plus the historical membership of the cache). Whenever a new node joins the cluster, the cache is rebalanced, and the new node replaces an existing node as the owner of some segments. The key/value pairs in those segments are copied to the new node and removed from the no-longer-owner node via state transfer.
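
Conceptually, the static part of that mapping is just a hash of the key reduced to a segment index. The sketch below is a simplified illustration of the idea, not Infinispan's exact hash function:

// Simplified key -> segment mapping; Infinispan actually uses MurmurHash3
// and a slightly different segment computation, but the idea is the same.
static int segmentOf(Object key, int numSegments) {
    // Mask the sign bit so the result is always non-negative
    return (key.hashCode() & Integer.MAX_VALUE) % numSegments;
}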

Because the allocation of segments to nodes is based on random UUIDs generated at start time, it is common (though less so after ISPN-11679) for segments to also move from one old node to another old node.

Architecture

The basic idea is to skip the static mapping of keys to segments and to map keys directly to nodes.

When a key/value pair is inserted into the cache, the newest member becomes the anchor owner of that key, and the only node storing the actual value. In order to make the anchor location available without an extra remote lookup, all the other nodes store a reference to the anchor owner.

That way, when another node joins, it only needs to receive the location information from the existing nodes, and values can stay on the anchor owner, minimizing the amount of traffic.
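
The following self-contained toy model illustrates the idea; all the names are invented for this example and are not the module's actual API. Every node replicates the small key-to-owner map, only the anchor owner keeps the value, and a joiner only needs to receive the map:

import java.util.HashMap;
import java.util.Map;

// Toy model of anchored keys; names are illustrative, not the module's API.
class AnchoredCacheModel {
    // key -> anchor owner address, replicated on every node
    final Map<String, String> anchorOwner = new HashMap<>();
    // per-node value store; only the anchor owner holds the value
    final Map<String, Map<String, String>> nodeStore = new HashMap<>();
    String newestNode;

    void join(String node) {
        // a joiner receives only the small anchorOwner map, never the values
        nodeStore.put(node, new HashMap<>());
        newestNode = node;
    }

    void put(String key, String value) {
        // the newest member becomes the anchor owner of newly inserted keys
        anchorOwner.put(key, newestNode);
        nodeStore.get(newestNode).put(key, value);
    }

    String get(String key) {
        String owner = anchorOwner.get(key);
        // one extra hop to the owner, unless we happen to be it
        return owner == null ? null : nodeStore.get(owner).get(key);
    }
}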

Limitations

Only one node can be added at a time

An external actor (e.g. a Kubernetes/OpenShift operator, or a human administrator) must monitor the load on the current nodes, and add a new node whenever the newest node is close to "full".

Because the anchor owner information is replicated on all the nodes, and values are never moved off a node, the memory usage of each node will keep growing as new entries and nodes are added.

There is no redundancy

Every value is stored on a single node. When a node crashes or even stops gracefully, the values stored on that node are lost.

Transactions are not supported

A later version may add transaction support, but the fact that any node stop or crash loses entries makes transactions a lot less valuable than in a distributed cache.

Hot Rod clients do not know the anchor owner

Hot Rod clients cannot use the topology information from the servers to locate the anchor owner. Instead, the server receiving a Hot Rod get request must make an additional request to the anchor owner in order to retrieve the value.

Configuration

The module is still very young and does not yet support many Infinispan features.

Eventually, if it proves useful, it may become another cache mode, just like scattered caches. For now, configuring a cache with anchored keys requires a replicated cache with a custom element anchored-keys:

<?xml version="1.0" encoding="UTF-8"?>
<infinispan
      xmlns="urn:infinispan:config:11.0"
      xmlns:anchored="urn:infinispan:config:anchored:11.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:11.0
            https://infinispan.org/schemas/infinispan-config-11.0.xsd
            urn:infinispan:config:anchored:11.0
            https://infinispan.org/schemas/infinispan-anchored-config-11.0.xsd">

    <cache-container default-cache="default">
        <transport/>
        <replicated-cache name="default">
            <anchored:anchored-keys/>
        </replicated-cache>
    </cache-container>

</infinispan>

When the <anchored-keys/> element is present, the module automatically enables anchored keys and makes some required configuration changes:

  • Disables await-initial-transfer

  • Enables conflict resolution with the equivalent of

    <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/>

The cache will fail to start if these attributes are explicitly set to other values, if state transfer is disabled, or if transactions are enabled.
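
Programmatic configuration should look roughly like the sketch below. AnchoredKeysConfigurationBuilder is assumed to be the module's configuration builder; treat the exact class name, package and methods as assumptions and check the module's documentation:

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;

// Hedged sketch: the builder class name below is an assumption based on
// the module's XML element name.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.REPL_SYNC);
builder.addModule(AnchoredKeysConfigurationBuilder.class).enabled(true);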

Implementation status

Basic operations are implemented: put, putIfAbsent, get, replace, remove, putAll, getAll.
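
For example, assuming the XML configuration above is saved as infinispan-anchored.xml, the implemented operations are used exactly as on any other cache:

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class AnchoredOps {
    public static void main(String[] args) throws Exception {
        // "infinispan-anchored.xml" is the configuration shown above
        try (DefaultCacheManager manager = new DefaultCacheManager("infinispan-anchored.xml")) {
            Cache<String, String> cache = manager.getCache("default");
            cache.put("k1", "v1");            // value stored on the newest member only
            cache.putIfAbsent("k1", "other"); // no-op: k1 is already present
            cache.replace("k1", "v2");
            System.out.println(cache.get("k1")); // prints v2
            cache.remove("k1");
        }
    }
}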

Functional commands

The FunctionalMap API is not implemented.

Other operations that rely on the functional API’s implementation do not work either: merge, compute, computeIfPresent, computeIfAbsent.

Partition handling

When a node crashes, surviving nodes do not remove anchor references pointing to that node. In theory, this could allow merges to skip conflict resolution, but currently the PREFERRED_NON_NULL merge policy is configured automatically and cannot be changed.

Listeners

Cluster listeners and client listeners are implemented and receive the correct notifications.

Non-clustered embedded listeners currently receive notifications on all the nodes, not just the node where the value is stored.

Performance considerations

Client/Server Latency

The client always contacts the primary owner, so any read has a (N-1)/N probability of requiring a unicast RPC from the primary to the anchor owner. For example, in a 4-node cluster, 3 out of 4 reads need the extra hop.

Writes require the primary to send the value to one node and the anchor address to all the other nodes, which is currently done with N-1 unicast RPCs.

In theory we could send in parallel one unicast RPC for the value and one multicast RPC for the address, but that would require additional logic to ignore the address on the anchor owner, and with TCP, multicast RPCs are implemented as parallel unicasts anyway.

Memory overhead

Compared to a distributed cache with one owner, an anchored-keys cache contains copies of all the keys and their locations, plus the overhead of the cache itself.

Therefore, a node with anchored-keys caches should stop accepting new entries when it has less than (<key size> + <per-key overhead>) * <number of entries not yet inserted> bytes available.

The number of entries not yet inserted is obviously very hard to estimate. In the future we may provide a way to limit the overhead of key location information, e.g. by using a distributed cache.

The per-key overhead is lowest for off-heap storage, around 63 bytes: 8 bytes for the entry reference in MemoryAddressHash.memory, 29 bytes for the off-heap entry header, and 26 bytes for the serialized RemoteMetadata with the owner’s address.

The per-key overhead of the ConcurrentHashMap-based on-heap cache, assuming a 64-bit JVM with compressed OOPs, would be around 92 bytes: 32 bytes for ConcurrentHashMap.Node, 32 bytes for MetadataImmortalCacheEntry, 24 bytes for RemoteMetadata, and 4 bytes in the ConcurrentHashMap.table array.
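
Putting the formula and the per-key estimates together, a minimal capacity check could look like this; the 63- and 92-byte constants are the estimates above, while the average key size and the number of remaining entries are inputs you must estimate yourself:

class CapacityEstimate {
    // Bytes that must remain available for the key + location metadata of
    // the entries still to be inserted, per the formula above.
    static long keyLocationBytes(long avgKeySize, boolean offHeap, long entriesNotYetInserted) {
        long perKeyOverhead = offHeap ? 63 : 92; // estimated bytes per key
        return (avgKeySize + perKeyOverhead) * entriesNotYetInserted;
    }
}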

State transfer

State transfer does not transfer the actual values, but it still needs to transfer all the keys and the anchor owner information.

Assuming that the values are much bigger than the keys, the anchored cache’s state transfer should also be much faster than the state transfer of a distributed cache of a similar size. But for small values, there may not be a visible improvement.

The initial state transfer does not block a joiner from starting, because it will just ask another node for the anchor owner. However, the remote lookups can be expensive, especially in embedded mode, but also in server mode, if the client is not HASH_DISTRIBUTION_AWARE.

Posted by Dan Berindei on 2020-07-22
Tags: anchored keys state transfer

Friday, 03 July 2020

Infinispan 11.0.1.Final

Dear Infinispan community,

we hope you’ve been enjoying all the new goodies included in our latest major release, Infinispan 11. To show that we care about you, we have a brand new micro release for you which addresses a number of issues.

In particular, if you are using HTTP/2 with TLS/SSL, JCache with persistence, Spring Boot or RocksDB, we have fixes for you.

Additionally, the Infinispan Archetypes have been resurrected and are now being maintained as part of the main repository to ensure they won’t fall out of sync anymore. Read more about how to get started with a Maven archetype.

The following list shows what we have fixed:

Component Upgrade

ISPN-11843 - Upgrade SB starter to 2.3 (https://issues.redhat.com/browse/ISPN-11843)
ISPN-12009 - Upgrade Hibernate to latest micro (https://issues.redhat.com/browse/ISPN-12009)
ISPN-12013 - Upgrade H2 database engine to 1.4.200 (https://issues.redhat.com/browse/ISPN-12013)
ISPN-12014 - Upgrade mojo-executor (https://issues.redhat.com/browse/ISPN-12014)

Enhancement

ISPN-11151 - Migrating some remote tests from jdg-functional-tests to upstream (https://issues.redhat.com/browse/ISPN-11151)
ISPN-11549 - Move Infinispan SB starter simple tutorials to simple tutorials repository (https://issues.redhat.com/browse/ISPN-11549)
ISPN-11782 - Docs: Cross-Site monitoring (https://issues.redhat.com/browse/ISPN-11782)
ISPN-11828 - Docs: Add stable docs to infinispan.org/documentation (https://issues.redhat.com/browse/ISPN-11828)
ISPN-11913 - Docs: Add search and improve index pages (https://issues.redhat.com/browse/ISPN-11913)
ISPN-11996 - Allow customize memory and memory swap for Testcontainers images (https://issues.redhat.com/browse/ISPN-11996)
ISPN-12001 - Add jboss-parent to upstream projects (https://issues.redhat.com/browse/ISPN-12001)
ISPN-12006 - Test upload schema with CLI (https://issues.redhat.com/browse/ISPN-12006)
ISPN-12007 - Elytron 1.12.1.Final (https://issues.redhat.com/browse/ISPN-12007)
ISPN-12010 - Remove Apache Commons Codec (https://issues.redhat.com/browse/ISPN-12010)
ISPN-12012 - Force the same Guava version in all transitive dependencies (https://issues.redhat.com/browse/ISPN-12012)
ISPN-12021 - Docs: Creating Caches Remotely (https://issues.redhat.com/browse/ISPN-12021)
ISPN-12039 - Docs: Hot Rod Per-Cache Simple Tutorial (https://issues.redhat.com/browse/ISPN-12039)
ISPN-12045 - Clarify jboss-marshalling deprecation message (https://issues.redhat.com/browse/ISPN-12045)
ISPN-12047 - Merge Async and Sync Cross-Site attributes (https://issues.redhat.com/browse/ISPN-12047)
ISPN-12053 - Remove jetty-client from the REST testsuite (https://issues.redhat.com/browse/ISPN-12053)
ISPN-12059 - CliIT allow external module use (https://issues.redhat.com/browse/ISPN-12059)
ISPN-12065 - Add the anchored-keys module to the server (https://issues.redhat.com/browse/ISPN-12065)
ISPN-12068 - HTTP/2 pipeline missing chunked handler (https://issues.redhat.com/browse/ISPN-12068)

Bug

ISPN-11998 - Eviction new and legacy attributes should stay in sync (https://issues.redhat.com/browse/ISPN-11998)
ISPN-12017 - Explicitly disable the java8-test execution defined in the jboss-parent POM (https://issues.redhat.com/browse/ISPN-12017)
ISPN-12018 - Fix JpaStoreCompatibilityTest failure (https://issues.redhat.com/browse/ISPN-12018)
ISPN-12019 - Always attempt to initialize openssl (https://issues.redhat.com/browse/ISPN-12019)
ISPN-12026 - Fetch the correct IP:port when NodePort is used (https://issues.redhat.com/browse/ISPN-12026)
ISPN-12027 - RemoteCacheContainer missing getCache overrides (https://issues.redhat.com/browse/ISPN-12027)
ISPN-12030 - BlockHound is not active on JDK 13/14 (https://issues.redhat.com/browse/ISPN-12030)
ISPN-12032 - JCache cache loader should not require marshalling (https://issues.redhat.com/browse/ISPN-12032)
ISPN-12038 - RocksDB compression options incomplete and incorrectly applied (https://issues.redhat.com/browse/ISPN-12038)
ISPN-12043 - Shared stores should not have (add|remove)Segments methods invoked (https://issues.redhat.com/browse/ISPN-12043)
ISPN-12046 - Out of the box server testing is broken (https://issues.redhat.com/browse/ISPN-12046)
ISPN-12056 - Some tests are failing on windows when they try to delete the SingleFileStore (https://issues.redhat.com/browse/ISPN-12056)
ISPN-12058 - wildfly/feature-pack module doesn't build with profile java8-test (https://issues.redhat.com/browse/ISPN-12058)
ISPN-12060 - WildFly modules integration tests do not work on WildFly 19 (https://issues.redhat.com/browse/ISPN-12060)
ISPN-12064 - REST server returns 403 (forbidden) for same origin request (https://issues.redhat.com/browse/ISPN-12064)
ISPN-12067 - HTTP/2 framing error for invalid requests (https://issues.redhat.com/browse/ISPN-12067)
ISPN-12069 - Unable to override the marshaller in SB starter (https://issues.redhat.com/browse/ISPN-12069)

Sub-task

ISPN-11953 - Create client archetype (https://issues.redhat.com/browse/ISPN-11953)
ISPN-11954 - Move archetypes to Infinispan repository (https://issues.redhat.com/browse/ISPN-11954)
ISPN-11955 - Remove testcase-archetype (https://issues.redhat.com/browse/ISPN-11955)
ISPN-11956 - Rework store-archetype to use the new NonBlockingStore SPI (https://issues.redhat.com/browse/ISPN-11956)
ISPN-11957 - Upgrade embedded archetype to 11.0 (https://issues.redhat.com/browse/ISPN-11957)
ISPN-11958 - Document Archetypes (https://issues.redhat.com/browse/ISPN-11958)

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Tristan Tarrant on 2020-07-03
Tags: release

Tuesday, 16 June 2020

Infinispan Native Server Image

Starting with Infinispan 11, it’s now possible to create a natively compiled version of the Infinispan server.

TL;DR

We have a new image that contains a natively compiled Infinispan server and has a footprint of only 286MB. Try it now:

docker run -p 11222:11222 quay.io/infinispan/server-native:11.0

Infinispan Quarkus Extensions

Quarkus provides built-in support for generating native executables, providing several abstractions to improve the development experience of creating native binaries. Building upon the new server, the Infinispan team has created a Quarkus extension for both the embedded and server use cases. These extensions allow a native binary version of the server to be compiled and run by simply executing:

mvn clean install -Dnative
./server-runner/target/infinispan-quarkus-server-runner-11.0.0.Final-runner \
    -Dquarkus.infinispan-server.config-file=infinispan.xml \
    -Dquarkus.infinispan-server.config-path=server/conf \
    -Dquarkus.infinispan-server.data-path=data \
    -Dquarkus.infinispan-server.server-path=/opt/infinispan &

Native Server Image

For many developers, compiling your own Infinispan native binary manually is not desirable, therefore we provide the infinispan/server-native image that uses a native server binary. The advantage of this over our JVM-based infinispan/server image is that we can now provide a much smaller image, 286 vs 468 MB, as we no longer need to include an OpenJDK JVM in the image.

The server-native image is configured exactly the same as the JVM based infinispan/server image. We can run an authenticated Infinispan server with a single user with the following command:

docker run -p 11222:11222 -e USER="user" -e PASS="pass" quay.io/infinispan/server-native:11.0

From the output below, you can see the Quarkus banner as well as various io.quarkus logs indicating which extensions are being used.

################################################################################
#                                                                              #
# IDENTITIES_PATH not specified                                                #
# Generating Identities yaml using USER and PASS env vars.                     #
################################################################################
2020-06-16 09:27:39,638 INFO  [io.quarkus] (main) config-generator 2.0.0.Final native (powered by Quarkus 1.5.0.Final) started in 0.069s.
2020-06-16 09:27:39,643 INFO  [io.quarkus] (main) Profile prod activated.
2020-06-16 09:27:39,643 INFO  [io.quarkus] (main) Installed features: [cdi, qute]
2020-06-16 09:27:39,671 INFO  [io.quarkus] (main) config-generator stopped in 0.001s
2020-06-16 09:27:40,306 INFO  [ListenerBean] (main) The application is starting...
2020-06-16 09:27:40,481 INFO  [org.inf.CONTAINER] (main) ISPN000128: Infinispan version: Infinispan 'Corona Extra' 11.0.0.Final
2020-06-16 09:27:40,489 INFO  [org.inf.CLUSTER] (main) ISPN000078: Starting JGroups channel infinispan with stack image-tcp
2020-06-16 09:27:45,560 INFO  [org.inf.CLUSTER] (main) ISPN000094: Received new cluster view for channel infinispan: [82914efa63fe-12913|0] (1) [82914efa63fe-12913]
2020-06-16 09:27:45,562 INFO  [org.inf.CLUSTER] (main) ISPN000079: Channel infinispan local address is 82914efa63fe-12913, physical addresses are [10.0.2.100:7800]
2020-06-16 09:27:45,566 INFO  [org.inf.CONTAINER] (main) ISPN000390: Persisted state, version=11.0.0.Final timestamp=2020-06-16T09:27:45.563303Z
2020-06-16 09:27:45,584 INFO  [org.inf.CONTAINER] (main) ISPN000104: Using EmbeddedTransactionManager
2020-06-16 09:27:45,617 INFO  [org.inf.SERVER] (ForkJoinPool.commonPool-worker-3) ISPN080018: Protocol HotRod (internal)
2020-06-16 09:27:45,618 INFO  [org.inf.SERVER] (main) ISPN080018: Protocol REST (internal)
2020-06-16 09:27:45,629 INFO  [org.inf.SERVER] (main) ISPN080004: Protocol SINGLE_PORT listening on 10.0.2.100:11222
2020-06-16 09:27:45,629 INFO  [org.inf.SERVER] (main) ISPN080034: Server '82914efa63fe-12913' listening on http://10.0.2.100:11222
2020-06-16 09:27:45,629 INFO  [org.inf.SERVER] (main) ISPN080001: Infinispan Server 11.0.0.Final started in 5457ms
2020-06-16 09:27:45,629 INFO  [io.quarkus] (main) infinispan-quarkus-server-runner 11.0.0.Final native (powered by Quarkus 1.5.0.Final) started in 5.618s.
2020-06-16 09:27:45,629 INFO  [io.quarkus] (main) Profile prod activated.
2020-06-16 09:27:45,629 INFO  [io.quarkus] (main) Installed features: [cdi, infinispan-embedded, infinispan-server]

Further Reading

For more detailed information about how to use the infinispan/server and infinispan/server-native images, please consult the official documentation.

Get it, Use it, Ask us!

The Quarkus extension and the server-native image are currently provided as a tech preview, so please try them out and let us know if you run into any issues.

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Ryan Emerson on 2020-06-16
Tags: docker native quarkus

Monday, 15 June 2020

Infinispan 11.0.0.Final

Dear Infinispan community,

We’re proud to announce the release of Infinispan 11. In the tradition of assigning beer codenames to our releases, we decided that "Corona Extra" would be a significant representation of the period during which most of the development has happened. We hope that you, your families and friends have not been impacted by the pandemic.

But didn’t you release 10.x not long ago?

Indeed, but version numbers are just that: numbers. We are still continuing our near-quarterly releases, but, from now on, these will be identified by major version numbers.

So, what’s new in Infinispan 11?

As usual we added new features, improved existing ones and prepared the groundwork for upcoming features.

Conflict detection and resolution for Asynchronous Cross-Site Replication

Cross-site replication is one of our most used features, as it enables a number of very useful use-cases such as geographical load distribution, zero-downtime disaster recovery and follow-the-sun data centers.

In this release we completely overhauled the way we implement asynchronous cross-site replication by introducing conflict resolution, based on vector clocks, as well as multiple site masters to increase throughput and reliability. This means that you can have multiple active sites safely replicating data between each other.

Server security overhaul

Infinispan Server’s security, while very powerful, was also tricky to set up because of the configuration complexity. Since we wanted to make the server secure by default, we put a lot of work in simplifying the configuration and removing all of the boilerplate. Additionally, if you are securing the server with Keycloak, accessing the console will correctly obtain credentials through the realm login page.

Non-blocking internals

Our quest to make better use of the available hardware resources in all deployment models (bare-metal, containerized, virtualized) continues as we’ve now consolidated a lot of thread-pools into just two: non-blocking and blocking. Most of the code now makes use of the non-blocking pool. Paths which may block, such as certain persistent stores, use the blocking pool so that they don’t hold up work that may be processed without blocking. This release also includes a new non-blocking Store SPI, so that you can take advantage of stores with real non-blocking I/O.

Clustering

As Infinispan is participating in CloudButton, a Serverless Data Analytics Platform which is part of the European Union’s Horizon 2020 research and innovation programme, we have introduced a new optional feature which allows scaling by adding new nodes to a cluster without state-transfer. This means that you can add capacity with zero-impact to your operations. Obviously this comes at the cost of reduced resilience in case of failures, but, for scenarios where high availability is not required, this gives you a highly scalable in-memory storage solution.

If high availability is your thing, the rebalancing algorithm, which decides how segments (our subdivision of the data space) are mapped to nodes, has been overhauled to be much more accurate and fair.

Query/Indexing

Querying and indexing will be the major focus in Infinispan 12 (with the long awaited upgrade to Hibernate Search 6 and Lucene 8). In preparation for that, a lot of work has gone into deprecations, usability, clean ups and documentation.

Hot Rod Client improvements

Many usability changes have been added to our Java Hot Rod client:

  • a Hot Rod URI as a compact way to configure a connection (see the sketch after this list)

  • automatic creation of caches on demand using supplied configurations/templates with support for wildcards

  • improved iteration of entries by concurrently splitting work across segments/nodes
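
For instance, connecting with a Hot Rod URI is roughly a one-liner; the host and credentials below are placeholders:

import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// The URI carries servers and credentials in one compact string
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.uri("hotrod://admin:secret@localhost:11222");
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());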

Other Server changes

If you are using the JDBC cache store to persist your cache entries to a database, Infinispan Server now restores the ability to create shared datasources which was lost when we abandoned the WildFly base.

CLI

The CLI received a number of new features such as logging manipulation, obtaining server reports and user management, superseding the user-tool script.

Console

Our console overhaul, which started in 10, continues with lots of new features, integrations and polishing. Highlights are:

  • entry creation dialog box

  • querying

  • KeyCloak integration

Clouds, containers and operators

Our Infinispan Server image is now based on ubi-minimal:8.2.

And thanks to our friends over at Quarkus, Infinispan Server is now also available as a native image built using GraalVM. This image is available on Quay.io and Docker Hub.

The Kubernetes Operator adds a new Cache Custom Resource and the ability to expose services via Ingress and Routes.

Documentation

Documentation has also received a lot of love in all areas:

  • Added procedural content for rolling upgrades, Cache CR with the Operator, server patching, misc CLI commands, using RemoteCacheConfigurationBuilder.

  • Procedural content for different upgrade and migration tasks included in Upgrade Guide.

  • Operator and Spring Boot Starter guides now provide stable and development versions from the index page.

  • Updated index.html and throughout documentation to improve high-level context and aid retrievability.

  • Getting Started content updated and streamlined.

  • Applied several modifications, additions, and removals to documentation via community feedback.

What’s next?

As briefly mentioned above, Infinispan 12 will be our next release, scheduled for this autumn. We will be working on query/index improvements, backup/restore capabilities, as well as the usual load of improvements and clean-ups across the board. We will keep you posted with development releases and blog posts about upcoming highlights. If you’d like to contribute, just get in touch.

Get it, Use it, Ask us!

Please download, report bugs, chat with us, ask questions on StackOverflow.

Posted by Tristan Tarrant on 2020-06-15
Tags: release

Tuesday, 09 June 2020

Off Heap enhancements

The off-heap implementation in Infinispan has become much more widely used since its introduction, and a number of issues and improvements have been identified to bring this storage type more in line with its on-heap counterpart. For those of you who are unaware, the off-heap setting is only "off" the JVM heap: the data still resides in the native memory of the application's process.

The best part of all the changes below is that the user does not need to change anything, other than configuring off-heap storage.
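
For reference, with the Infinispan 11 schema, enabling off-heap storage is a single attribute on the memory element; the cache name and size below are placeholders:

<distributed-cache name="example">
    <!-- Entries are stored in native memory, capped at max-size -->
    <memory storage="OFF_HEAP" max-size="1GB"/>
</distributed-cache>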

Resizing Off Heap Container

Those of you that have used or configured off-heap storage before may have noticed a configuration option named address count. This setting controlled how many address pointers the container had; you can think of it as the number of buckets in a HashMap. Unfortunately, the number of pointers was fixed, so the user had to know in advance how many elements they expected to have.

This setting also had another problem: configuring a larger number of pointers increased startup time, because the container may be iterated multiple times while it is still empty. Iterating over a container of one million empty pointers is much slower than iterating over one with only 1,024, for example.

I am glad to say that as of Infinispan 10.0.0.Final, this setting and the performance of iteration have been greatly improved.

Configuration

The address count variable is now ignored, and the off-heap container starts with a small number of "buckets", in the range of 128 to 256. We then apply a load factor of 0.75, which means we automatically increase the number of underlying "buckets" once the number of inserted entries reaches 75% of the current "bucket" count.

Each resize doubles the number of "buckets". The resize is performed concurrently with other operations, with minimal blocking, since we have a number of locks equal to twice the number of CPUs.

This allows a cache with off-heap storage to start significantly faster and removes a configuration option that was unneeded. Note that the container, just like a java.util.HashMap, will not decrease its number of "buckets" once it grows to a given size.
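
The policy boils down to a classic load-factor check. Here is an illustrative (single-threaded) sketch of the growth rule described above, not the actual concurrent implementation:

// Illustration of the growth policy: start small, double the "buckets"
// once entries reach 75% of the current count.
class BucketGrowth {
    int buckets = 256;
    long entries;

    void onInsert() {
        entries++;
        if (entries >= buckets * 0.75) {
            buckets *= 2; // the real container resizes concurrently
        }
    }
}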

Iteration changes

I mentioned that iteration during startup was slower with a larger number of "buckets". This was partly due to their sheer number, but iteration was also plagued by an inefficient access pattern. In addition to rewriting the resize operation, we have optimized the memory layout so that "buckets" can be iterated sequentially, which provides more mechanical sympathy.

Hash changes

This one is rather short and sweet: the old hash algorithm we used caused too many collisions for objects whose hash functions return values in a similar range, such as java.lang.Integer and java.lang.String (with shared starting characters).

Therefore, it has been changed to provide better spreading. This is part of Infinispan 10.0.0.Final.
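
One common way to get better spreading, similar in spirit to what java.util.HashMap does, is to mix the high bits of the hash into the low bits; the sketch below is illustrative only, not Infinispan's exact function:

// Mix high bits into low bits so near-consecutive hash codes (e.g.
// Integer keys) spread across "buckets"; illustrative only.
static int spread(int hash) {
    hash ^= (hash >>> 16);
    return hash * 0x9E3779B9; // multiply by a large odd constant
}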

Expiration bugs

Unfortunately, off heap had a few issues with expiration: it didn’t support max idle, and expiration metadata was not properly transferred to new nodes during state transfer.

With the max idle algorithm rewritten, off heap now properly supports max idle as of 10.1.4.Final and 11.0.0.Final.

The transfer of off-heap expiration metadata to new nodes was fixed in 10.1.8.Final and 11.0.0.Final.

Posted by William Burns on 2020-06-09
Tags: off-heap storage
