Friday, 25 September 2015

Memory based eviction

Eviction Today

Since its inception, Infinispan has offered a way to help users control how much JVM memory the in-memory cache entries consume.  This has always been expressed as a maximum number of entries.  Users had to estimate the average number of bytes their entries used on the heap, and from that average calculate how many entries could safely be stored in memory without running into issues (for example, a 1 GB budget with an average entry size of 1 KB allows roughly one million entries).  For users whose keys and values are all of a similar size this works well.  However, when entries vary widely in size this becomes problematic, and you end up sizing the cache based on the worst case.

Enter Memory Based Eviction

Infinispan 8 introduces memory based eviction counting.  That is, Infinispan will automatically keep track of how much memory the key, the value and, where possible, the per-entry overhead consume.  It can then use these sizes to limit the cache to a memory amount, such as 1 gigabyte, instead of a number of entries.

Key/Value limitations

Unfortunately this is currently limited to keys and values stored as primitives, primitive wrappers (e.g. Integer), java.lang.String, and arrays of any of these types.  This means the feature cannot be used with custom classes.  If enough feedback is provided we could add an SPI that lets users plug in their own size counter for their own classes, but this is not currently planned.

There are a couple of easy ways around this.  One is to use storeAsBinary, which automatically stores your keys and/or values as byte arrays, satisfying this limitation.  Another is to access the cache through a remote client such as Hot Rod, in which case the data is already stored on the server in serialized (byte[]) form.  Note that compatibility mode prevents the data being kept as byte arrays, so it cannot be used together with memory based eviction.
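As a rough sketch of the first workaround, assuming the Infinispan 8 configuration builder API (the storeAsBinary() builder call is from that API; everything else is illustrative):

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class StoreAsBinaryExample {
   public static void main(String[] args) {
      // Keep keys and values in serialized (byte[]) form so their size can be counted.
      Configuration cfg = new ConfigurationBuilder()
            .storeAsBinary().enable()
            .build();
      System.out.println("storeAsBinary enabled? " + cfg.storeAsBinary().enabled());
   }
}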

Eviction Type limitation

Due to the complexity of LIRS, memory based eviction is only supported with the LRU strategy at this time (see the available eviction types here).  This could be enhanced at a later point, but is not currently planned.

How to enable

You can enable memory based eviction through either programmatic or declarative configuration.  Note that the eviction size value now accepts a long (limited to 2^48), which directly helps memory based eviction when users want caches larger than 2 GB.

Programmatic
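A minimal sketch of a programmatic setup, assuming the Infinispan 8 ConfigurationBuilder API with its EvictionType.MEMORY and EvictionStrategy.LRU enums; the cache name and the 1 GB limit are only illustrative:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.eviction.EvictionStrategy;
import org.infinispan.eviction.EvictionType;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class MemoryEvictionExample {
   public static void main(String[] args) {
      // Bound the data container by memory used rather than by entry count.
      Configuration cfg = new ConfigurationBuilder()
            .eviction()
               .strategy(EvictionStrategy.LRU)   // LIRS is not supported with memory based eviction
               .type(EvictionType.MEMORY)        // count bytes instead of entries
               .size(1_000_000_000L)             // ~1 GB; long sizes are supported up to 2^48
            .build();

      EmbeddedCacheManager manager = new DefaultCacheManager();
      manager.defineConfiguration("memory-bounded", cfg);
      manager.getCache("memory-bounded").put("key", "value"); // String keys/values are countable
      manager.stop();
   }
}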

Declarative 


Supported JVMs

This feature was tested and written specifically for the Oracle and OpenJDK JVMs.  In testing, these JVMs showed memory accuracy within 1% of the desired value.  Other JVMs may show incorrect values.

The algorithm takes into account JVM options such as compressed pointers and whether the JVM is 32 bit or 64 bit.  Keep in mind this counts only the data container and does not account for additional overhead such as created threads or other runtime objects.

Other JVMs, such as the IBM JVM, are not handled; the IBM JVM was briefly tested and produced estimates off by more than 10% of the desired amount.  Support for other JVMs can be added later as interest is shown.

Closing Notes

I hope this feature helps people to better handle their memory constraints while using Infinispan!  Let us know if you have any feedback or concerns.

Cheers!

 - Will

Posted by Unknown on 2015-09-25
Tags: eviction memory

Tuesday, 02 July 2013

Lower memory overhead in Infinispan 5.3.0.Final

Infinispan users worried about memory consumption should upgrade to Infinispan 5.3.0.Final as soon as possible: as part of the work we’ve done to support storing byte arrays without wrappers, and of the development of the interoperability mode, we have also been reducing Infinispan’s memory overhead.

To measure overhead, we’ve used Martin Gencur’s excellent memory consumption tests. The results for entries with 512 bytes are:

Infinispan memory overhead, used in library mode:

Infinispan 5.2.0.Final: ~151 bytes
Infinispan 5.3.0.Final: ~135 bytes
Memory consumption reduction: ~12%

Infinispan memory overhead, for the Hot Rod server:

Infinispan 5.2.0.Final: ~174 bytes
Infinispan 5.3.0.Final: ~151 bytes
Memory consumption reduction: ~15%

Infinispan memory overhead, for the REST server:

Infinispan 5.2.0.Final: ~208 bytes
Infinispan 5.3.0.Final: ~172 bytes
Memory consumption reduction: ~21%

Infinispan memory overhead, for the Memcached server:

Infinispan 5.2.0.Final: ~184 bytes
Infinispan 5.3.0.Final: ~180 bytes
Memory consumption reduction: ~2%

This is great news for the Infinispan community, but our effort doesn’t end here.  We’ll be working on further improvements in upcoming releases to bring the overhead down even further.

Cheers,

Galder

Posted by Galder Zamarreño on 2013-07-02
Tags: overhead memory performance

Monday, 20 May 2013

Storing arrays in Infinispan 5.3 without wrapper objects!

As we head towards the latter part of the Infinispan 5.3 series, we’re doing a series of blog posts providing more detailed information about some of the key features in this release.

As part of Infinispan 5.3.0.Beta1, we added the ability to store data directly in Infinispan that previously would have required a custom wrapper object, e.g. arrays. Infinispan supports storing these types of objects by allowing a custom Equivalence function to be configured for keys and/or values.

This is a less cumbersome approach that enables objects requiring custom equals/hashCode implementations to be stored without incurring an extra cost per cache entry. We’ve already been using it internally to store Hot Rod, REST and Memcached data, where keys and/or values can be byte arrays, and we’ve seen some nice improvements in memory consumption.
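To make the configuration step concrete, here is a minimal sketch assuming the 5.3 dataContainer() builder and the bundled ByteArrayEquivalence helper (exact package names may differ slightly between releases); the cache name and the sample data are illustrative:

import org.infinispan.commons.equivalence.ByteArrayEquivalence;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class ByteArrayCacheExample {
   public static void main(String[] args) {
      // Compare keys and values by array contents rather than object identity,
      // so byte[] can be stored directly without a wrapper object.
      Configuration cfg = new ConfigurationBuilder()
            .dataContainer()
               .keyEquivalence(ByteArrayEquivalence.INSTANCE)
               .valueEquivalence(ByteArrayEquivalence.INSTANCE)
            .build();

      EmbeddedCacheManager manager = new DefaultCacheManager();
      manager.defineConfiguration("byte-array-cache", cfg);

      manager.getCache("byte-array-cache").put(new byte[]{1, 2, 3}, new byte[]{4, 5, 6});
      // A lookup with an equal-but-different array instance now succeeds.
      manager.getCache("byte-array-cache").get(new byte[]{1, 2, 3});
      manager.stop();
   }
}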

A nice side effect of being able to store byte arrays natively is that it makes sharing data between multiple endpoints less cumbersome since you’re now dealing with byte arrays directly instead of having to wrap/unwrap the byte arrays. More on this topic very shortly.

Full details on how to implement and configure these new Equivalence functions can be found in the Infinispan community documentation. To give this a go, make sure you download the latest Infinispan 5.3 release.

Cheers, Galder

Posted by Galder Zamarreño on 2013-05-20
Tags: equivalence memory

Saturday, 12 January 2013

Infinispan memory overhead

Have you ever wondered how much Java heap memory is actually consumed when data is stored in Infinispan cache? Let’s look at some numbers obtained through real measurement.

The strategy was the following:

1) Start Infinispan server in local mode (only one server instance, eviction disabled)
2) Keep calling full garbage collection (via JMX or directly via System.gc() when Infinispan is deployed as a library) until the difference in consumed memory by the running server gets under 100kB between two consecutive runs of GC
3) Load the cache with 100MB of data via respective client (or directly store in the cache when Infinispan is deployed as a library)
4) Keep calling the GC until the used memory is stabilised
5) Measure the difference between the final values of consumed memory after the first and second cycle of GC runs
6) Repeat steps 3, 4 and 5 four times to get an average value (first iteration ignored)
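To illustrate the GC stabilisation described in steps 2 and 4, here is a minimal, self-contained sketch in plain Java; the 100kB threshold matches the description above, while the sleep interval and the heap sampling via Runtime are assumptions (the actual test reads the values from the verbose GC log):

public class HeapMeasurement {

   /** Repeatedly trigger a full GC until used heap stabilises, then return it in bytes. */
   static long stableUsedHeap() throws InterruptedException {
      Runtime rt = Runtime.getRuntime();
      long previous = Long.MAX_VALUE;
      while (true) {
         System.gc();
         Thread.sleep(500); // give the collector a moment to finish
         long used = rt.totalMemory() - rt.freeMemory();
         if (Math.abs(previous - used) < 100 * 1024) { // difference under 100kB
            return used;
         }
         previous = used;
      }
   }

   public static void main(String[] args) throws InterruptedException {
      long before = stableUsedHeap();
      // ... load the cache with 100MB of data here ...
      long after = stableUsedHeap();
      System.out.printf("Consumed by cached data: %d kB%n", (after - before) / 1024);
   }
}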

The amount of consumed memory was obtained from a verbose GC log (related JVM options: -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log)

The test output looks like this: https://gist.github.com/4512589

The operating system (Ubuntu) as well as the JVM (Oracle JDK 1.6) were 64-bit, and the Infinispan version was 5.2.0.Beta6. Keys were kept intentionally small (10-character Strings) with byte arrays as values. The target entry size is the sum of the key size and the value size.

Memory overhead of Infinispan accessed through clients


HotRod client

entry size → overall memory

512B       → 137144kB

1kB        → 120184kB

10kB       → 104145kB

1MB        → 102424kB

So how much additional memory is consumed on top of each entry?

entry size/actual memory per entry → overhead per entry

512B/686B                → ~174B

1kB(1024B)/1202B         → ~178B

10kB(10240B)/10414B      → ~176B

1MB(1048576B)/1048821B   → ~245B

Memcached client (text protocol, SpyMemcached client)

entry size → overall memory

512B       → 139197kB

1kB        → 120517kB

10kB       → 104226kB

1MB        → N/A (SpyMemcached allows max. 20kB per entry)

entry size/actual memory per entry → overhead per entry

512B/696B               → ~184B

1kB(1024B)/1205B        → ~181B

10kB(10240B)/10422B     → ~182B


REST client (Content-Type: application/octet-stream)

entry size → overall memory

512B       → 143998kB

1kB        → 122909kB

10kB       → 104466kB

1MB        → 102412kB

entry size/actual memory per entry → overhead per entry

512B/720B               → ~208B

1kB(1024B)/1229B        → ~205B

10kB(10240B)/10446B     → ~206B

1MB(1048576B)/1048698B  → ~123B

The memory overhead for individual entries seems to be more or less constant across different cache entry sizes.

Memory overhead of Infinispan deployed as a library

Infinispan was deployed to JBoss Application Server 7 using Arquillian.

entry size → overall memory/overall with storeAsBinary

512B       → 132736kB / 132733kB

1kB        → 117568kB / 117568kB

10kB       → 103953kB / 103950kB

1MB        → 102414kB / 102415kB

There was almost no difference in overall consumed memory when enabling or disabling storeAsBinary.

entry size/actual memory per entry→ overhead per entry (w/o storeAsBinary)

512B/663B               → ~151B

1kB(1024B)/1175B        → ~151B

10kB(10240B)/10395B     → ~155B

1MB(1048576B)/1048719B  → ~143B

As you can see, the overhead per entry is roughly constant across different entry sizes, at about 151 bytes.

Conclusion

The memory overhead is slightly more than 150 bytes per entry when storing data into the cache locally. When accessing the cache via remote clients, the memory overhead is a little bit higher and ranges from ~170 to ~250 bytes, depending on remote client type and cache entry size. If we ignored the statistics for 1MB entries, which could be affected by a small number of entries (100) stored in the cache, the range would have been even narrower.

Cheers, Martin

Posted by Martin Genčúr on 2013-01-12
Tags: overhead memory performance

Thursday, 22 December 2011

Startup performance

One of the things I’ve done recently was to benchmark how quickly Infinispan starts up.  Specifically looking at LOCAL mode (where you don’t have the delays of opening sockets and discovery protocols you see in clustered mode), I wrote up a very simple test to start up 2000 caches in a loop, using the same cache manager.
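To give an idea of the shape of such a test, here is a minimal sketch of that kind of loop, assuming the embedded DefaultCacheManager API; the cache names, the warm-up put and the timing are illustrative, not the actual benchmark (which is linked further down):

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class StartupBenchmarkSketch {
   public static void main(String[] args) {
      EmbeddedCacheManager manager = new DefaultCacheManager();
      long start = System.nanoTime();
      // Start 2000 caches from the same cache manager, much like a Hibernate
      // 2nd level cache creating one cache per entity type.
      for (int i = 0; i < 2000; i++) {
         Cache<Object, Object> cache = manager.getCache("cache-" + i);
         cache.put("warm-up", i); // touch the cache so it is fully started
      }
      long elapsedMs = (System.nanoTime() - start) / 1000000;
      System.out.println("Created 2000 caches in " + elapsedMs + " ms");
      manager.stop();
   }
}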

This is a pretty valid use case, since when used as a non-clustered 2nd level cache in Hibernate, a separate cache instance is created per entity type, and in the past this has become somewhat of a bottleneck.

In this test, I compared Infinispan 5.0.1.Final, 5.1.0.CR1 and 5.1.0.CR2.  5.1.0 is significantly quicker, but I used this test (and subsequent profiling) to commit a couple of interesting changes in 5.1.0.CR2, which improved things even more, both in CPU performance and in memory footprint.

Essentially, 5.1.0.CR1 made use of Jandex to perform annotation scanning of internal components at build time, avoiding expensive reflection calls to determine component dependencies and lifecycle at runtime.  5.1.0.CR2 takes this concept a step further: now we don’t just cache annotation lookups at build time, but entire dependency graphs.  The determination and ordering of lifecycle methods is done at build time too, again making startup significantly quicker while offering a much tighter memory footprint.

Enough talk.  Here is the test used, and here are the performance numbers from my laptop, a 2010 MacBook Pro with an i5 CPU.

Multiverse:InfinispanStartupBenchmark manik [master]$ ./bench.sh
---- Starting benchmark ---
  Please standby ...

Using Infinispan 5.0.1.FINAL (JMX enabled? false)
    Created 2000 caches in 10.9 seconds and consumed 172.32 Mb of memory.

Using Infinispan 5.0.1.FINAL (JMX enabled? true)
    Created 2000 caches in 56.18 seconds and consumed 315.21 Mb of memory.

Using Infinispan 5.1.0.CR1 (JMX enabled? false)
    Created 2000 caches in 7.13 seconds and consumed 157.5 Mb of memory.

Using Infinispan 5.1.0.CR1 (JMX enabled? true)
    Created 2000 caches in 34.9 seconds and consumed 243.33 Mb of memory.

Using Infinispan 5.1.0.CR2 (JMX enabled? false)
    Created 2000 caches in 3.18 seconds and consumed 142.2 Mb of memory.

Using Infinispan 5.1.0.CR2 (JMX enabled? true)
    Created 2000 caches in 17.62 seconds and consumed 176.13 Mb of memory.

A whopping 3.5 times faster, and significantly more memory-efficient, especially when JMX reporting is enabled.  :-)

Enjoy! Manik

Posted by Manik Surtani on 2011-12-22
Tags: benchmarks cpu memory performance
