Tuesday, 02 July 2013

Lower memory overhead in Infinispan 5.3.0.Final

Infinispan users worried about memory consumption should upgrade to Infinispan 5.3.0.Final as soon as possible. As part of the work to support storing byte arrays without wrappers and the development of the interoperability mode, we have also been reducing Infinispan’s memory overhead.

To measure overhead, we’ve used Martin Gencur’s excellent memory consumption tests. The results for 512-byte entries are:

Infinispan memory overhead in library mode:

Infinispan 5.2.0.Final: ~151 bytes
Infinispan 5.3.0.Final: ~135 bytes
Memory consumption reduction: ~12%

Infinispan memory overhead for the Hot Rod server:

Infinispan 5.2.0.Final: ~174 bytes
Infinispan 5.3.0.Final: ~151 bytes
Memory consumption reduction: ~15%

Infinispan memory overhead for the REST server:

Infinispan 5.2.0.Final: ~208 bytes
Infinispan 5.3.0.Final: ~172 bytes
Memory consumption reduction: ~21%

Infinispan memory overhead for the Memcached server:

Infinispan 5.2.0.Final: ~184 bytes
Infinispan 5.3.0.Final: ~180 bytes
Memory consumption reduction: ~2%

This is great news for the Infinispan community, but our effort doesn’t end here. We’ll be working on further improvements in future releases to bring the overhead down even further.

Cheers,

Galder

Posted by Galder Zamarreño on 2013-07-02
Tags: overhead memory performance

Saturday, 12 January 2013

Infinispan memory overhead

Have you ever wondered how much Java heap memory is actually consumed when data is stored in Infinispan cache? Let’s look at some numbers obtained through real measurement.

The strategy was the following:

1) Start the Infinispan server in local mode (only one server instance, eviction disabled).
2) Keep calling full garbage collection (via JMX, or directly via System.gc() when Infinispan is deployed as a library) until the difference in memory consumed by the running server is under 100kB between two consecutive GC runs (a sketch of this step follows below).
3) Load the cache with 100MB of data via the respective client (or store it directly in the cache when Infinispan is deployed as a library).
4) Keep calling GC until the used memory stabilises.
5) Measure the difference between the final values of consumed memory after the first and second cycle of GC runs.
6) Repeat steps 3, 4 and 5 four times to get an average value (the first iteration is ignored).

The amount of consumed memory was obtained from a verbose GC log (related JVM options: -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log).
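For illustration only, here is a minimal sketch of the GC-stabilisation step (steps 2 and 4) as it might look in library mode. It measures used heap via Runtime instead of parsing the GC log, so it is an approximation of the procedure rather than the actual test code:

// Minimal sketch (assumption): measure used heap in-process via Runtime;
// the real tests read the figures from the verbose GC log instead.
public final class MemoryProbe {

    private static final long STABLE_THRESHOLD = 100 * 1024; // 100kB, as in step 2

    // Keep forcing full GCs until used heap changes by less than 100kB between runs.
    public static long stabilisedUsedMemory() throws InterruptedException {
        long previous = usedMemory();
        while (true) {
            System.gc();
            Thread.sleep(500); // give the collector a moment to finish
            long current = usedMemory();
            if (Math.abs(previous - current) < STABLE_THRESHOLD) {
                return current;
            }
            previous = current;
        }
    }

    private static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}

The per-entry overhead is then derived from the difference between the stabilised values before and after loading the data.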

The test output looks like this: https://gist.github.com/4512589

The operating system (Ubuntu) and the JVM (Oracle JDK 1.6) were both 64-bit. The Infinispan version was 5.2.0.Beta6. Keys were intentionally kept small (10-character Strings), with byte arrays as values. The target entry size is the sum of the key size and the value size.

Memory overhead of Infinispan accessed through clients


HotRod client

entry size → overall memory
512B       → 137144kB
1kB        → 120184kB
10kB       → 104145kB
1MB        → 102424kB

So how much additional memory is consumed on top of each entry? The per-entry memory below is the overall memory divided by the number of stored entries (100MB of payload divided by the entry size), and the overhead is that figure minus the entry size.

entry size/actual memory per entry → overhead per entry
512B/686B                → ~174B
1kB(1024B)/1202B         → ~178B
10kB(10240B)/10414B      → ~176B
1MB(1048576B)/1048821B   → ~245B
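For reference, the Hot Rod load step (step 3) might look roughly like the following. The server address, cache, and key scheme are assumptions made for illustration, not the exact test code:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

// Sketch: load ~100MB of byte[] values over Hot Rod (assumptions: server on
// localhost:11222, default cache, 10-character String keys as described above).
public class HotRodLoad {
    public static void main(String[] args) {
        int entrySize = 512;                 // target entry size = key size + value size
        int keySize = 10;
        int valueSize = entrySize - keySize;
        int entries = (100 * 1024 * 1024) / entrySize;

        RemoteCacheManager rcm = new RemoteCacheManager(); // defaults to localhost:11222
        RemoteCache<String, byte[]> cache = rcm.getCache();
        for (int i = 0; i < entries; i++) {
            String key = String.format("key%07d", i);      // 10-character key
            cache.put(key, new byte[valueSize]);
        }
        rcm.stop();
    }
}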

Memcached client (text protocol, SpyMemcached client)

entry size → overall memory
512B       → 139197kB
1kB        → 120517kB
10kB       → 104226kB
1MB        → N/A (SpyMemcached allows max. 20kB per entry)

entry size/actual memory per entry → overhead per entry
512B/696B               → ~184B
1kB(1024B)/1205B        → ~181B
10kB(10240B)/10422B     → ~182B
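The Memcached load is analogous; here is a minimal SpyMemcached sketch (the port and key scheme are again assumptions for illustration):

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

// Sketch: load data through the Memcached text protocol with SpyMemcached
// (assumptions: Infinispan's Memcached server on localhost:11211, 10-character keys).
public class MemcachedLoad {
    public static void main(String[] args) throws Exception {
        int entrySize = 512;
        int keySize = 10;
        int valueSize = entrySize - keySize;
        int entries = (100 * 1024 * 1024) / entrySize;

        MemcachedClient client = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        for (int i = 0; i < entries; i++) {
            client.set(String.format("key%07d", i), 0, new byte[valueSize]).get(); // wait for each write
        }
        client.shutdown();
    }
}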


REST client (Content-Type: application/octet-stream)

entry size → overall memory
512B       → 143998kB
1kB        → 122909kB
10kB       → 104466kB
1MB        → 102412kB

entry size/actual memory per entry → overhead per entry
512B/720B               → ~208B
1kB(1024B)/1229B        → ~205B
10kB(10240B)/10446B     → ~206B
1MB(1048576B)/1048698B  → ~123B
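For the REST case, each entry is written with an HTTP PUT and Content-Type: application/octet-stream. A minimal sketch of a single write follows; the host, port, and cache path are assumptions for illustration and depend on how the REST server is deployed:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: one REST PUT with Content-Type: application/octet-stream
// (assumptions: REST server on localhost:8080, a cache exposed under /rest/default).
public class RestPut {
    public static void main(String[] args) throws Exception {
        byte[] value = new byte[502]; // value size for a 512B entry with a 10-character key
        URL url = new URL("http://localhost:8080/rest/default/key0000001");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(value);
        }
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}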

The memory overhead for individual entries seems to be more or less constant across different cache entry sizes.

Memory overhead of Infinispan deployed as a library

Infinispan was deployed to JBoss Application Server 7 using Arquillian.

entry size → overall memory / overall memory with storeAsBinary
512B       → 132736kB / 132733kB
1kB        → 117568kB / 117568kB
10kB       → 103953kB / 103950kB
1MB        → 102414kB / 102415kB

There was almost no difference in overall consumed memory when enabling or disabling storeAsBinary.
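For reference, storeAsBinary is a per-cache configuration flag. Below is a minimal embedded-mode sketch of enabling it (the actual measurements used Infinispan deployed in JBoss AS 7 via Arquillian, not this standalone setup):

import org.infinispan.Cache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

// Sketch: a local cache with storeAsBinary enabled, i.e. keys and values are kept
// in serialized (binary) form inside the data container.
public class StoreAsBinaryExample {
    public static void main(String[] args) {
        Configuration cfg = new ConfigurationBuilder()
              .storeAsBinary().enable()
              .build();
        DefaultCacheManager cm = new DefaultCacheManager(cfg);
        Cache<String, byte[]> cache = cm.getCache();
        cache.put("key0000001", new byte[502]);
        cm.stop();
    }
}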

entry size/actual memory per entry → overhead per entry (w/o storeAsBinary)
512B/663B               → ~151B
1kB(1024B)/1175B        → ~151B
10kB(10240B)/10395B     → ~155B
1MB(1048576B)/1048719B  → ~143B

As you can see, the overhead per entry is roughly constant across entry sizes, at around 151 bytes.

Conclusion

The memory overhead is slightly more than 150 bytes per entry when data is stored in the cache locally. When the cache is accessed via remote clients, the overhead is a little higher, ranging from ~170 to ~250 bytes depending on the client type and the cache entry size. If we ignore the figures for 1MB entries, which may be skewed by the small number of entries (100) stored in the cache, the range is even narrower.

Cheers, Martin

Posted by Martin Genčúr on 2013-01-12
Tags: overhead memory performance
