Class AbstractDelegatingCacheStream<R>
- All Implemented Interfaces:
AutoCloseable, BaseStream<R,Stream<R>>, Stream<R>, BaseCacheStream<R,Stream<R>>, CacheStream<R>
Delegating stream that forwards Stream and BaseStream operations to the underlying stream; BaseCacheStream methods, however, are defined by this delegating class. -
Nested Class Summary
Nested classes/interfaces inherited from interface org.infinispan.BaseCacheStream
BaseCacheStream.SegmentCompletionListener
Nested classes/interfaces inherited from interface java.util.stream.Stream
Stream.Builder<T> -
Field Summary
Fields -
Constructor Summary
Constructors -
Method Summary
void close()
<R1> R1 collect(Supplier<R1> supplier, BiConsumer<R1, ? super R> accumulator, BiConsumer<R1, R1> combiner)
long count()
disableRehashAware() - Disables tracking of rehash events that could occur to the underlying cache.
distinct()
distributedBatchSize(int batchSize) - Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache to back this stream.
filterKeys(Set<?> keys) - Filters which entries are returned by only returning ones that map to the given key.
filterKeySegments(Set<Integer> segments) - Filters which entries are returned by what segment they are present in.
filterKeySegments(IntSet segments) - Filters which entries are returned by what segment they are present in.
findAny()
flatMapToDouble(Function<? super R, ? extends DoubleStream> mapper)
flatMapToInt(Function<? super R, ? extends IntStream> mapper)
flatMapToLong(Function<? super R, ? extends LongStream> mapper)
<K,V> void forEach(BiConsumer<Cache<K, V>, ? super R> action) - Same as CacheStream.forEach(Consumer) except that it takes a BiConsumer that provides access to the underlying Cache that is backing this stream.
void forEachOrdered(Consumer<? super R> action)
iterator()
limit(long maxSize)
mapToDouble(ToDoubleFunction<? super R> mapper)
mapToInt(ToIntFunction<? super R> mapper)
mapToLong(ToLongFunction<? super R> mapper)
max(Comparator<? super R> comparator)
min(Comparator<? super R> comparator)
parallel()
parallelDistribution() - This would enable sending requests to all other remote nodes when a terminal operator is performed.
reduce(BinaryOperator<R> accumulator)
reduce(R identity, BinaryOperator<R> accumulator)
<U> U reduce(U identity, BiFunction<U, ? super R, U> accumulator, BinaryOperator<U> combiner)
segmentCompletionListener(BaseCacheStream.SegmentCompletionListener listener) - Allows registration of a segment completion listener that is notified when a segment has completed processing.
sequentialDistribution() - This would disable sending requests to all other remote nodes compared to one at a time.
skip(long n)
sorted()
sorted(Comparator<? super R> comparator)
timeout(long timeout, TimeUnit unit) - Sets a given time to wait for a remote operation to respond by.
Object[] toArray()
<A> A[] toArray(IntFunction<A[]> generator)
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
Field Details
-
underlyingStream
-
-
Constructor Details
-
AbstractDelegatingCacheStream
-
-
Method Details
-
mapToInt
Description copied from interface: CacheStream -
mapToLong
Description copied from interface: CacheStream -
mapToDouble
Description copied from interface: CacheStream - Specified by:
mapToDouble in interface CacheStream<R> - Specified by:
mapToDouble in interface Stream<R> - Parameters:
mapper - a non-interfering, stateless function to apply to each element - Returns:
- the new double cache stream
-
flatMapToInt
Description copied from interface: CacheStream - Specified by:
flatMapToInt in interface CacheStream<R> - Specified by:
flatMapToInt in interface Stream<R> - Returns:
- the new cache stream
-
flatMapToLong
Description copied from interface: CacheStream - Specified by:
flatMapToLong in interface CacheStream<R> - Specified by:
flatMapToLong in interface Stream<R> - Returns:
- the new cache stream
-
flatMapToDouble
Description copied from interface: CacheStream - Specified by:
flatMapToDouble in interface CacheStream<R> - Specified by:
flatMapToDouble in interface Stream<R> - Returns:
- the new cache stream
-
sequentialDistribution
Description copied from interface: CacheStream
This disables sending requests to all other remote nodes in parallel, sending to one node at a time instead. This can reduce memory pressure on the originator node at the cost of performance. Parallel distribution is enabled by default except for
CacheStream.iterator() and CacheStream.spliterator() - Specified by:
sequentialDistribution in interface BaseCacheStream<R,Stream<R>> - Specified by:
sequentialDistribution in interface CacheStream<R> - Returns:
- a stream with parallel distribution disabled.
-
parallelDistribution
Description copied from interface: BaseCacheStream
This enables sending requests to all other remote nodes concurrently when a terminal operator is performed. This requires additional overhead as results must be processed concurrently from various nodes, but it should perform faster in the majority of cases. Parallel distribution is enabled by default except for
CacheStream.iterator() and CacheStream.spliterator() - Specified by:
parallelDistribution in interface BaseCacheStream<R,Stream<R>> - Specified by:
parallelDistribution in interface CacheStream<R> - Returns:
- a stream with parallel distribution enabled.
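The following is a minimal sketch of choosing between the two distribution modes before a terminal operation. It assumes an already running distributed Cache<String, String> supplied by the caller; the class and method names below are illustrative, not part of this API.

import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class DistributionModeExample {
   // Assumes an already-started distributed Cache<String, String> is supplied by the caller.
   public static long countSequentially(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         // Ask one remote node at a time instead of all nodes concurrently,
         // trading throughput for lower memory pressure on the originator.
         return stream.sequentialDistribution().count();
      }
   }

   public static long countInParallel(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         // Parallel distribution is already the default for most terminal operations;
         // calling it explicitly simply documents the intent.
         return stream.parallelDistribution().count();
      }
   }
}

Sequential distribution mainly matters for large result sets, where processing one node at a time keeps the originator's memory usage bounded.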
-
filterKeySegments
Description copied from interface: CacheStream
Filters which entries are returned by what segment they are present in. This method can be substantially more efficient than using a regular CacheStream.filter(Predicate) method as this can control what nodes are asked for data and what entries are read from the underlying CacheStore if present. - Specified by:
filterKeySegments in interface BaseCacheStream<R,Stream<R>> - Specified by:
filterKeySegments in interface CacheStream<R> - Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored. - Returns:
- a stream with the segments filtered.
-
filterKeySegments
Description copied from interface: CacheStream
Filters which entries are returned by what segment they are present in. This method can be substantially more efficient than using a regular CacheStream.filter(Predicate) method as this can control what nodes are asked for data and what entries are read from the underlying CacheStore if present. - Specified by:
filterKeySegments in interface BaseCacheStream<R,Stream<R>> - Specified by:
filterKeySegments in interface CacheStream<R> - Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored. - Returns:
- a stream with the segments filtered.
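A brief sketch of segment filtering, assuming a distributed Cache<String, String>; the segment ids are arbitrary example values and the Set<Integer> overload documented above is used.

import java.util.Map;
import java.util.Set;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class FilterKeySegmentsExample {
   // Counts only the entries that live in the given segments. Valid segment ids
   // depend on the number of segments configured for the cache.
   public static long countInSegments(Cache<String, String> cache) {
      Set<Integer> segments = Set.of(0, 1, 2);
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         // Only the nodes owning these segments are queried, and only data in
         // these segments is read from any configured store.
         return stream.filterKeySegments(segments).count();
      }
   }
}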
-
filterKeys
Description copied from interface: CacheStream
Filters which entries are returned by only returning ones that map to the given key. This method will be faster than a regular CacheStream.filter(Predicate) if the filter is holding references to the same keys. - Specified by:
filterKeys in interface BaseCacheStream<R,Stream<R>> - Specified by:
filterKeys in interface CacheStream<R> - Parameters:
keys - The keys that this stream will only operate on. - Returns:
- a stream with the keys filtered.
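A short sketch of filterKeys, assuming a Cache<String, String>; the key values are illustrative only.

import java.util.Map;
import java.util.Set;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class FilterKeysExample {
   // Reads only the entries mapped to the named keys and counts how many exist.
   public static void printSelected(Cache<String, String> cache) {
      Set<String> keys = Set.of("user-1", "user-2");
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         long matches = stream.filterKeys(keys).count();
         System.out.println("Entries found for selected keys: " + matches);
      }
   }
}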
-
distributedBatchSize
Description copied from interface: CacheStream
Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache to back this stream. This value is ignored when terminal operators that don't track keys are used. Key tracking terminal operators are CacheStream.iterator(), CacheStream.spliterator(), and CacheStream.forEach(Consumer). Please see those methods for additional information on how this value may affect them. This value may also be used in the case of a terminal operator that doesn't track keys if an intermediate operation is performed that requires bringing keys locally to do computations. Examples of such intermediate operations are
CacheStream.sorted(), CacheStream.sorted(Comparator), CacheStream.distinct(), CacheStream.limit(long), and CacheStream.skip(long). This value is always ignored when this stream is backed by a cache that is not distributed, as all values are already local.
- Specified by:
distributedBatchSize in interface BaseCacheStream<R,Stream<R>> - Specified by:
distributedBatchSize in interface CacheStream<R> - Parameters:
batchSize - The size of each batch. This defaults to the state transfer chunk size. - Returns:
- a stream with the batch size updated
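A sketch of combining distributedBatchSize with a key-tracking iteration, assuming a distributed Cache<String, String>; the batch size of 64 is an arbitrary example value.

import java.util.Iterator;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class BatchSizeExample {
   // Pulls entries from remote nodes in batches of 64 while iterating locally.
   public static void iterateInSmallBatches(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream =
               cache.entrySet().stream().distributedBatchSize(64)) {
         Iterator<Map.Entry<String, String>> it = stream.iterator();
         while (it.hasNext()) {
            Map.Entry<String, String> entry = it.next();
            System.out.println(entry.getKey() + " = " + entry.getValue());
         }
      }
   }
}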
-
segmentCompletionListener
public AbstractDelegatingCacheStream<R> segmentCompletionListener(BaseCacheStream.SegmentCompletionListener listener)
Description copied from interface: CacheStream
Allows registration of a segment completion listener that is notified when a segment has completed processing. If the terminal operator has a short circuit, this listener may never be called. This method is designed for the sole purpose of use with
CacheStream.iterator() to allow a user to track completion of segments as they are returned from the iterator. Behavior of other methods is not specified. Please see CacheStream.iterator() for more information. Multiple listeners may be registered upon multiple invocations of this method. The ordering of notified listeners is not specified.
This is only used if this stream did not invoke
BaseCacheStream.disableRehashAware() and has no flat map based operations. If either of those is the case, no segments will be notified. - Specified by:
segmentCompletionListener in interface BaseCacheStream<R,Stream<R>> - Specified by:
segmentCompletionListener in interface CacheStream<R> - Parameters:
listener - The listener that will be called back as segments are completed. - Returns:
- a stream with the listener registered.
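A sketch of registering a completion listener before iterating, assuming a distributed Cache<String, String>. The single-argument lambda assumes SegmentCompletionListener is a functional interface whose callback receives the completed segments; verify the exact callback signature against the BaseCacheStream.SegmentCompletionListener interface for the Infinispan version in use.

import java.util.Iterator;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class SegmentListenerExample {
   // Registers a listener, then drains the iterator; segments are reported as
   // completed while the iterator advances (assumed callback shape, see above).
   public static void iterateAndTrackSegments(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()
               .segmentCompletionListener(completed ->
                     System.out.println("Completed segments: " + completed))) {
         Iterator<Map.Entry<String, String>> it = stream.iterator();
         it.forEachRemaining(entry -> System.out.println(entry.getKey()));
      }
   }
}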
-
disableRehashAware
Description copied from interface: CacheStream
Disables tracking of rehash events that could occur to the underlying cache. If a rehash event occurs while a terminal operation is being performed, it is possible for some values that are in the cache to not be found. Note that you will never have an entry duplicated when rehash awareness is disabled, only lost values. Most terminal operations will run faster with rehash awareness disabled even without a rehash occurring. However, if a rehash occurs with this disabled, be prepared to possibly receive only a subset of values.
- Specified by:
disableRehashAware in interface BaseCacheStream<R,Stream<R>> - Specified by:
disableRehashAware in interface CacheStream<R> - Returns:
- a stream with rehash awareness disabled.
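A minimal sketch of trading consistency for speed with disableRehashAware, assuming a Cache<String, String> and a topology that is expected to stay stable while the operation runs.

import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class DisableRehashAwareExample {
   // Faster counting when the topology is known to be stable; if a rehash does
   // happen while this runs, some entries may be missed (never duplicated).
   public static long fastCount(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         return stream.disableRehashAware().count();
      }
   }
}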
-
timeout
Description copied from interface: CacheStream
Sets a given time to wait for a remote operation to respond by. This timeout does nothing if the terminal operation does not go remote. If a timeout does occur then a
TimeoutException is thrown from the terminal operation invoking thread or on the next call to the Iterator or Spliterator. Note that if a rehash occurs, this timeout value is reset for the subsequent retry if rehash aware is enabled.
- Specified by:
timeout in interface BaseCacheStream<R,Stream<R>> - Specified by:
timeout in interface CacheStream<R> - Parameters:
timeout - the maximum time to wait; unit - the time unit of the timeout argument - Returns:
- a stream with the timeout set
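A sketch of bounding a remote terminal operation with a timeout, assuming a distributed Cache<String, String>; the 30-second value is illustrative.

import java.util.Map;
import java.util.concurrent.TimeUnit;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class TimeoutExample {
   // Fails the terminal operation if a remote node takes longer than 30 seconds
   // to respond; a TimeoutException is thrown from the invoking thread.
   public static long countWithTimeout(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         return stream.timeout(30, TimeUnit.SECONDS).count();
      }
   }
}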
-
forEach
Description copied from interface: CacheStream
This operation is performed remotely on the node that is the primary owner for the key tied to the entry(s) in this stream.
NOTE: This method, while being rehash aware, has the lowest consistency of all of the operators. This operation will be performed on every entry at least once in the cluster, as long as the originator doesn't go down while it is being performed. This is due to how the distributed action is performed. Essentially the
CacheStream.distributedBatchSize(int) value controls how many elements are processed per node at a time when rehash is enabled. After those are complete, the keys are sent to the originator to confirm that they were processed. If that node goes down during/before the response, those keys will be processed a second time. It is possible to have the cache local to each node injected into this instance if the provided Consumer also implements the
CacheAware interface. This method will be invoked before the consumer's accept() method is invoked. This method is run distributed by default with a distributed backing cache. However, if you wish for this operation to run locally, you can use
stream().iterator().forEachRemaining(action) for a single-threaded variant. If you wish to have a parallel variant you can use StreamSupport.stream(Spliterator, boolean), passing in the spliterator from the stream. In either case, remember you must close the stream after you are done processing the iterator or spliterator, as sketched below.
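A sketch of the locally run variants mentioned above, assuming a Cache<String, String>: the iterator form is single-threaded on the originator, while wrapping the spliterator with StreamSupport gives a parallel local variant; both close the stream via try-with-resources.

import java.util.Map;
import java.util.stream.StreamSupport;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class LocalForEachExample {
   // Single-threaded, originator-local processing via the iterator.
   public static void forEachLocally(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         stream.iterator().forEachRemaining(entry ->
               System.out.println(entry.getKey() + " -> " + entry.getValue()));
      }
   }

   // Parallel, originator-local processing by wrapping the stream's spliterator.
   public static void forEachLocallyInParallel(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         StreamSupport.stream(stream.spliterator(), true)
               .forEach(entry -> System.out.println(entry.getKey()));
      }
   }
}
-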
forEach
Description copied from interface: CacheStream
Same as CacheStream.forEach(Consumer) except that it takes a BiConsumer that provides access to the underlying Cache that is backing this stream. Note that the
CacheAware interface is not supported for injection using this method, as the cache is provided in the consumer directly. - Specified by:
forEach in interface CacheStream<R> - Type Parameters:
K - key type of the cache; V - value type of the cache - Parameters:
action - consumer to be run for each element in the stream
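A sketch of the BiConsumer form, assuming a distributed Cache<String, String>; the consumer receives the node-local cache on whichever node executes it, and with a distributed cache the lambda must be marshallable.

import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class ForEachWithCacheExample {
   // Prints which node-local cache observed each key; the output appears on the
   // node that owns the entry, not necessarily on the originator.
   public static void inspectEntries(Cache<String, String> source) {
      try (CacheStream<Map.Entry<String, String>> stream = source.entrySet().stream()) {
         stream.forEach((Cache<String, String> cache, Map.Entry<String, String> entry) ->
               System.out.println("Node-local cache " + cache.getName()
                     + " saw key " + entry.getKey()));
      }
   }
}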
-
forEachOrdered
- Specified by:
forEachOrdered in interface Stream<R>
-
toArray
-
toArray
-
reduce
-
reduce
-
reduce
-
collect
public <R1> R1 collect(Supplier<R1> supplier, BiConsumer<R1, ? super R> accumulator, BiConsumer<R1, R1> combiner)
Description copied from interface: CacheStream
Note: The accumulator and combiner are applied on each node until all the local stream's values are reduced into a single object. Because of marshalling limitations, the final result of the collector on remote nodes is limited to a size of 2GB. If you need to process more than 2GB of data, you must force the collector to run on the originator with CacheStream.spliterator():
StreamSupport.stream(stream.filter(entry -> ...)
      .map(entry -> ...)
      .spliterator(), false)
   .collect(Collectors.toList()); -
iterator
Description copied from interface: CacheStream
Usage of this operator requires closing this stream after you are done with the iterator. The preferred usage is to use a try-with-resources block on the stream.
This method has special usage with the
BaseCacheStream.SegmentCompletionListener in that, as entries are retrieved from the next method, it will complete segments. This method obeys the
CacheStream.distributedBatchSize(int). Note that when using methods such as CacheStream.flatMap(Function) you may have more than one element mapped to a given key, so the exact number of entries returned per batch is not guaranteed. Note that the
Iterator.remove() method is only supported if no intermediate operations have been applied to the stream and this is not a stream created from a Cache.values() collection. - Specified by:
iterator in interface BaseStream<R,Stream<R>> - Specified by:
iterator in interface CacheStream<R> - Returns:
- the element iterator for this stream
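A minimal sketch of the recommended try-with-resources usage when iterating, assuming a Cache<String, String> keyed by String.

import java.util.Iterator;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

public class IteratorExample {
   // Iterates keys inside a try-with-resources block so the stream (and any
   // remote resources it holds) is always closed, as the documentation above requires.
   public static void printKeys(Cache<String, String> cache) {
      try (CacheStream<String> stream = cache.keySet().stream()) {
         Iterator<String> keys = stream.iterator();
         while (keys.hasNext()) {
            System.out.println(keys.next());
         }
      }
   }
}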
-
spliterator
Description copied from interface: CacheStream
Usage of this operator requires closing this stream after you are done with the spliterator. The preferred usage is to use a try-with-resources block on the stream.
- Specified by:
spliterator in interface BaseStream<R,Stream<R>> - Specified by:
spliterator in interface CacheStream<R> - Returns:
- the element spliterator for this stream
-
isParallel
public boolean isParallel() - Specified by:
isParallel in interface BaseStream<R,Stream<R>>
-
sequential
Description copied from interface: CacheStream - Specified by:
sequential in interface BaseStream<R,Stream<R>> - Specified by:
sequential in interface CacheStream<R> - Returns:
- a sequential cache stream
-
parallel
Description copied from interface: CacheStream - Specified by:
parallel in interface BaseStream<R,Stream<R>> - Specified by:
parallel in interface CacheStream<R> - Returns:
- a parallel cache stream
-
unordered
Description copied from interface: CacheStream - Specified by:
unordered in interface BaseStream<R,Stream<R>> - Specified by:
unordered in interface CacheStream<R> - Returns:
- an unordered cache stream
-
onClose
Description copied from interface: CacheStream - Specified by:
onClose in interface BaseStream<R,Stream<R>> - Specified by:
onClose in interface CacheStream<R> - Returns:
- a cache stream with the handler applied
-
close
public void close() - Specified by:
close in interface AutoCloseable - Specified by:
close in interface BaseStream<R,Stream<R>>
-
sorted
Description copied from interface: CacheStream
This operation is performed entirely on the local node irrespective of the backing cache. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. Beware that this means it will require having all entries of this cache in memory at one time. This is described in more detail at
CacheStream. Any subsequent intermediate operations and the terminal operation are also performed locally.
-
sorted
Description copied from interface: CacheStream
This operation is performed entirely on the local node irrespective of the backing cache. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. Beware that this means it will require having all entries of this cache in memory at one time. This is described in more detail at
CacheStream. Any subsequent intermediate operations and the terminal operation are then performed locally.
-
peek
Description copied from interface: CacheStream -
limit
Description copied from interface: CacheStream
This intermediate operation will be performed both remotely and locally to reduce how many elements are sent back from each node. More specifically, this operation is applied remotely on each node to only return up to the maxSize value, and then the aggregated results are limited once again on the local node.
This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. This is described in more detail in the
CacheStream documentation. Any subsequent intermediate operations and the terminal operation are then performed locally.
-
skip
Description copied from interface: CacheStream
This operation is performed entirely on the local node irrespective of the backing cache. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. This is described in more detail in the
CacheStream documentation. Depending on the terminal operator, this may or may not require all entries, or the subset remaining after skip is applied, to be in memory all at once.
Any subsequent intermediate operations and the terminal operation are then performed locally.
-
filter
Description copied from interface: CacheStream -
map
Description copied from interface: CacheStream
Just like in the cache, null values are not supported. -
flatMap
Description copied from interface: CacheStream -
distinct
Description copied from interface: CacheStream
This operation will be invoked both remotely and locally when used with a distributed cache backing this stream. This operation will act as an intermediate iterator operation requiring data be brought locally for proper behavior. This is described in more detail in the
CacheStream documentation. This intermediate iterator operation will be performed locally and remotely, requiring possibly a subset of all elements to be in memory.
Any subsequent intermediate operations and the terminal operation are then performed locally.
-
collect
Description copied from interface: CacheStream
Note that when using a distributed backing cache for this stream, the collector must be marshallable. This prevents the direct usage of the
Collectors class. However, you can use the CacheCollectors static factory methods to create a serializable wrapper, which then creates the actual collector lazily after being deserialized. This is useful to use any method from the Collectors class as you would normally. Alternatively, you can call CacheStream.collect(SerializableSupplier) too. Note: The collector is applied on each node until all the local stream's values are reduced into a single object. Because of marshalling limitations, the final result of the collector on remote nodes is limited to a size of 2GB. If you need to process more than 2GB of data, you must force the collector to run on the originator with CacheStream.spliterator():
StreamSupport.stream(stream.filter(entry -> ...)
      .map(entry -> ...)
      .spliterator(), false)
   .collect(Collectors.toList());
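A sketch of collecting with a serializable collector wrapper, assuming a distributed Cache<String, String>; the CacheCollectors.serializableCollector factory name is an assumption based on the CacheCollectors class referenced above, so verify it against the Infinispan version in use.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.infinispan.Cache;
import org.infinispan.CacheStream;
import org.infinispan.stream.CacheCollectors;

public class CollectExample {
   // Collects every value into a local List. The serializable wrapper lets the
   // standard Collectors.toList() collector be marshalled to remote nodes.
   public static List<String> collectValues(Cache<String, String> cache) {
      try (CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream()) {
         return stream
               .map(Map.Entry::getValue)
               .collect(CacheCollectors.serializableCollector(() -> Collectors.toList()));
      }
   }
}
-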
min
-
max
-
count
public long count() -
anyMatch
-
allMatch
-
noneMatch
-
findFirst
-
findAny
-