public class AdaptiveRecvByteBufAllocator extends DefaultMaxMessagesRecvByteBufAllocator
A RecvByteBufAllocator that automatically increases and decreases the predicted buffer size based on feedback.

It gradually increases the expected number of readable bytes if the previous read fully filled the allocated buffer. It gradually decreases the expected number of readable bytes if the read operation failed to fill a certain amount of the allocated buffer two times consecutively. Otherwise, it keeps returning the same prediction.
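To make the feedback loop concrete, here is a minimal, self-contained sketch of the idea described above: grow the prediction when a read fills the buffer, shrink it only after two consecutive undersized reads, otherwise keep it unchanged. This is an illustration only, not Netty's actual implementation (Netty uses a precomputed size table and different step sizes); the class name `AdaptiveSizeSketch` and the doubling/halving steps are assumptions for the example.

```java
// Illustrative sketch (NOT Netty's actual implementation): an adaptive
// size predictor that grows quickly when reads fill the buffer and
// shrinks only after two consecutive undersized reads.
class AdaptiveSizeSketch {
    private final int minimum;
    private final int maximum;
    private int nextSize;
    private boolean shrinkCandidate; // set after one undersized read

    AdaptiveSizeSketch(int minimum, int initial, int maximum) {
        if (minimum <= 0 || initial < minimum || maximum < initial) {
            throw new IllegalArgumentException("require 0 < minimum <= initial <= maximum");
        }
        this.minimum = minimum;
        this.maximum = maximum;
        this.nextSize = initial;
    }

    /** Size to allocate for the next read. */
    int guess() {
        return nextSize;
    }

    /** Feed back how many bytes the last read actually produced. */
    void record(int actualBytes) {
        if (actualBytes >= nextSize) {
            // Buffer was filled: grow (double here), capped at maximum.
            nextSize = Math.min(nextSize * 2, maximum);
            shrinkCandidate = false;
        } else if (actualBytes < nextSize / 2) {
            // Read filled less than half the buffer: shrink only after
            // this happens two times consecutively.
            if (shrinkCandidate) {
                nextSize = Math.max(nextSize / 2, minimum);
                shrinkCandidate = false;
            } else {
                shrinkCandidate = true;
            }
        } else {
            // Otherwise keep returning the same prediction.
            shrinkCandidate = false;
        }
    }
}
```

The asymmetry is the design point: growing is immediate because an over-full buffer means data was left unread, while shrinking waits for repeated evidence so a single small read does not throw away a good prediction.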
Field Summary
|Modifier and Type||Field and Description|
|static AdaptiveRecvByteBufAllocator||DEFAULT. Deprecated. There is state for maxMessagesPerRead(), which is typically based upon channel type.|

Constructor Summary
|Constructor and Description|
|AdaptiveRecvByteBufAllocator(). Creates a new predictor with the default parameters.|
|AdaptiveRecvByteBufAllocator(int minimum, int initial, int maximum). Creates a new predictor with the specified parameters.|

Method Summary
|Modifier and Type||Method and Description|
|RecvByteBufAllocator.Handle||newHandle(). Creates a new handle.|
|AdaptiveRecvByteBufAllocator||respectMaybeMoreData(boolean respectMaybeMoreData). Determine if future instances of the handles returned by newHandle() will stop reading if we think there is no more data.|

Methods inherited from class DefaultMaxMessagesRecvByteBufAllocator: maxMessagesPerRead, maxMessagesPerRead, respectMaybeMoreData
@Deprecated public static final AdaptiveRecvByteBufAllocator DEFAULT
Deprecated. There is state for DefaultMaxMessagesRecvByteBufAllocator.maxMessagesPerRead(), which is typically based upon channel type.
public AdaptiveRecvByteBufAllocator()
Creates a new predictor with the default parameters. With the default parameters, the expected buffer size starts from 1024, does not go down below 64, and does not go up above 65536.
public AdaptiveRecvByteBufAllocator(int minimum, int initial, int maximum)
Parameters:
minimum - the inclusive lower bound of the expected buffer size
initial - the initial buffer size when no feedback was received
maximum - the inclusive upper bound of the expected buffer size
public RecvByteBufAllocator.Handle newHandle()
public AdaptiveRecvByteBufAllocator respectMaybeMoreData(boolean respectMaybeMoreData)
Determine if future instances of the handles returned by newHandle() will stop reading if we think there is no more data.
Parameters:
respectMaybeMoreData -
true to stop reading if we think there is no more data. This may save a system call to read from the socket, but if data has arrived in a racy fashion we may give up our DefaultMaxMessagesRecvByteBufAllocator.maxMessagesPerRead() quantum and have to wait for the selector to notify us of more data.
false to keep reading (up to DefaultMaxMessagesRecvByteBufAllocator.maxMessagesPerRead()) or until there is no data when we attempt to read.
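The trade-off above can be sketched as a standalone decision function. This is an illustration of the flag's effect on a read loop, not Netty's actual control flow; the class and method names are assumptions for the example.

```java
// Illustrative sketch (NOT Netty's actual control flow): how the
// respectMaybeMoreData flag changes when a read loop gives up.
class ReadLoopSketch {
    /**
     * Decide whether to issue another read.
     *
     * @param respectMaybeMoreData honor the "probably no more data" hint
     * @param lastReadFilledBuffer the previous read filled the whole buffer,
     *                             so more data may still be pending
     * @param messagesRead         reads completed so far in this loop
     * @param maxMessagesPerRead   quantum of reads allowed per selector event
     */
    static boolean continueReading(boolean respectMaybeMoreData,
                                   boolean lastReadFilledBuffer,
                                   int messagesRead,
                                   int maxMessagesPerRead) {
        if (messagesRead >= maxMessagesPerRead) {
            return false; // quantum exhausted either way
        }
        if (respectMaybeMoreData) {
            // Stop early when the last read left the buffer partially empty:
            // probably no more data, saving a system call. Data that arrives
            // in a racy fashion then waits for the next selector notification.
            return lastReadFilledBuffer;
        }
        // Keep reading until the quantum is used up or a read returns nothing.
        return true;
    }
}
```

With the flag set, a partially filled buffer ends the loop immediately; with it cleared, only the quantum or an empty read does.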
Copyright © 2008–2022 The Netty Project. All rights reserved.