Netty 4.1.75.Final released
We are happy to announce the release of netty 4.1.75.Final. This is a bug-fix release, but it also contains two changes that may affect the memory usage and performance characteristics of your application. Please make sure to read the important notes section below.
The most important changes are:
- Avoid reading uninitialised memory when making domain socket address (#12085)
- HTTP/2 header validation must reject duplicate pseudo-headers (#12094)
- Add HTTP-specific TooLongFrameExceptions (#12084)
- Allow attaching additional metadata to ResourceLeakTrackers on creation (#12091)
- Add PlatformDependent.estimateMaxDirectMemory (#12118)
- Reduce the default PooledByteBufAllocator chunk size from 16 MiB to 4 MiB (#12108)
- Fix high CPU usage when using spliceTo() (#12138)
- Change default of io.netty.allocator.useCacheForAllThreads to false (#12109)
- Allow more permissive char set in SNI headers (#12147)
- Fix race when handling delegating tasks in ReferenceCountedOpenSslEngine (#12149)
For the details and all changes, please browse our issue tracker for 4.1.75.Final.
Important notes
Reduce the default PooledByteBufAllocator chunk size from 16 MiB to 4 MiB
To reduce the memory overhead of the PooledByteBufAllocator, we decided to change the default chunk size from 16 MiB to 4 MiB. For almost all users this should have no negative performance impact. If you notice a drop in performance for your use case, please open an issue and consider changing the configuration as explained in #12108; a sketch of how to restore the previous chunk size is shown below.
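The following is a minimal sketch, not an official recommendation, of two ways to go back to 16 MiB chunks. It assumes the default 8 KiB page size, where the chunk size is pageSize << maxOrder (so maxOrder 11 gives 16 MiB and maxOrder 9 gives 4 MiB); the class and allocator names come from Netty, everything else is illustrative.

```java
import io.netty.buffer.PooledByteBufAllocator;

public class ChunkSizeExample {
    public static void main(String[] args) {
        // Option 1: restore 16 MiB chunks for the default allocator via a system
        // property set before Netty's allocator classes are loaded, e.g. on the
        // command line:
        //   -Dio.netty.allocator.maxOrder=11

        // Option 2: configure a dedicated allocator instance explicitly.
        // Constructor arguments: preferDirect, nHeapArena, nDirectArena, pageSize, maxOrder.
        PooledByteBufAllocator allocator = new PooledByteBufAllocator(
                true,
                PooledByteBufAllocator.defaultNumHeapArena(),
                PooledByteBufAllocator.defaultNumDirectArena(),
                8192,  // page size (8 KiB)
                11);   // maxOrder: 8192 << 11 = 16 MiB chunks

        // Verify the resulting chunk size through the allocator metrics.
        System.out.println("Chunk size: " + allocator.metric().chunkSize());
    }
}
```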
Change default of io.netty.allocator.useCacheForAllThreads to false
Netty uses various types of caches to improve performance when allocating buffers; one of them is thread-local based. While the thread-local cache can reduce contention, it can also use a lot of memory when applications allocate buffers from many different threads. To reduce the risk of high memory usage without much gain, we decided to change the default configuration for when these caches are used. From this release on, this type of cache is only used if it is explicitly configured to be used for all threads, the allocating Thread is of type FastThreadLocalThread, or the Thread is bound to an EventExecutor. This should be good enough in general while reducing the memory overhead a lot. If you observe any issues, consider opening an issue and changing the configuration as stated in #12109; a sketch of how to adapt is shown below.
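Below is a minimal sketch of two ways to deal with the new default, not a definitive recommendation. The system property is the one named in this note; the FastThreadLocalThread usage assumes your application creates its own allocation-heavy worker threads, and the thread name is illustrative.

```java
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.concurrent.FastThreadLocalThread;

public class AllocatorCacheExample {
    public static void main(String[] args) throws Exception {
        // Option 1: opt back into the pre-4.1.75 behaviour for the default allocator.
        // The property must be set before the allocator is initialised, e.g. on the
        // command line:
        //   -Dio.netty.allocator.useCacheForAllThreads=true

        // Option 2: keep the new default, but run allocation-heavy work on a
        // FastThreadLocalThread, which still uses the thread-local cache.
        Thread worker = new FastThreadLocalThread(() -> {
            // Allocations on this thread can use the thread-local cache.
            PooledByteBufAllocator.DEFAULT.buffer(256).release();
        }, "alloc-worker");
        worker.start();
        worker.join();
    }
}
```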
Thank You
Every idea and bug-report counts and so we thought it is worth mentioning those who helped in this area.
Please report an unintended omission.