The next Alpha release of the upcoming major version is ready for you to test.
We are quite confident that the API is stable at this point, so you should not expect many more API breakages. So if you plan to start a new project, it may be the right time to focus on Netty 4.0.0 for it.
For all fixed bugs and changes, please check out our issue tracker.
You can find the release on the download page, or just grab it via maven as usual.
Please let us know if you have any problems or questions. And by the way, we love pull requests and contributions!
We really hope you will enjoy this new release as much as we do.
This release ships the workaround for the epoll(..) bug that we also released as part of 3.5.7.Final!
The mentioned epoll(..) bug can lead to excessive CPU spinning, which will most likely produce 100% CPU usage on the core that runs the NioEventLoop's thread. For more information, please see #327 and #565.
The workaround is disabled by default at the moment, because we want to be sure it is mature enough before enabling it by default. Anyway, if you got hit by the bug, enable the workaround via the org.jboss.netty.epollBugWorkaround system property.
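If you prefer not to pass -Dorg.jboss.netty.epollBugWorkaround=true on the JVM command line, a sketch of setting it programmatically (this must happen before any Netty classes are loaded):

```java
public class EnableEpollWorkaround {
    public static void main(String[] args) {
        // Equivalent to -Dorg.jboss.netty.epollBugWorkaround=true on the command line.
        // Must run before the first Netty class is loaded, or it has no effect.
        System.setProperty("org.jboss.netty.epollBugWorkaround", "true");
        System.out.println("workaround enabled: "
                + System.getProperty("org.jboss.netty.epollBugWorkaround"));
    }
}
```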
Also, we had some important bug fixes and improvements in the AIO transport (mostly driven by Kaazing) and WebSocket bug fixes (mostly driven by Twitter). Thank you!
If you plan to upgrade from previous 4.0.0 Alpha releases, be aware of one little API breakage which will probably affect you. It is about Bootstrap and ServerBootstrap. The breakage was needed to allow the reuse of Bootstrap and ServerBootstrap instances.
The old way of configuring it:
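A minimal sketch of the pre-Alpha4 style (surrounding bootstrap configuration omitted): channel(..) received an already-constructed Channel instance.

```java
// Pre-Alpha4 style: an actual Channel instance was passed to channel(..).
Bootstrap bootstrap = new Bootstrap();
bootstrap.channel(new NioSocketChannel());
```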
The new one in 4.0.0.Alpha4:
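The same configuration sketched in the new Alpha4 style: the Channel class is passed instead, and the Bootstrap instantiates it itself.

```java
// 4.0.0.Alpha4 style: the Class is passed; the Bootstrap creates
// instances on its own when bind(..) or connect(..) is called.
Bootstrap bootstrap = new Bootstrap();
bootstrap.channel(NioSocketChannel.class);
```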
The important change here is that the Bootstrap.channel(..) method now takes a Class argument, where before it took an actual instance. This allows the Bootstrap to create instances on its own when Bootstrap.bind(..) is called. If your Channel implementation does not have a default constructor, or you need to pass more information to the Channel when it is created, just use the variant of Bootstrap.channel(..) that accepts a Bootstrap.ChannelFactory instance.
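A sketch of the factory variant for a Channel without a default constructor. MyChannel and someExtraArgument are placeholders, and the exact shape of the factory interface in Alpha4 is an assumption based on the description above:

```java
// Sketch: pass a ChannelFactory so the Bootstrap can create Channels
// that need constructor arguments (interface shape assumed).
bootstrap.channel(new Bootstrap.ChannelFactory() {
    @Override
    public Channel newChannel() {
        // MyChannel and someExtraArgument are hypothetical placeholders.
        return new MyChannel(someExtraArgument);
    }
});
```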
If you upgrade from a 3.x release be sure to read the complete summary of changes.
Netty 4.0.0.Alpha4 comes with SCTP support included. This will only work on operating systems that ship with SCTP support (for example, Linux). Also be sure that you install the user-space libraries that you need to access it.
For example on Ubuntu this would be done via:
# apt-get install libsctp1
On Fedora you will need some extra kernel modules:
# yum install kernel-modules-extra.x86_64 lksctp-tools.x86_64
After that you should be able to give it a spin. You can find the example code here for client and server implementations. Thanks again to Jestan Nirojan for porting it over to the new API.
Another change worth mentioning was contributed by Daniel Bevenius. It makes it even easier to write a WebSocket server with Netty, as it does all the groundwork for you and lets you focus on your core implementation. Before, you had to write code for doing the handshake and also handle all the other basics (like handling CloseWebSocketFrame and PingWebSocketFrame). All of this boilerplate code is gone now!
Check out a basic example of using the new handler:
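A sketch of a pipeline wiring in the new protocol handler. The handler name WebSocketServerProtocolHandler matches later 4.0 APIs but may differ slightly in Alpha4, and MyTextFrameHandler is a placeholder for your own frame handler:

```java
// Sketch: the protocol handler performs the handshake and handles
// Close/Ping frames, so your handler only sees application frames.
public class WebSocketServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    public void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
            new HttpRequestDecoder(),
            new HttpResponseEncoder(),
            new WebSocketServerProtocolHandler("/websocket"),
            new MyTextFrameHandler()); // placeholder for your logic
    }
}
```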
Sometimes it can be helpful to suspend reads on a Channel; this saves you from buffering all of the data in your application code until you are ready to process it again. This feature was also present in 3.x but was a bit error-prone, so we thought we should give it some more love to make it a bit more useful.
In 3.x you would just set the Channel readable/non-readable:
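A minimal sketch of the 3.x approach, using Channel.setReadable(..):

```java
// Netty 3.x: suspend reads on the whole Channel...
channel.setReadable(false);
// ...and resume them later, once you can process data again.
channel.setReadable(true);
```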
The problem with this was that it was easy to "break" things: one ChannelHandler would make the Channel non-readable, and another would make it readable again because of some internal logic, unaware that the first ChannelHandler had suspended reads before.
To overcome this problem we changed the implementation to be slightly different. Now you suspend or resume reads on the actual ChannelHandlerContext which is bound to the ChannelHandler. As long as at least one ChannelHandlerContext has reads suspended, the Channel will not read. This way you don't need to worry about the other ChannelHandlers in the pipeline and the changes they make.
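A sketch of the per-context approach. The exact method name on ChannelHandlerContext in Alpha4 is an assumption here, shown as readable(..):

```java
// Suspend reads for this handler's context only (method name assumed).
ctx.readable(false);
// Each context's suspend state is tracked independently; the Channel
// reads again only once every context has re-enabled reads.
ctx.readable(true);
```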
But that's not all! It is now possible to suspend accepting more connections/channels when you use Netty on the server side. For this, just use something like this:
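Since the accept loop is driven by reads on the server channel, suspending reads there stops new connections from being accepted. A sketch, with the same caveat that the method name is an assumption:

```java
// In a handler installed in the ServerChannel's pipeline
// (method name on ChannelHandlerContext assumed):
serverCtx.readable(false); // stop accepting new child channels
serverCtx.readable(true);  // accept connections again
```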