
Netty leak warning from AWSSDK #12108

@qqmyers

Description


What steps does it take to reproduce the issue?
At QDR, I've occasionally seen the following warnings in the log

[2026-01-21T18:07:07.676+0000] [Payara 6.2025.11] [SEVERE] [] [io.netty.util.ResourceLeakDetector] [tid: _ThreadID=3795 _ThreadName=aws-java-sdk-NettyEventLoop-1-1] [timeMillis: 1769018827676] [levelValue: 1000] [[
LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
io.netty.handler.codec.http.DefaultHttpContent.touch(DefaultHttpContent.java:86)
io.netty.handler.codec.http.DefaultLastHttpContent.touch(DefaultLastHttpContent.java:123)
io.netty.handler.codec.http.DefaultLastHttpContent.touch(DefaultLastHttpContent.java:30)
io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:115)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:417)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:227)
software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:203)
software.amazon.awssdk.http.nio.netty.internal.nrs.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:173)
...
While it's possible they are from some QDR custom code, I thought I'd post the issue and see if anyone else has seen it.

See #12109 for things we've addressed at QDR that, while probably useful on their own, don't seem to fix this issue.
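
For context, the warning means a reference-counted Netty buffer reached garbage collection while its reference count was still positive, i.e. release() was never called on it. A minimal sketch of the contract the leak detector enforces (illustrative only, not Dataverse or AWS SDK code):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class ReleaseContractSketch {
    public static void main(String[] args) {
        // A freshly allocated ByteBuf starts with refCnt() == 1.
        ByteBuf buf = Unpooled.buffer(256);
        try {
            buf.writeBytes("example payload".getBytes());
            // ... hand the buffer to whatever consumes it ...
        } finally {
            // Decrement the reference count; skipping this (or an exception
            // path that bypasses it) is what produces the LEAK warning.
            buf.release();
        }
    }
}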

  • When does this issue occur?
    FWIW: despite adding
-Dio.netty.leakDetection.level=advanced
-Dio.netty.leakDetection.targetRecords=40

I have yet to see anything that indicates what part of our code is running when the error occurs. I have seen it during archiving of a dataset with ~2K files, usually about once per run. Archiving uses presigned download URLs, which would involve the S3 store and the AWS SDK (see the sketch after this list). Whether that's the trigger, or whether it's something else being called on the server at the same time, I don't yet know.

  • Which page(s) does it occur on?

  • What happens?

  • To whom does it occur (all users, curators, superusers)?

  • What did you expect to happen?
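
For reference, since archiving goes through presigned download URLs: a minimal sketch of how such a URL is generated with AWS SDK v2. The bucket, key, and duration below are placeholders, and this is not Dataverse's actual archiving code. Presigning itself is a local signing operation, whereas the leak trace above appears to come from the SDK's Netty-based async HTTP client, which is part of why the trigger is unclear.

import java.time.Duration;

import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.services.s3.presigner.model.PresignedGetObjectRequest;

public class PresignedUrlSketch {
    public static void main(String[] args) {
        // S3Presigner signs the URL locally; try-with-resources closes it afterwards.
        try (S3Presigner presigner = S3Presigner.create()) {
            GetObjectRequest getRequest = GetObjectRequest.builder()
                    .bucket("example-bucket")      // placeholder bucket
                    .key("example/datafile.bin")   // placeholder key
                    .build();
            GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofMinutes(60))
                    .getObjectRequest(getRequest)
                    .build();
            PresignedGetObjectRequest presigned = presigner.presignGetObject(presignRequest);
            System.out.println(presigned.url());
        }
    }
}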

Which version of Dataverse are you using?
~6.9

Any related open or closed issues to this bug report?

Screenshots:

No matter the issue, screenshots are always welcome.

To add a screenshot, please use one of the following formats and/or methods described here:

Are you thinking about creating a pull request for this issue?
Help is always welcome; is this bug something you or your organization plan to fix?
