Unexpected memory usage in deepstream-test3

I have been testing the deepstream-test3 sample that ships with DeepStream 4.0.1, and I observed CPU memory usage rise roughly 10x over about 10 hours while processing 10 RTSP streams at once.
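For anyone who wants to reproduce the measurement, here is a minimal sketch that logs the process's resident set size over time. The binary name `deepstream-test3-app` and the 60-second interval are assumptions; adjust them for your build.

```shell
#!/bin/sh
# Log the resident set size (VmRSS) of the running sample to a CSV so the
# growth trend can be plotted later. Exits once the process goes away.
PID=$(pgrep -f deepstream-test3-app | head -n 1)
INTERVAL=60   # seconds between samples
while [ -n "$PID" ] && kill -0 "$PID" 2>/dev/null; do
  RSS=$(awk '/VmRSS/ {print $2}' "/proc/$PID/status")
  echo "$(date +%s),$RSS" >> rss.csv   # epoch seconds, kB
  sleep "$INTERVAL"
done
```

Plotting `rss.csv` afterwards makes a steady leak easy to distinguish from normal start-up allocation.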

I am running DeepStream 4.0.1, TensorRT 5.1, CUDA 10.1, and cuDNN 7.5 on an RTX 2080 Ti.

Did you notice any similar behavior?
The DeepStream release notes mention that a “small memory leak” has been observed. Could it be related to this?

Thanks in advance,


“Small memory leak observed; fixes in progress.”
Yes, the internal dev team is working on this. It is in progress; please look forward to a future release, thanks.

Thanks for replying.
Do you have an idea of which DeepStream element is causing the leak?


I have experienced the same issue. I tried removing the nvinfer plugin and connecting the streammux directly to the sink, and the leak is still there. I found the source code of nvinfer, but I could not find the nvstreammux source. Is there any chance of getting it so I can do my own debugging?



nvstreammux is not open source.

Is this resolved in DeepStream SDK 4.0.2? I have similar problems with an app I’m building based on deepstream-test3, and memory usage keeps increasing…

You can see the 4.0.2 release notes: https://docs.nvidia.com/metropolis/deepstream/DeepStream_4.0.2_Release_Notes.pdf
“Small memory leak observed; fixes in progress.”

OK, so it looks like DeepStream 4.0.2 does NOT fix the memory leak issue. The release notes only say: “Small memory leak observed; fixes in progress.”

So is this a widespread DeepStream issue, or something specific to test app 3?

How will fixes be delivered? Will we have to wait a few months for DeepStream SDK 4.0.3, or will files be provided as a hotfix?

The internal team is working on it; please wait, thanks.

Any updates on this low-level memory leak?


See this comment.

Yes, I’ve read that; it’s in the same thread we are talking in. It just says it’s a more widespread issue in DeepStream (and therefore absolutely critical to users) and not specific to test app 3. This is why I’m eagerly awaiting an update on progress… Even a rough ETA of when a fix will be in place would help us. Currently we are totally in the dark.

We suspect the issue is coming from open source components, and we plan to address it in DS 5.0.

Thank you @amycao.

So, as a rough idea, when will DS 5.0 be released? Are you targeting a specific month this year?

What can we do in the meantime? Are there any workarounds, or specific elements we could avoid? My program needs to run all day, every day, so do we have to routinely kill and restart it to avoid running out of memory? What do you suggest as the best-practice way to handle this? Maybe others on the forum could share how they tackle these issues?

Why is NVIDIA not willing to offer us developers details of when new releases are coming out? I’ve asked this sort of question before, and the line goes silent every time.

If we are committing to using Jetson and DeepStream, we need a roadmap.

When there are serious bugs like memory leaks and you say they will be fixed in the next version, we need to know WHEN that is, or we lose faith in the product and need to look at alternatives.

Shall I create a new thread for this and add a poll?

Sorry for the late reply.
The public release of 5.0 should be around mid-June.
Yes. For now, the app needs to be restarted by an automated script if required, or can be restarted during routine maintenance work.
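To make the restart workaround concrete, here is a hedged sketch of such a script; the command line, memory limit, and poll interval are all placeholders for your deployment. It runs the app, polls its VmRSS from /proc, and kills it once it crosses the limit so a supervisor (systemd, cron, or an outer loop) can bring it back up.

```shell
#!/bin/sh
# Memory watchdog sketch. CMD, LIMIT_KB, and POLL are assumptions --
# substitute your real deepstream-test3 command line and limits.
CMD=${CMD:-"sleep 2"}          # placeholder for the pipeline app
LIMIT_KB=${LIMIT_KB:-2000000}  # kill above ~2 GB resident
POLL=${POLL:-1}                # seconds between checks

$CMD &
PID=$!
while kill -0 "$PID" 2>/dev/null; do
  RSS=$(awk '/VmRSS/ {print $2}' "/proc/$PID/status" 2>/dev/null)
  if [ -n "$RSS" ] && [ "$RSS" -gt "$LIMIT_KB" ]; then
    echo "watchdog: RSS ${RSS} kB over limit, killing PID $PID"
    kill "$PID"
    break
  fi
  sleep "$POLL"
done
wait "$PID" 2>/dev/null
echo "watchdog: process exited"
```

Running it under systemd with `Restart=always` (or re-launching it from cron) keeps the pipeline up around the clock despite the leak.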

Thank you for the June info - that helps us plan. Is there a feature list you are working toward for this release?

I have a related question: when I try to debug my deepstream-test3-based app with Valgrind, it crashes and says I’ve “achieved the impossible”. Do you have any tips for using Valgrind to detect memory issues with DeepStream/GStreamer?

Here is a link to the Valgrind FAQ; hope you can get some clues from it: Valgrind
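One thing that often helps with GLib/GStreamer apps under Valgrind is disabling the GSlice allocator, which otherwise pools memory and muddies leak reports. A hedged invocation sketch (the binary path is a placeholder; point it at your deepstream-test3 build):

```shell
#!/bin/sh
# G_SLICE / G_DEBUG are GLib settings that make allocations visible to
# Valgrind instead of being pooled. APP is a placeholder binary.
export G_SLICE=always-malloc
export G_DEBUG=gc-friendly
APP=/bin/true   # placeholder -- replace with your deepstream-test3 app
if command -v valgrind >/dev/null 2>&1; then
  valgrind --leak-check=full --num-callers=20 \
           --log-file=valgrind.log "$APP"
else
  echo "valgrind not installed"
fi
```

Writing to a log file keeps the (very verbose) output off the terminal; GStreamer also ships Valgrind suppression files in its source tree that cut down on known false positives.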

I have been through the Valgrind docs but no luck. I think the issue is that I’m developing directly on the Jetson Nano, and it’s just too resource-constrained to run my app with 4 sources and 4 sinks inside Valgrind.
What is a good development workflow? Build on a desktop with a GPU first, so you can use the full power of tools like Valgrind, and then port to the Nano later?