After some experimentation, I think I have found a way to reproduce the RTSP video decoding memory leak. Other users have reported the same problem, so it seems to be fairly common. Could you please help me investigate it? :)
The pipeline is as follows: rtsp_decode_bin → leaky queue → fakesink.
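In case it helps, the topology can be approximated with a gst-launch-1.0 one-liner (a minimal sketch only: `uridecodebin` stands in for my custom rtsp_decode_bin, which is built programmatically in main.py, and the RTSP URL is a placeholder):

```shell
# Sketch of the pipeline: RTSP decode bin -> leaky queue -> fakesink.
# "rtsp://<your-camera-url>" is a placeholder; uridecodebin approximates
# the custom rtsp_decode_bin from main.py.
gst-launch-1.0 uridecodebin uri="rtsp://<your-camera-url>" \
  ! queue leaky=downstream max-size-buffers=5 \
  ! fakesink sync=false
```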
For your convenience, the pipeline graph is available here: pipeline_rtsp_decode.pdf - Google Drive
Note: as you can see from the pipeline graph, the fakesink for some reason does not appear to be in the PLAYING state. I'm happy to hear any suggestions on how to fix this. In any case, I would have expected the leaky queue to drop the excess data.
The following script will allow you to reproduce the issue. It leads to a memory leak in the DeepStream 6.0 devel container on a Tesla T4.
How to run the script:
- You need to add your RTSP video URL at line 149.
- On line 150 you can select how many decoding bins to create from the stream above. Set the number of streams to 64: with 64 streams the memory leak becomes evident within a few seconds using utilities such as htop. With just 1 stream you won't notice it.
- If you uncomment lines 38-39 the memory leak goes away, but this leads to corrupted video frames when connecting this video decoding bin to an actual DeepStream pipeline (with nvinfer, etc.).
main.py (7.5 KB)
Possibly related issues:
- Memory leak in DeepStream - #15 by Fiona.Chen
- Gstreamer leaky queue stops the pipeline - #3 by Fiona.Chen
- Significant Memory leak when streaming from a clients RTSP source - #4 by karan.shetty
Let me know if you have any questions!