Memory leak on Xavier NX

The pipeline is similar to the examples in the deepstream_python_apps repository: source decodebin (e.g. uridecodebin) → nvvideoconvert “video/x-raw(memory:NVMM), format=RGBA” → nvstreammux → … → fakesink.
On the sink I added a probe where I use pyds.get_nvds_buf_surface (that’s the reason for the RGBA conversion).

• Hardware Platform: Jetson Xavier NX
• DeepStream Version: 6.1.1
• JetPack Version: 5.0.2

To reproduce the issue:

  1. prepare the environment (see HOWTO)
  2. prepare the video fragment; it should be long enough to see the problem (I concatenated sample_720p.h264 10 times with ffmpeg: ffmpeg -f concat -safe 0 -i list.txt -c copy sample_720p_x10.h264, where list.txt lists sample_720p.h264 10 times)
  3. run the attached app: ./dsapp_test.py file:///data/sample_720p_x10.h264
  4. run htop and watch the VIRT, RES, and SHR columns for the application; the values increase rapidly
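
For reference, here is a minimal sketch of what such a buffer probe typically looks like, modeled on the deepstream_imagedata-multistream.py sample (the names and structure are illustrative; the actual code is in the attached dsapp_test.py):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np
import pyds


def pad_buffer_probe(pad, info, u_data):
    # Illustrative probe modeled on deepstream_imagedata-multistream.py.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Requires RGBA frames in NVMM memory; returns a NumPy view of the surface.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Copy the frame so it can be processed outside the probe.
        frame_copy = np.array(n_frame, copy=True, order='C')
        # ... preprocessing on frame_copy would go here ...
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```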

dsapp_test.py (4.7 KB)

PS: There is no such problem with this app on RTX 2080/3070.

I will check.

  1. Could you provide more memory logs according to DeepStream SDK FAQ - #14 by mchi?
  2. Testing dsapp_test.py on dGPU T4, there is an error: “Error: gst-stream-error-quark: memory type configured and i/p buffer mismatch ip_surf 0 muxer 3 (1): gstnvstreammux.c(619): gst_nvstreammux_chain (): /GstPipeline:pipeline0/GstNvStreamMux:streammux”.
  1. 1.log (3.1 KB)
  2. I tested on a dGPU (NVIDIA GeForce RTX 3070 Laptop GPU, Driver Version: 520.56.06) using the Docker image nvcr.io/nvidia/deepstream:6.1.1-base and didn’t see any errors or warnings.

On Xavier, when I remove the pad_buffer_probe probe, there is no leak. Is this custom code? Can you narrow down this issue?

I actually have a much larger pipeline; this is just a simple example to reproduce the issue. There are several pad probes in the pipeline, and some of them need access to the frame (e.g. to do preprocessing for a model/nvinfer). So I switched to “video/x-raw(memory:NVMM), format=RGBA” in the app, assuming I could use pyds.get_nvds_buf_surface… but it doesn’t work properly. The attached app is very simple and valid, isn’t it? I need to understand what’s wrong with the app on Jetson and how to fix it. And I need the pad probes, so I can’t just remove pad_buffer_probe.

The code in pad_buffer_probe causes the leak; please check which line causes it. For get_nvds_buf_surface, you can refer to the Python sample: deepstream_python_apps/deepstream_imagedata-multistream.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

The code in pad_buffer_probe is similar to the code you mentioned, except for the n_frame.tobytes() operation. If you comment out this operation, only VIRT memory leaks. What does that mean? That I can’t use any heavy operations in a probe, or copy the frame? There are no such restrictions in the documentation. And VIRT memory leaks anyway.

Here is the doc about get_nvds_buf_surface: get_segmentation_masks — Deepstream Deepstream Version: 6.0.1 documentation
Please check how to use NumPy.

This function returns the frame in NumPy format. Only RGBA format is supported. For x86_64, only unified memory is supported. For Jetson, the buffer is mapped to CPU memory.

  • that’s why I use nvvideoconvert with “video/x-raw(memory:NVMM), format=RGBA” and set nvbuf-memory-type for dGPU in nvstreammux (see the sketch after these points)

Changes to the frame image will be preserved and seen in downstream elements, with the following restrictions. 1. No change to image color format or resolution 2. No transpose operation on the array.

  • I do neither the first nor the second
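
For completeness, here is a sketch of that part of the configuration, following the deepstream_imagedata-multistream.py sample (the element names, property values, and the is_aarch64 helper come from that sample, not from dsapp_test.py):

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds
from common.is_aarch_64 import is_aarch64  # helper shipped with deepstream_python_apps

Gst.init(None)

# Convert decoded frames to RGBA in NVMM memory so that
# pyds.get_nvds_buf_surface can map them into a NumPy array.
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
caps_rgba = Gst.ElementFactory.make("capsfilter", "caps_rgba")
caps_rgba.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
streammux.set_property("batch-size", 1)

# get_nvds_buf_surface requires unified memory on dGPU; on Jetson the
# NVMM buffers are mapped to CPU memory directly, so this is not needed.
if not is_aarch64():
    mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
    nvvidconv.set_property("nvbuf-memory-type", mem_type)
    streammux.set_property("nvbuf-memory-type", mem_type)
```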

The question is the same as at the very beginning and is still relevant: what’s wrong with the app on Jetson?

Why do you use “n_frame.tobytes()”? There is no such usage in deepstream_python_apps; as you know, it is a NumPy usage issue, not a DeepStream issue.

You are ignoring the fact that VIRT memory still leaks even if you remove “n_frame.tobytes()”.

But OK, replace the “wrong” line n_frame.tobytes() with a valid one, e.g. frame_copy = np.array(n_frame, copy=True, order='C'). It doesn’t change anything; memory still leaks.

Can you simplify your code to find which line causes the VIRT memory leak? Please refer to the sample deepstream_test_1.py, which has no memory leak.

No, I can’t. The code in the app is already simplified as much as possible; it looks absolutely valid, and I don’t understand why memory leaks on the NX. That’s why I turned to support: I can’t handle it myself.

Your code is similar to deepstream_test_3.py; please narrow down the issue by comparing the code and simplifying deepstream_test_3.py.

deepstream_test_3.py is different: it has no conversion to RGBA, which is required by pyds.get_nvds_buf_surface. And the issue is exactly with RGBA.

Please refer to cb_newpad of deepstream-test3, and move nvvidconv and streammux to the main function.

Probably memory is leaking because buffers aren’t unmapped after being mapped in get_nvds_buf_surface(). Here NvBufSurfaceMap is called, but I cannot find any NvBufSurfaceUnMap invocations.

You are right: per the official doc, NvBufSurfaceUnMap must be called after NvBufSurfaceMap. Here is the link: NVIDIA DeepStream SDK API Reference: Buffer Surface Management API
deepstream_python_apps is open source, so you can fix it yourself first.

@fanzh I’ve submitted a PR: Add binding to unmap NvBuSurface if it has been mapped by tomskikh · Pull Request #6 · NVIDIA-AI-IOT/deepstream_python_apps · GitHub

Memory doesn’t leak if I unmap buffers after using get_nvds_buf_surface.
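
For anyone hitting the same problem, the fix looks roughly like this inside the per-frame loop of the probe, assuming the unmap binding is exposed as pyds.unmap_nvds_buf_surface (the name used in later deepstream_python_apps releases) and the gst_buffer/frame_meta variables from the probe sketch above:

```python
# Map the RGBA surface into a NumPy array (NvBufSurfaceMap under the hood).
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
frame_copy = np.array(n_frame, copy=True, order='C')  # work on a copy
# Release the CPU mapping created by get_nvds_buf_surface;
# without this, mapped memory accumulates on Jetson.
pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
```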
