The pipeline is similar to the examples in the deepstream_python_apps repository: source decodebin (e.g. uridecodebin) → nvvideoconvert “video/x-raw(memory:NVMM), format=RGBA” → nvstreammux → … → fakesink.
On the sink I added a pad probe that calls pyds.get_nvds_buf_surface() (that is the reason for the RGBA conversion); a sketch of the probe follows.
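A minimal sketch of such a probe, assuming the usual deepstream_python_apps metadata iteration (the attached app is not reproduced here; pad_buffer_probe and the tobytes() copy are the names used later in this thread):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the frame as a NumPy RGBA array (hence the RGBA caps upstream).
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        data = n_frame.tobytes()  # copy the frame out of the mapped buffer
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

It is attached with fakesink.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, pad_buffer_probe, 0).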
Prepare a video fragment long enough to see the problem. I concatenated sample_720p.h264 ten times with ffmpeg: ffmpeg -f concat -safe 0 -i list.txt -c copy sample_720p_x10.h264, where list.txt lists sample_720p.h264 ten times (shown below).
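For the concat demuxer, list.txt is just ten identical entries:

```
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
file 'sample_720p.h264'
```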
Run the attached app: ./dsapp_test.py file:///data/sample_720p_x10.h264
Run htop and watch the VIRT, RES, and SHR columns for the application; the values increase rapidly.
Testing dsapp_test.py on a dGPU (T4), there is an error: “Error: gst-stream-error-quark: memory type configured and i/p buffer mismatch ip_surf 0 muxer 3 (1): gstnvstreammux.c(619): gst_nvstreammux_chain (): /GstPipeline:pipeline0/GstNvStreamMux:streammux”.
I tested on a dGPU (NVIDIA GeForce RTX 3070 Laptop GPU, driver version 520.56.06) using Docker from nvcr.io/nvidia/deepstream:6.1.1-base and did not see any errors or warnings.
I actually have a much larger pipeline; this is just a minimal example that reproduces the issue. The real pipeline has several pad probes, some of which need access to the frame (e.g. to do preprocessing for a model/nvinfer). That is why I converted to “video/x-raw(memory:NVMM), format=RGBA” in the app and assumed I could use pyds.get_nvds_buf_surface(), but it does not work properly. The attached app is very simple and, as far as I can tell, valid. I need to understand what is wrong with the app on Jetson and how to fix it. And since I need the pad probes, I cannot simply remove pad_buffer_probe.
The code in pad_buffer_probe is similar to the code you mentioned, except for the n_frame.tobytes() operation. If you comment that operation out, only VIRT memory leaks. What does that mean? That I cannot run any heavy operations in a probe, or copy the frame? There are no such restrictions in the documentation. And VIRT memory leaks either way.
This function returns the frame in NumPy format. Only RGBA format is supported. For x86_64, only unified memory is supported. For Jetson, the buffer is mapped to CPU memory.
That is why I use nvvideoconvert with “video/x-raw(memory:NVMM), format=RGBA” and set nvbuf-memory-type on nvstreammux for dGPU, along the lines of the sketch below.
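A sketch of that setup, following the pattern used in deepstream_python_apps (element names here are placeholders):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

Gst.init(None)

# Convert decoded frames to RGBA in NVMM memory so the probe can map them.
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
caps_rgba = Gst.ElementFactory.make("capsfilter", "caps_rgba")
caps_rgba.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
# On dGPU, get_nvds_buf_surface() requires unified memory, and the memory
# type must match across elements; a mismatch produces the
# "memory type configured and i/p buffer mismatch" error seen above.
mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
streammux.set_property("nvbuf-memory-type", mem_type)
nvvidconv.set_property("nvbuf-memory-type", mem_type)
```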
Changes to the frame image will be preserved and seen in downstream elements, with the following restrictions:
1. No change to image color format or resolution.
2. No transpose operation on the array.
I do neither the first nor the second. For illustration, an in-place edit like the one below satisfies both restrictions.
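A hypothetical example (not the attached app's code): writing pixels through the mapped array keeps the format, resolution, and layout intact.

```python
import numpy as np

def annotate_in_place(n_frame: np.ndarray) -> None:
    # Same RGBA format, same resolution, no transpose: modify pixels in place.
    n_frame[0:50, 0:50] = (255, 0, 0, 255)  # opaque red square, top-left corner
```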
The question from the very beginning is still relevant: what is wrong with the app on Jetson?
Why do you use “n_frame.tobytes()”? There is no such usage in deepstream_python_apps. As we see it, this is a NumPy usage issue, not a DeepStream issue.
No, I can't. The code in the app is simplified as much as possible and looks perfectly valid, and I do not understand why memory leaks on the NX. That is why I turned to support: I cannot resolve it myself.
Probably memory is leaking because buffers are not unmapped after being mapped in get_nvds_buf_surface(). Here NvBufSurfaceMap is called, but I cannot find any NvBufSurfaceUnMap invocations.
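If that is the cause, the fix would be an explicit unmap once the probe is done with the frame. Newer pyds releases expose pyds.unmap_nvds_buf_surface() for this; a sketch assuming that binding is available and mirrors get_nvds_buf_surface():

```python
n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
# ... read or modify n_frame ...
# Release the CPU mapping; without this, mapped buffers accumulate on Jetson.
pyds.unmap_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
```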