PosixMemMap:71 [12] mmap failed while extracting frames from deepstream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.1
• TensorRT Version: 8.0 (CUDA 10.2)
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing):
buffer_error.py (2.0 KB)

Starting the GStreamer pipeline using the following string:
videotestsrc num-buffers=-1 ! video/x-raw ! nvvideoconvert ! nvstreammux0.sink_0 nvstreammux width=640 height=480 batch-size=1 ! buffer-error ! fakesink sync=false

Where buffer-error is the custom Python element (see attachment) that extracts the image into a numpy array, similar to the Image data access example application, except that the extraction happens on every frame and is done inside a custom element.
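For reference, the core of buffer-error looks roughly like this, written as a probe-style sketch modeled on the imagedata samples. The names here are illustrative; the attached buffer_error.py is the authoritative version:

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

import numpy as np
import pyds


def buffer_probe(pad, info, u_data):
    buffer = info.get_buffer()
    if not buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # Map the NVMM surface into CPU address space. This is the call
        # that eventually fails with "PosixMemMap:71 [12] mmap failed".
        frame_pointer = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)

        # Copy into a numpy array; the error occurs even with this line removed.
        frame = np.array(frame_pointer, copy=True, order='C')

        try:
            l_frame = l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK
```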

After ~64,500 images the following error appears; it does not, however, hard-crash the pipeline:

PosixMemMap:71 [12] mmap failed
nvbufsurface: NvBufSurfaceMap function failed
nvbufsurface: mapping of buffer (0) failed
nvbufsurface: error in mapping
get_nvds_buf_Surface: Failed to map buffer to CPU

Looking at memory usage with free -m, only ~2 GB is used and ~6 GB is free.
I could not reproduce the error on a normal GPU.

The error seems to happen at the line that extracts the frame_pointer:

frame_pointer = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)

Commenting out the copy into the np.array still results in the same error, so the mapping call itself appears to be the problem.
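For context, the ~65k figure is suspiciously close to the default Linux vm.max_map_count of 65530, which would be consistent with one CPU mapping being created per frame and never released. Later bindings (pyds 1.1.1 / DeepStream 6.0.1, so possibly not available on this exact setup) expose pyds.unmap_nvds_buf_surface to release the mapping; a minimal sketch of the paired pattern:

```python
# Sketch of the map/copy/unmap pattern. Assumes pyds.unmap_nvds_buf_surface
# is available (pyds >= 1.1.1 / DeepStream 6.0.1); not part of every 6.0 build.
frame_pointer = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)
frame = np.array(frame_pointer, copy=True, order='C')
# Release the CPU mapping; on Jetson, skipping this leaks one mmap per frame.
pyds.unmap_nvds_buf_surface(hash(buffer), frame_meta.batch_id)
```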

I found an earlier topic with a similar error: Deepstream Memory Leak
Unlike that topic, I do not get a segmentation fault; the error just repeats.

Kind regards,
Dik


Please use the “gst-inspect-1.0 nvstreammux” command to check the output format on the src pad. “video/x-raw(memory:NVMM)” is NVIDIA's customized HW memory; you can never get the HW buffer into a Python numpy array directly. Please refer to deepstream_python_apps/apps/deepstream-imagedata-multistream at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub for how to get a numpy array from the SW buffer correctly.

The string I provided was the minimal pipeline with which I could reproduce the error. In our actual pipeline it happens after an nvstreamdemux element. Inspecting the nvstreamdemux src pad with gst-inspect, it reports exactly the same output format as far as I can tell.

If I understand correctly, the sample app extracts the frames to Python after the nvmultistreamtiler, which also outputs “video/x-raw(memory:NVMM)”.

I am also confused as to why it does work for the first ~65k frames on the Jetson, while on a “normal” GPU it does not crash at all (for at least 100k frames). Does this mean it is possible to do it this way on a normal GPU?

Edit:

I further investigated the difference between the Jetson and a normal GPU and found something interesting.
On the Jetson, the virtual memory keeps increasing up to ~90 GB, after which it stops and prints the errors.
On the GPU, the virtual memory does not increase and stays at ~8 GB, which seems a lot more reasonable.
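The virtual-memory figure can be sampled from inside the process, for example with a small Linux-only helper like this (illustrative; reads /proc):

```python
import os

def vm_size_gb() -> float:
    # Read VmSize (total virtual memory, in kB) from /proc for this process.
    with open(f"/proc/{os.getpid()}/status") as f:
        for line in f:
            if line.startswith("VmSize:"):
                return int(line.split()[1]) / (1024 ** 2)  # kB -> GB
    return 0.0
```

Logging this every few thousand frames inside the probe makes the Jetson-only creep obvious.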

pyds.get_nvds_buf_surface only supports the RGBA format. Please make sure your pipeline and the custom plugin work under this limitation.
get_segmentation_masks — DeepStream 6.0.1 documentation

As I mentioned, the pipeline works on a normal GPU, so our plugin works under that limitation. If you look at the Python code I provided, which should reproduce the error, it does not even need to use the output for the virtual-memory creep to start.

I tried to replicate the error in the sample app. That was not successful, but it did help me narrow it down to these pipelines:

Working: videotestsrc num-buffers=-1 ! video/x-raw ! nvvideoconvert ! nvstreammux0.sink_0 nvstreammux width=640 height=480 batch-size=1 ! nvvideoconvert nvbuf-memory-type=0 ! video/x-raw(memory:NVMM), format=RGBA ! **nvmultistreamtiler** ! buffer-error ! fakesink sync=false

Memory leak: videotestsrc num-buffers=-1 ! video/x-raw ! nvvideoconvert ! nvstreammux0.sink_0 nvstreammux width=640 height=480 batch-size=1 ! nvvideoconvert nvbuf-memory-type=0 ! **video/x-raw(memory:NVMM), format=RGBA** ! nvstreamdemux nvstreamdemux0.src_0 ! nvvideoconvert nvbuf-memory-type=0 ! video/x-raw(memory:NVMM), format=RGBA ! buffer-error ! fakesink sync=false

  • Note: removing the buffer-error element results in no memory leak.

Working: videotestsrc num-buffers=-1 ! video/x-raw ! nvvideoconvert ! nvstreammux0.sink_0 nvstreammux width=1920 height=1080 batch-size=1 ! nvvideoconvert nvbuf-memory-type=0 ! nvstreamdemux name=nvstreamdemux nvstreamdemux.src_0 ! nvvideoconvert nvbuf-memory-type=0 ! video/x-raw(memory:NVMM), format=RGBA ! buffer-error ! fakesink sync=false

Doing it this way (with the extra nvvideoconvert) costs quite a lot of performance, though.

I still find it weird that it works for the first 65k images, albeit with a memory leak.
