Nvbufsurftransform_copy.cpp failed in mem copy

Running the following pipeline:

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 rtspsrc drop-on-latency=True latency=3000 protocols=tcp timeout=0 tcp-timeout=0 teardown-timeout=0 location='rtsp://<user>:<pwd>@<cam-ip>:<port>' ! rtph264depay ! h264parse ! tee ! queue ! decodebin ! tee ! m.sink_0 nvstreammux name=m batch-size=1 sync-inputs=True ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)RGBA' ! queue ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvdsosd process-mode=1 ! nvvideoconvert ! x264enc tune=zerolatency ! h264parse ! qtmux ! filesink location=video.mp4

where rtspsrc streams from a camera and the result is saved to a file. Our complete pipeline includes nvinfer and nvtracker elements between the first nvvideoconvert and nvmultistreamtiler, but since the issue can be reproduced without those two elements, we omitted them here for convenience while keeping both nvvideoconvert elements.

The pipeline usually runs for about ten minutes and then crashes with the following failure:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438: => Failed in mem copy

libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: from element /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0: Unable to draw shapes onto video frame by GPU
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvdsosd/gstnvdsosd.c(645): gst_nvds_osd_transform_ip (): /GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0
Execution ended after 0:01:29.781287790
Setting pipeline to NULL ...

The exact time of the crash is not deterministic; it ranges from about 1 to 10 minutes after startup.

We have tested on Jetson Orin Nano devices with JetPack 6.2 and 6.1; both show the same issue. The Docker image in use is nvcr.io/nvidia/deepstream-l4t:7.1-triton-multiarch, and we installed the x264 elements by running:

apt-get update \
    && apt-get install -y --no-install-recommends \
    libcairo2-dev \
    python3-opencv \
    ffmpeg \
    && apt-get install -y --reinstall --no-install-recommends \
    libx264-dev \
    libx264-163 \
    gstreamer1.0-plugins-ugly \
    libavcodec58 \
    libavutil56 \
    libvpx7 \
    libmp3lame0 \
    libx265-199 \
    libxvidcore4 \
    libmpg123-0 \
    libflac8

Complete system platform info:

  • JetPack: 6.2
  • Release: 5.15.148
  • CUDA: 12.6.68
  • cuDNN: 9.3.0.75
  • TensorRT: 10.3.0.30
  • VPI: 3.2.4
  • Vulkan: 1.3.204

The JetPack 6.1 setup we tested has the same specs as above, apart from the JetPack version itself.

The workaround provided in Failed in mem copy - #24 by yuweiw, using nvvideoconvert compute-hw=1 nvbuf-memory-type=3, didn’t work for our pipeline; it runs into this error:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:3929: => Transformation Failed -1

We need the pipeline to run stably for at least one to two hours so that the saved video file is long enough for our purposes. Please advise on the best approach to fix or work around the issue. Thank you.
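In the meantime, a minimal restart wrapper could keep recordings going between crashes (a sketch under our own assumptions, not an official fix). One caveat it works around: qtmux only finalizes the MP4 (writes the moov atom) on a clean EOS, so a file truncated by a crash is typically unplayable, and each run should write to a fresh file.

```shell
#!/usr/bin/env bash
# Sketch of a restart wrapper (our own workaround idea, not an official fix).
# run_until_success reruns the given command until it exits cleanly,
# then prints how many restarts were needed.
run_until_success() {
    local attempts=0
    until "$@"; do
        attempts=$((attempts + 1))
        echo "pipeline exited with an error, restarting (attempt $attempts)" >&2
        sleep 1
    done
    echo "$attempts"
}

# Hypothetical usage: embed a timestamp in the filename so each run writes
# a new file and a crash does not destroy the previous recording:
#   run_until_success bash -c \
#     'gst-launch-1.0 ... qtmux ! filesink location="video_$(date +%s).mp4"'
```

Switching qtmux to matroskamux (with a .mkv filename) may also leave partially written files playable after a crash, since Matroska does not need a finalization step the way MP4 does.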


Could you try adding an nvvideoconvert between nvmultistreamtiler and nvdsosd and using CPU mode for nvdsosd first? We’ll check this GPU-mode issue for nvdsosd.

We tested the suggested pipeline like this:

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 rtspsrc drop-on-latency=True latency=3000 protocols=tcp timeout=0 tcp-timeout=0 teardown-timeout=0 location='rtsp://<user>:<pwd>@<cam-ip>:<port>' ! rtph264depay ! h264parse ! tee ! queue ! decodebin ! tee ! m.sink_0 nvstreammux name=m batch-size=1 sync-inputs=True ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)RGBA' ! queue ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd process-mode=0 ! nvvideoconvert ! x264enc tune=zerolatency ! h264parse ! qtmux ! filesink location=video.mp4

i.e., with nvvideoconvert ! nvdsosd process-mode=0 inserted.

But we still run into the same error after about 10 minutes.

We’ll check this issue ASAP. Thanks

Hi @qionghu, we’ll fix this issue in a future version. Could you first try the following workaround?
Just set copy-hw=2 on all the nvvideoconvert plugins you use.

USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 rtspsrc \
drop-on-latency=True latency=3000 protocols=tcp timeout=0 \
 tcp-timeout=0 teardown-timeout=0 location='rtsp://<user>:<pwd>@<cam-ip>:<port>' ! \
rtph264depay ! h264parse ! tee ! queue ! decodebin ! \
tee ! m.sink_0 nvstreammux name=m batch-size=1 \
sync-inputs=True ! queue ! nvvideoconvert copy-hw=2 ! \
'video/x-raw(memory:NVMM),format=(string)RGBA' ! queue ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! \
nvdsosd process-mode=1 ! nvvideoconvert copy-hw=2 ! x264enc tune=zerolatency ! \
h264parse ! qtmux ! filesink location=video.mp4

Thank you, we will test it.

We have tested the suggestion. It works when the input RTSP stream comes from a real physical camera, but there are still failures when the stream is created from a local video using this command:

ffmpeg -re -stream_loop -1 -i "$video_file_name" -c copy -f rtsp "rtsp://localhost:$rtsp_port"

run from inside the Docker container bluenviron/mediamtx:1.8.0-ffmpeg.
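For reference, the full local-RTSP setup looks roughly like this (a sketch; the port, file path, and the use of host networking are assumptions based on the commands above — adjust to your environment):

```shell
#!/usr/bin/env bash
# Sketch of the local-RTSP reproduction setup; values are placeholders.
rtsp_port=8554                       # assumed default mediamtx RTSP port
video_file_name=/path/to/sample.mp4  # placeholder path to the local video
rtsp_url="rtsp://localhost:$rtsp_port"
echo "publishing $video_file_name to $rtsp_url"

# 1. Start the RTSP server (host networking so the port is reachable):
#      docker run --rm -d --network host bluenviron/mediamtx:1.8.0-ffmpeg
# 2. Loop the local file into the server without re-encoding:
#      ffmpeg -re -stream_loop -1 -i "$video_file_name" -c copy -f rtsp "$rtsp_url"
```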

Can you help look into the issue further? Thank you.

  1. Will the same error be reported when there is a failure with the ffmpeg rtsp source?
  2. Could you please elaborate on how to create the server with the docker and the pipeline you are using now?

Apologies, the previous claim was incorrect. The error does not come from running that command, but from running a Python script that is expected to be equivalent to the ffmpeg command, with a probe added to extract frames; the script works with a real camera stream but not with the ffmpeg-created RTSP stream. The failure also does not show the same error message: the pipeline cannot stream at all, it gets stuck after initializing and then runs into a segmentation fault.

Should we follow up in a separate topic, or continue in this one to share the script and continue debugging?

You can file a new topic about the new issue. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.