Failed in mem copy with DeepStream 7.1

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.1 (also tried with 6.2)
• TensorRT Version: 10.3.0.26 (the one available in the deepstream docker image)
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): Bugs

Hi!

I'm running into an issue that has already been reported on this forum, but that topic is closed and the solution provided there doesn't work on Jetson devices.

The topic I'm referring to is "Failed in mem copy", opened by kranok.

The issue is that when running a GStreamer pipeline with DeepStream 7.1, a CUDA error pops up intermittently.

The initial error log is:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
2025-08-26 10:45:29,256:live_inference:2634:ERROR:Gstreamer.Inference Pipeline: error gst-stream-error-quark: Failed to queue input batch for inferencing (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:primary. 

Sometimes it's this error instead:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cuda Driver (an illegal memory access was encountered)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
2025-08-26 10:52:37,520:live_inference:5123:ERROR:Gstreamer.Inference Pipeline: error gst-stream-error-quark: Failed to queue input batch for inferencing (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:primary.

But it's probably the same root cause, surfacing at a different step in the pipeline.

The pipeline we are running is quite dense, but the main inference part can be simplified like this:

gst-launch-1.0 filesrc location=/usr/app/assets/1647338042.694524_4_15.mp4 ! qtdemux name=demux ! h264parse ! nvv4l2decoder ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! dsmuxer.sink_0 \
  nvstreammux name=dsmuxer batch-size=1 width=2048 height=1536 buffer-pool-size=200 ! nvvideoconvert ! \
  queue name=infer_queue max-size-buffers=10 max-size-time=0 max-size-bytes=0 ! \
  nvinfer config-file-path=/data/v2/16c76fad788743bd815f5965d7da17cc/models/primary/1.12.0_640x480/config.txt name=primary ! \
  postprocessor name=postprocess1 ! mussp name=mussp_tracker ! fakesink sync=false

There are some home-made GStreamer elements in it, but even when running with only off-the-shelf elements, the issue still appears randomly. The production pipeline runs with a camera. It can sometimes work for 1h30 before failing, or fail after a few seconds. With DeepStream 7.0, the pipeline runs for hours without any issues.

In that topic, you mentioned using

nvvideoconvert compute-hw=1 nvbuf-memory-type=3

But this is not supported on the Jetson:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4543: => Surface type not supported for transformation NVBUF_MEM_CUDA_UNIFIED

I never had any issue using DeepStream 7.0 or 6.3. I tried both JetPack 6.2 and 6.1.

Is there any solution for this, or should I wait for a new release?

Thanks for your help.

Do you mean you can reproduce the issue with this pipeline?

Can the issue be reproduced when the mussp plugin is removed from the pipeline?

Yes, we can reproduce the issue without the mussp element, for example:

gst-launch-1.0 filesrc location=/usr/app/assets/1647338042.694524_4_15.mp4 ! qtdemux name=demux ! h264parse ! nvv4l2decoder ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! dsmuxer.sink_0 \
  nvstreammux name=dsmuxer batch-size=1 width=2048 height=1536 buffer-pool-size=200 ! nvvideoconvert ! \
  queue name=infer_queue max-size-buffers=10 max-size-time=0 max-size-bytes=0 ! \
  nvinfer config-file-path=/data/v2/16c76fad788743bd815f5965d7da17cc/models/primary/1.12.0_640x480/config.txt name=primary ! \
  fakesink

It doesn't happen as frequently with this pipeline; it's more frequent when streaming directly from a camera. I also realized that if the video is too short, the error doesn't have time to appear.
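As a side note, to give a file-based repro enough running time, the input file could be looped instead of using a very long recording. A possible sketch, which I haven't verified on this setup, using multifilesrc's loop property (the asset and model paths are from our environment):

```shell
# Loop the same mp4 indefinitely so the pipeline runs long enough
# for the intermittent "Failed in mem copy" error to show up.
gst-launch-1.0 multifilesrc location=/usr/app/assets/1647338042.694524_4_15.mp4 loop=true ! \
  qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! \
  dsmuxer.sink_0 nvstreammux name=dsmuxer batch-size=1 width=2048 height=1536 ! \
  nvvideoconvert ! \
  nvinfer config-file-path=/data/v2/16c76fad788743bd815f5965d7da17cc/models/primary/1.12.0_640x480/config.txt name=primary ! \
  fakesink sync=false
```

Timestamps may not be continuous across loop boundaries with this approach, but it should be good enough to keep the pipeline busy.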

Can you tell us what kind of camera you use? Can you post the pipeline with the camera too?

Yes, we use Lucid Vision cameras.

But, to clarify, the problem also occurs when using video files, so I wouldn't go down that road. I was simply pointing out that with a camera, since the pipeline never stops, there is more time for the error to appear. With a long video, the problem also occurs.

The problem occurs systematically, but the timing varies greatly: sometimes after 10 seconds, sometimes after 1 hour. On the same pipeline, the error always occurs within 2 hours.

There is a known issue with JetPack 6.1/6.2.

The workaround is to set the "compute-hw=1" property on nvvideoconvert; the "nvbuf-memory-type=3" setting is not correct.
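Applied to the simplified pipeline posted above, that would look roughly like this (a sketch, not tested against your assets; it assumes "compute-hw=1", which selects the GPU as the compute engine, should be set on both nvvideoconvert instances, and leaves nvbuf-memory-type at its default):

```shell
gst-launch-1.0 filesrc location=/usr/app/assets/1647338042.694524_4_15.mp4 ! qtdemux name=demux ! \
  h264parse ! nvv4l2decoder ! nvvideoconvert compute-hw=1 ! 'video/x-raw(memory:NVMM)' ! \
  dsmuxer.sink_0 nvstreammux name=dsmuxer batch-size=1 width=2048 height=1536 buffer-pool-size=200 ! \
  nvvideoconvert compute-hw=1 ! queue name=infer_queue max-size-buffers=10 max-size-time=0 max-size-bytes=0 ! \
  nvinfer config-file-path=/data/v2/16c76fad788743bd815f5965d7da17cc/models/primary/1.12.0_640x480/config.txt name=primary ! \
  fakesink sync=false
```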

Ok I’ll try that and update the topic, thanks.

So far, it fixes the issue! It has been running for 11 hours straight.
Thanks.

