Failed in mem copy for Deepstream python app for 3 USB cameras

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Orin AGX 64GB Developer Kit
• DeepStream Version
7.1
• JetPack Version (valid for Jetson only)
6.2
• TensorRT Version
10.3.0.30
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I am running a Python DeepStream application that performs TensorRT inference with a YOLO11 ONNX model on 3 USB cameras. The application runs quite smoothly for several minutes until this error shows up:

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy

ERROR: [TRT]: IExecutionContext::enqueueV3: Error Code 1: Cask (Cask convolution execution)
ERROR: Failed to enqueue trt inference batch
ERROR: Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:05:20.098908317 275052 0xaaaafa7bfb80 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
Error: gst-stream-error-quark: Failed to queue input batch for inferencing (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1420): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
0:05:20.102308136 275052 0xffff44001c00 ERROR                nvinfer gstnvinfer.cpp:1267:get_converted_buffer:<primary-inference> cudaMemset2DAsync failed with error cudaErrorIllegalAddress while converting buffer
0:05:20.102399430 275052 0xffff44001c00 WARN                 nvinfer gstnvinfer.cpp:1576:gst_nvinfer_process_full_frame:<primary-inference> error: Buffer conversion failed
Unable to set device in gst_nvstreammux_src_collect_buffers

 *** Unable to set device in gst_nvvideoconvert_transform Line 3486
0:05:20.113443861 275052 0xffff440014c0 ERROR         nvvideoconvert gstnvvideoconvert.c:4280:gst_nvvideoconvert_transform: Set Device failed
Unable to set device in gst_nvstreammux_src_collect_buffers
Unable to set device in gst_nvstreammux_src_collect_buffers
Segmentation fault (core dumped)

The kernel message can be found here:

msg.txt (55.8 KB)

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I did observe that with 2 USB cameras it is far more stable and can run for hours without crashing. Also, with 3 cameras, the footage in the nv3dsink is noticeably laggy for the 3rd camera plugged in. However, I did not see any error messages in dmesg complaining that the USB bandwidth is insufficient.

If you don’t use any deepstream elements and only use nv3dsink for rendering with these three cameras, can it run stably?
Just like: v4l2src → nvvideoconvert → nv3dsink
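That suggestion could be sketched like this (a hypothetical helper, not from the original thread; it only builds a `Gst.parse_launch` description for the plain capture/display pipeline, with device and caps to be adjusted for the actual cameras):

```python
# Hypothetical sketch: build a parse_launch string for the suggested
# DeepStream-free test pipeline: v4l2src -> nvvideoconvert -> nv3dsink.
def build_test_pipeline(device="/dev/video0", width=1280, height=720, fps=30):
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM) ! "
        "nv3dsink sync=false"
    )

# On the Jetson this could be launched with:
#   Gst.parse_launch(build_test_pipeline("/dev/video0"))
# or equivalently from the shell with gst-launch-1.0.
```

Running one such pipeline per camera isolates the cameras and USB path from nvstreammux and nvinfer.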

I still think it has something to do with USB bandwidth. You can refer to this topic first.
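As a rough illustration of the bandwidth concern (my own back-of-the-envelope numbers, assuming uncompressed YUY2 capture at 2 bytes per pixel; cameras delivering MJPEG would use far less):

```python
def raw_usb_bandwidth_mb_s(width, height, fps, bytes_per_pixel=2, cameras=1):
    """Approximate payload rate of uncompressed video capture in MB/s.

    bytes_per_pixel=2 assumes a YUY2/UYVY (4:2:2) pixel format.
    """
    return width * height * bytes_per_pixel * fps * cameras / 1e6

# One 1280x720@30 YUY2 stream is ~55 MB/s; three streams are ~166 MB/s.
# USB 2.0 tops out around 35 MB/s of usable payload, and even USB 3
# ports can share a single hub or host controller.
```

If all three cameras end up behind one controller, the aggregate raw rate can easily exceed what the bus sustains, which would fit the third camera looking laggy.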

There are some GPU errors in the kernel log. Did you upgrade the kernel when upgrading to JP6.2? In fact, we have only tested DS-7.1 on JP6.1. You can try to completely re-flash JP6.1.

#!/usr/bin/env python3

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

def bus_call(bus, message, loop):
    """Callback for GStreamer bus messages"""
    t = message.type
    if t == Gst.MessageType.EOS:
        print("End-of-stream")
        loop.quit()
    elif t == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print(f"Error: {err}, {debug}")
        loop.quit()
    return True

def main():
    # Initialize GStreamer
    Gst.init(None)
    
    # Create pipeline
    print("Creating pipeline")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write("Unable to create Pipeline\n")
        sys.exit(1)
    
    # Create v4l2src element
    print("Creating v4l2src")
    src = Gst.ElementFactory.make("v4l2src", "camera-source")
    if not src:
        sys.stderr.write("Unable to create v4l2src\n")
        sys.exit(1)
    
    # Set camera device (modify as needed)
    src.set_property('device', '/dev/video0')
    src.set_property('do-timestamp', True)
    
    # Create caps filter for v4l2src
    caps_filter = Gst.ElementFactory.make("capsfilter", "v4l2-caps")
    if not caps_filter:
        sys.stderr.write("Unable to create v4l2 caps filter\n")
        sys.exit(1)
    
    # Set caps properties - adjust resolution and framerate as needed
    caps_filter.set_property('caps',
        Gst.Caps.from_string("video/x-raw, width=1280, height=720, framerate=30/1"))
    
    # Create nvvideoconvert for GPU memory conversion
    nvconv = Gst.ElementFactory.make("nvvideoconvert", "nvvideoconvert")
    if not nvconv:
        sys.stderr.write("Unable to create nvvideoconvert\n")
        sys.exit(1)
    
    # Set nvvideoconvert properties
    nvconv.set_property('gpu-id', 0)
    
    # Create NVMM caps filter
    nvmm_caps = Gst.ElementFactory.make("capsfilter", "nvmm-caps")
    if not nvmm_caps:
        sys.stderr.write("Unable to create NVMM caps filter\n")
        sys.exit(1)
    
    # Set NVMM caps properties
    nvmm_caps.set_property('caps',
        Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
    
    # Create sink element
    print("Creating nv3dsink")
    sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
    if not sink:
        sys.stderr.write("Unable to create nv3dsink\n")
        sys.exit(1)
    
    # Set sink properties
    sink.set_property('sync', False)
    
    # Add elements to pipeline
    print("Adding elements to pipeline")
    pipeline.add(src)
    pipeline.add(caps_filter)
    pipeline.add(nvconv)
    pipeline.add(nvmm_caps)
    pipeline.add(sink)
    
    # Link elements
    print("Linking elements")
    src.link(caps_filter)
    caps_filter.link(nvconv)
    nvconv.link(nvmm_caps)
    nvmm_caps.link(sink)
    
    # Create an event loop
    loop = GLib.MainLoop()
    
    # Add bus watch
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)
    
    # Start playing
    print("Starting pipeline")
    pipeline.set_state(Gst.State.PLAYING)
    
    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    finally:
        # Cleanup
        pipeline.set_state(Gst.State.NULL)
        print("Pipeline stopped")

if __name__ == '__main__':
    main()

Even this code will fail after a while with the error:
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:341: => Failed in mem copy
I did a fresh install of JP6.2, so I think I’m on the latest kernel.

Hi, first of all, thanks for the reply.
I did a fresh install of JP6.1 with DS 7.1 using SDK Manager.
Following:
DS_install_guide
Followed the installation of the pyds release
and my code looks like:

#!/usr/bin/env python3

import sys
import json
import math
sys.path.append('../')
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from common.bus_call import bus_call

TILED_OUTPUT_WIDTH = 1280
TILED_OUTPUT_HEIGHT = 720

def create_camera_source_bin(camera_config):
    """
    Create a source bin for a USB camera with specified configuration
    """
    print(f"Creating source bin for camera {camera_config['device']}")
    
    # Create source bin elements
    bin_name = "source-bin"
    bin = Gst.Bin.new(bin_name)
    if not bin:
        sys.stderr.write(f" Unable to create source bin {bin_name}\n")
        return None
    
    # Create v4l2src element
    source = Gst.ElementFactory.make("v4l2src", "usb-cam-source")
    if not source:
        sys.stderr.write(f" Unable to create v4l2src for {camera_config['device']}\n")
        return None
    
    # Set source properties
    source.set_property('device', camera_config['device'])
    
    # Create caps filter for v4l2src
    caps_v4l2src = Gst.ElementFactory.make("capsfilter", "v4l2src-caps")
    if not caps_v4l2src:
        sys.stderr.write(" Unable to create v4l2src caps filter\n")
        return None
    
    # Set caps properties
    caps_v4l2src.set_property('caps',
        Gst.Caps.from_string(f"video/x-raw, width={camera_config['width']}, height={camera_config['height']}, framerate={camera_config['framerate']}/1"))
    
    # Create nvvideoconvert for GPU memory conversion
    nvvideoconvert = Gst.ElementFactory.make("nvvideoconvert", "nvvideoconvert")
    if not nvvideoconvert:
        sys.stderr.write(" Unable to create nvvideoconvert\n")
        return None
    
    # Set nvvideoconvert properties
    nvvideoconvert.set_property('gpu-id', 0)
    nvvideoconvert.set_property('src-crop', '0:0:' + str(camera_config['width']) + ':' + str(camera_config['height']))
    
    # Create capsfilter for NVMM
    caps_nvvideoconvert = Gst.ElementFactory.make("capsfilter", "nvmm-caps")
    if not caps_nvvideoconvert:
        sys.stderr.write(" Unable to create nvvideoconvert caps filter\n")
        return None
    
    # Set NVMM caps
    caps_nvvideoconvert.set_property('caps',
        Gst.Caps.from_string("video/x-raw(memory:NVMM)"))
    
    # Add all elements to bin
    bin.add(source)
    bin.add(caps_v4l2src)
    bin.add(nvvideoconvert)
    bin.add(caps_nvvideoconvert)
    
    # Link elements
    source.link(caps_v4l2src)
    caps_v4l2src.link(nvvideoconvert)
    nvvideoconvert.link(caps_nvvideoconvert)
    
    # Create ghost pad
    pad = caps_nvvideoconvert.get_static_pad("src")
    ghost_pad = Gst.GhostPad.new("src", pad)
    bin.add_pad(ghost_pad)
    
    return bin

def main():
    # Initialize GStreamer
    Gst.init(None)
    
    # Create Pipeline
    print("Creating Pipeline")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline\n")
        sys.exit(1)
    
    # Create streammux
    print("Creating streammux")
    streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux\n")
        sys.exit(1)
    
    # Create sink
    print("Creating nv3dsink")
    sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
    if not sink:
        sys.stderr.write(" Unable to create nv3dsink\n")
        sys.exit(1)
    
    # Add elements to pipeline
    pipeline.add(streammux)
    pipeline.add(sink)
    
    # Camera configuration
    camera_config = {
        "device": "/dev/video0",
        "width": 480,
        "height": 270,
        "framerate": 15,
        "format": "raw"
    }
    
    # Configure streammux
    streammux.set_property('width', camera_config['width'])
    streammux.set_property('height', camera_config['height'])
    streammux.set_property('batch-size', 1)  # Only one source
    streammux.set_property('live-source', 1)
    streammux.set_property('batched-push-timeout', 33000)
    streammux.set_property('async-process', 1)  # Enable async processing
    
    # No need for tiler with just one camera
    
    # Set sink properties
    sink.set_property('sync', False)
    
    # Create and link source bin
    print("Creating source bin")
    source_bin = create_camera_source_bin(camera_config)
    if not source_bin:
        sys.stderr.write(" Unable to create source bin\n")
        sys.exit(1)
        
    pipeline.add(source_bin)
    
    # Get sink pad from streammux
    sinkpad = streammux.request_pad_simple("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get sink pad\n")
        sys.exit(1)
        
    # Get source pad from source bin
    srcpad = source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad\n")
        sys.exit(1)
        
    # Link source to streammux
    srcpad.link(sinkpad)
    
    # Link streammux to sink (directly, no tiler needed)
    streammux.link(sink)
    
    # Create an event loop
    loop = GLib.MainLoop()
    
    # Add bus watch
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)
    
    # Start playing
    print("Starting pipeline")
    pipeline.set_state(Gst.State.PLAYING)
    
    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    
    # Cleanup
    pipeline.set_state(Gst.State.NULL)
    print("Pipeline stopped")

if __name__ == '__main__':
    main()

Even just pulling video from the camera and displaying it with nv3dsink hits this mem copy error after a while.

This issue does not seem to be caused by DeepStream. In addition to the link above, you can discuss this issue in the Jetson AGX Orin forum.

There is no update from you for a period, assuming this is not an issue anymore. Hence we are closing this topic. If need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.