Failed in mem copy (DeepStream 7.1)

• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 7.1
• JetPack Version: 6.2
• TensorRT Version: TensorRT v100300
• NVIDIA GPU Driver Version (valid for GPU only): 540.4.0
• Issue Type: bugs

I added uridecodebin to the pipeline in /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test1-rtsp-out/deepstream_test1_rtsp_out.py to support .mp4 input files. It runs fine at first, but after a while I get this error:

python custom_scripts/deepstram_rtsp.py -i test-vids/Full-second-half.mp4
...
Frame Number=4936 Number of Objects=3 Vehicle_count=1 Person_count=2
Frame Number=4937 Number of Objects=3 Vehicle_count=2 Person_count=1
Frame Number=4938 Number of Objects=2 Vehicle_count=1 Person_count=1
Frame Number=4939 Number of Objects=3 Vehicle_count=2 Person_count=1
Frame Number=4940 Number of Objects=3 Vehicle_count=1 Person_count=2
Frame Number=4941 Number of Objects=3 Vehicle_count=1 Person_count=2
Frame Number=4942 Number of Objects=3 Vehicle_count=1 Person_count=2
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform_copy.cpp:438: => Failed in mem copy

ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:02:50.000062138 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:2461:gst_nvinfer_output_loop:<primary-inference> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
Error: gst-stream-error-quark: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(2461): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
0:02:50.002739577 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1990> [UID = 1]: Tried to release an outputBatchID which is already with the context
Frame Number=4943 Number of Objects=0 Vehicle_count=0 Person_count=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
0:02:50.004628411 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:02:50.004764672 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:2423:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason error (-5)
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:02:50.005021289 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:2461:gst_nvinfer_output_loop:<primary-inference> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:02:50.005158190 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1990> [UID = 1]: Tried to release an outputBatchID which is already with the context
Frame Number=4944 Number of Objects=0 Vehicle_count=0 Person_count=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: Failed to synchronize on cuda copy-coplete-event, cuda err_no:700, err_str:cudaErrorIllegalAddress
0:02:50.006478140 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:2461:gst_nvinfer_output_loop:<primary-inference> error: Failed to dequeue output from inferencing. NvDsInferContext error: NVDSINFER_CUDA_ERROR
0:02:50.006616225 1226777 0xaaaacf356b60 WARN                 nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::releaseBatchOutput() <nvdsinfer_context_impl.cpp:1990> [UID = 1]: Tried to release an outputBatchID which is already with the context
Frame Number=4945 Number of Objects=0 Vehicle_count=0 Person_count=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:02:50.010553100 1226777 0xaaaacf3cc400 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:02:50.010757715 1226777 0xaaaacf3cc400 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
Frame Number=4946 Number of Objects=0 Vehicle_count=0 Person_count=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
Frame Number=4947 Number of Objects=0 Vehicle_count=0 Person_count=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
ERROR: Failed to make stream wait on event, cuda err_no:700, err_str:cudaErrorIllegalAddress
ERROR: Preprocessor transform input data failed., nvinfer error:NVDSINFER_CUDA_ERROR
0:02:50.014653084 1226777 0xaaaacf3cc400 WARN                 nvinfer gstnvinfer.cpp:1420:gst_nvinfer_input_queue_loop:<primary-inference> error: Failed to queue input batch for inferencing
Frame Number=4948 Number of Objects=0 Vehicle_count=0 Person_count=0
libnvosd (1386):(ERROR) : cuGraphicsEGLRegisterImage failed : 700 
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79
CUDA Runtime error cudaFreeHost(host_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:78
CUDA Runtime error cudaFree(device_) # an illegal memory access was encountered, code = cudaErrorIllegalAddress [ 700 ] in file /dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvll_osd/memory.hpp:79

This error mainly occurs in videos longer than 30-40 seconds.

How can I fix it?

Here is the full code I am running:

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import argparse
import sys
sys.path.append('../')
import os

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import GLib, Gst, GstRtspServer
from common.platform_info import PlatformInfo
from common.bus_call import bus_call

import pyds

PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
MUXER_BATCH_TIMEOUT_USEC = 33000

def decodebin_pad_added_cb(decodebin, pad, user_data):
    streammux = user_data
    caps = pad.get_current_caps()
    if not caps:
        return
    caps_str = caps.to_string()
    if caps_str.startswith("video/"):
        sinkpad = streammux.request_pad_simple("sink_0")
        if sinkpad:
            pad.link(sinkpad)

def osd_sink_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    # Initialize object counters to 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
    }
    num_rects=0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
			
    return Gst.PadProbeReturn.OK	


def main(args):
    platform_info = PlatformInfo()
    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    
    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Create source depending on input type
    is_mp4 = os.path.splitext(stream_path)[1].lower() == ".mp4"
    if is_mp4:
        print("Creating URI Decode Bin for MP4 input\n")
        source = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
        if not source:
            sys.stderr.write(" Unable to create uridecodebin \n")
        source.set_property("uri", GLib.filename_to_uri(stream_path, None))
        source.connect("pad-added", decodebin_pad_added_cb, streammux)
        h264parser = None
        decoder = None
    else:
        # H264 elementary stream path
        print("Creating Source \n ")
        source = Gst.ElementFactory.make("filesrc", "file-source")
        if not source:
            sys.stderr.write(" Unable to create Source \n")
        
        # Since the data format in the input file is elementary h264 stream,
        # we need a h264parser
        print("Creating H264Parser \n")
        h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
        if not h264parser:
            sys.stderr.write(" Unable to create h264 parser \n")
        
        # Use nvdec_h264 for hardware accelerated decode on GPU
        print("Creating Decoder \n")
        decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
        if not decoder:
            sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")
    # Ensure inference runs on GPU 0
    try:
        pgie.set_property('gpu-id', 0)
    except Exception:
        pass
    
    # Use convertor to convert from NV12 to RGBA as required by nvosd
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    
    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    # Run OSD on GPU to avoid EGL interop issues
    try:
        nvosd.set_property('process-mode', int(pyds.MODE_GPU))
        nvosd.set_property('gpu-id', 0)
    except Exception:
        pass
    nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
    if not nvvidconv_postosd:
        sys.stderr.write(" Unable to create nvvidconv_postosd \n")
    # Keep converters using default NVMM memory on Jetson
    try:
        nvvidconv.set_property('nvbuf-memory-type', 0)
        nvvidconv_postosd.set_property('nvbuf-memory-type', 0)
    except Exception:
        pass
    
    # Create a caps filter
    caps = Gst.ElementFactory.make("capsfilter", "filter")
    if enc_type == 0:
        caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
    else:
        caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))
    
    # Make the encoder
    if codec == "H264":
        if enc_type == 0:
            encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
        else:
            encoder = Gst.ElementFactory.make("x264enc", "encoder")
        print("Creating H264 Encoder")
    elif codec == "H265":
        if enc_type == 0:
            encoder = Gst.ElementFactory.make("nvv4l2h265enc", "encoder")
        else:
            encoder = Gst.ElementFactory.make("x265enc", "encoder")
        print("Creating H265 Encoder")
    if not encoder:
        sys.stderr.write(" Unable to create encoder")
    encoder.set_property('bitrate', bitrate)
    if platform_info.is_integrated_gpu() and enc_type == 0:
        encoder.set_property('preset-level', 1)
        encoder.set_property('insert-sps-pps', 1)
        #encoder.set_property('bufapi-version', 1)

    # Make the payload-encode video into RTP packets
    if codec == "H264":
        rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
        print("Creating H264 rtppay")
    elif codec == "H265":
        rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
        print("Creating H265 rtppay")
    if not rtppay:
        sys.stderr.write(" Unable to create rtppay")
    
    # Make the UDP sink
    updsink_port_num = 5400
    sink = Gst.ElementFactory.make("udpsink", "udpsink")
    if not sink:
        sys.stderr.write(" Unable to create udpsink")
    
    sink.set_property('host', '224.224.255.255')
    sink.set_property('port', updsink_port_num)
    sink.set_property('async', False)
    sink.set_property('sync', 1)
    
    print("Playing file %s " %stream_path)
    if not is_mp4:
        source.set_property('location', stream_path)
    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', MUXER_BATCH_TIMEOUT_USEC)
    streammux.set_property('nvbuf-memory-type', 0)
    try:
        streammux.set_property('gpu_id', 0)
    except Exception:
        pass
    
    pgie.set_property('config-file-path', "/home/hbai/Documents/sky-metrics-ai-deepstream-yolo/dstest1_pgie_config.txt")
    
    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    if not is_mp4:
        pipeline.add(h264parser)
        pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(nvvidconv_postosd)
    pipeline.add(caps)
    pipeline.add(encoder)
    pipeline.add(rtppay)
    pipeline.add(sink)

    # Link the elements together:
    # file-source -> h264-parser -> nvh264-decoder ->
    # nvinfer -> nvvidconv -> nvosd -> nvvidconv_postosd -> 
    # caps -> encoder -> rtppay -> udpsink
    
    print("Linking elements in the Pipeline \n")
    if not is_mp4:
        source.link(h264parser)
        h264parser.link(decoder)
        sinkpad = streammux.request_pad_simple("sink_0")
        if not sinkpad:
            sys.stderr.write(" Unable to get the sink pad of streammux \n")
        
        srcpad = decoder.get_static_pad("src")
        if not srcpad:
            sys.stderr.write(" Unable to get source pad of decoder \n")
        
        srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(nvvidconv)
    nvvidconv.link(nvosd)
    nvosd.link(nvvidconv_postosd)
    nvvidconv_postosd.link(caps)
    caps.link(encoder)
    encoder.link(rtppay)
    rtppay.link(sink)
    
    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)
    
    # Start streaming
    rtsp_port_num = 8560
    
    server = GstRtspServer.RTSPServer.new()
    server.props.service = "%d" % rtsp_port_num
    server.attach(None)
    
    factory = GstRtspServer.RTSPMediaFactory.new()
    factory.set_launch( "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (updsink_port_num, codec))
    factory.set_shared(True)
    server.get_mount_points().add_factory("/live", factory)
    
    print("\n *** DeepStream: Launched RTSP Streaming at rtsp://localhost:%d/live ***\n\n" % rtsp_port_num)
    
    # Let's add a probe to be informed of the generated metadata. We add the
    # probe to the sink pad of the osd element, since by that point the buffer
    # has received all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
    
    # start play back and listen to events
    print("Starting pipeline \n")
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    pipeline.set_state(Gst.State.NULL)

def parse_args():
    parser = argparse.ArgumentParser(description='RTSP Output Sample Application Help ')
    parser.add_argument("-i", "--input",
                  help="Path to input file (.h264 elementary stream or .mp4)", required=True)
    parser.add_argument("-c", "--codec", default="H264",
                  help="RTSP Streaming Codec H264/H265 , default=H264", choices=['H264','H265'])
    parser.add_argument("-b", "--bitrate", default=4000000,
                  help="Set the encoding bitrate ", type=int)
    parser.add_argument("-e", "--enc_type", default=1,
                  help="0:Hardware encoder , 1:Software encoder , default=1", choices=[0, 1], type=int)
    # Check input arguments
    if len(sys.argv)==1:
        parser.print_help(sys.stderr)
        sys.exit(1)
    args = parser.parse_args()
    global codec
    global bitrate
    global stream_path
    global enc_type
    codec = args.codec
    bitrate = args.bitrate
    stream_path = args.input
    enc_type = args.enc_type
    return 0

if __name__ == '__main__':
    parse_args()
    sys.exit(main(sys.argv))

Could you refer to this topic and set copy-hw=2 for all the nvvideoconvert plugins you use?
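A minimal sketch of applying that suggestion to the script above. The helper is plain Python so it can be exercised without GStreamer installed; in the real pipeline you would pass the nvvidconv and nvvidconv_postosd elements. The copy-hw enum values (0 = default, 1 = GPU, 2 = VIC) are my reading of the suggestion; verify with `gst-inspect-1.0 nvvideoconvert` on your JetPack release.

```python
def set_copy_hw(converters, value=2):
    """Set 'copy-hw' on every nvvideoconvert-like element, skipping any
    that do not expose the property (e.g. on dGPU builds). Returns the
    elements that accepted the setting."""
    applied = []
    for elem in converters:
        try:
            elem.set_property("copy-hw", value)
            applied.append(elem)
        except Exception:
            # Property not available on this platform/build; leave default.
            pass
    return applied
```

In the script this would be called once after the converters are created, e.g. `set_copy_hw([nvvidconv, nvvidconv_postosd])`.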

The solution you gave worked, but the stream quality dropped significantly.

It seems to be caused by packet loss of your udpsink. Can you directly display it on the monitor connected to your device?

I can't display it on a monitor because I'm accessing the Jetson over SSH.

I found the reason for the low quality: it happens when two such RTSP streams are created at the same time.

Is this because the device is too weak to run multiple RTSP streams at the same time?

When there are multiple streams, your device is heavily loaded, which can cause a large number of packet losses in your udpsink plugin.

You can try saving the output video locally for comparison.
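One way to do that comparison, sketched under the assumption that the encoder from the script above is kept: swap the rtppay -> udpsink tail for a parse/mux/filesink chain. The helper below only builds a gst-launch-style bin description (element names are standard GStreamer plugins); wiring it in, e.g. via Gst.parse_bin_from_description, is left to the pipeline code.

```python
def file_sink_bin_desc(codec="H264", path="out.mp4"):
    """Return a gst-launch-style description of a file-saving tail that
    replaces rtppay -> udpsink: parser -> MP4 mux -> filesink."""
    parser = "h264parse" if codec == "H264" else "h265parse"
    return f"{parser} ! qtmux ! filesink location={path}"
```

For example, `file_sink_bin_desc("H265", "ref.mp4")` yields `h265parse ! qtmux ! filesink location=ref.mp4`, which you could append after the encoder instead of the RTP/UDP elements to inspect quality without any network in the path.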

Thanks for reply!

Is it possible to use a browser's camera feed with DeepStream?

For example, the user enables the camera in the browser and connects it in the frontend web app, and DeepStream needs to output that connected camera feed as an RTSP stream.

Thanks!

You need to implement this yourself. As long as the output of your in-browser camera is exposed as an RTSP stream, it is supported.
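Since the script above already uses uridecodebin, feeding it an RTSP source is mostly a matter of passing the right URI. A small hedged helper illustrating that (the scheme list is an assumption about which network sources you would use, not an exhaustive set of what uridecodebin supports):

```python
import os
from urllib.parse import quote


def make_source_uri(path_or_url):
    """Return a URI that uridecodebin can consume: pass network URIs
    through unchanged, and convert local paths to file:// URIs."""
    if path_or_url.startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return path_or_url
    return "file://" + quote(os.path.abspath(path_or_url))
```

With this, `source.set_property("uri", make_source_uri(stream_path))` would accept either a local .mp4 or an rtsp:// URL in place of the GLib.filename_to_uri call in the script.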