Process and send multiple streams through an IP network

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU):** Jetson Orin Nano
**• DeepStream Version:** 6.3

Good day, I am trying to process multiple streams on the Jetson Orin and send them through an IP network so that another computer can access the processed streams and view the video that is sent.

I am able to process one video on my Jetson Orin and send it through an IP network. Now I want to process multiple videos and send them through the IP network, but I have not been able to get that right yet. Below is the section of code that creates the server:

    server = GstRtspServer.RTSPServer.new()
    server.props.service = "%d" % 8554
    server.attach(None)

    nvstreamdemux = Gst.ElementFactory.make('nvstreamdemux', 'demux')
    pipeline.add(nvstreamdemux)
    queue.link(nvstreamdemux)
    queue = make_queue()
    pipeline.add(queue)  
    ip_address = get_ip_address()
    for streamID in range(number_of_streams):
        srcPadName = "src_%u" % streamID
        sinkPadName = "sink_%u" % streamID
        demuxsrcpad = nvstreamdemux.get_request_pad(srcPadName)
        streammux = Gst.ElementFactory.make('nvstreammux', f'streammux_{streamID}')    
        streammux.set_property('width', 1920)
        streammux.set_property('height', 1080)    
        streammux.set_property('batch-size', 1)
        streammux.set_property('batched-push-timeout', 40000)
        if is_live:
            streammux.set_property('live-source', 1)
        else:
            streammux.set_property('live-source', 0)

        pipeline.add(streammux)   
        streamMuxSinkPad = streammux.get_request_pad(sinkPadName)
        demuxsrcpad.link(streamMuxSinkPad)
        streammux.link(queue)
        
        nvosd = Gst.ElementFactory.make("nvdsosd", f"onscreendisplay_{streamID}")
        nvosd.set_property('process-mode',OSD_PROCESS_MODE)
        nvosd.set_property('display-text',OSD_DISPLAY_TEXT)

        pipeline.add(nvosd)
        queue.link(nvosd)
        queue = make_queue()
        pipeline.add(queue)
        nvosd.link(queue)

        nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", f"convertor_postosd_{streamID}")
        pipeline.add(nvvidconv_postosd)
        queue.link(nvvidconv_postosd)
        queue = make_queue()
        pipeline.add(queue) 
        nvvidconv_postosd.link(queue)
        if is_aarch64():
            caps1 = Gst.Caps.from_string("video/x-raw, format=I420")
            filter1 = Gst.ElementFactory.make("capsfilter", f"capsfilter_{streamID}")
            filter1.set_property("caps", caps1)
            pipeline.add(filter1)
            queue.link(filter1)
            queue = make_queue()
            pipeline.add(queue)    
            filter1.link(queue)
        if is_aarch64():
            encoder = Gst.ElementFactory.make("x264enc", f'encoder_{streamID}')
            encoder.set_property('speed-preset', 'medium')
            encoder.set_property('tune', 'zerolatency')
            encoder.set_property('pass', 'pass1')
            # encoder.set_property('bitrate', 999999)
        else:
            encoder = Gst.ElementFactory.make("nvv4l2h264enc", f'encoder_{streamID}')
            encoder.set_property('bitrate', 2000000)

        pipeline.add(encoder)
        queue.link(encoder)
        queue = make_queue()
        pipeline.add(queue)    
        encoder.link(queue)

        rtppay = Gst.ElementFactory.make("rtph264pay", f"rtppay_{streamID}")
        pipeline.add(rtppay)
        queue.link(rtppay)
        queue = make_queue()
        pipeline.add(queue)    
        rtppay.link(queue)   
    
        updsink_port_num=5400 
        codec='H264'
    
        sink = Gst.ElementFactory.make("udpsink", f"udpsink_{streamID}")
        sink.set_property('host', '224.224.255.255')  # multicast group address
        sink.set_property('port', updsink_port_num)
        sink.set_property('async', False)
        sink.set_property('sync', 1)
        sink.set_property("qos", 0)
        pipeline.add(sink)
        queue.link(sink)

        factory = GstRtspServer.RTSPMediaFactory.new()
        factory.set_launch( "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (updsink_port_num, codec))        
        factory.set_shared(True)
        server.get_mount_points().add_factory(f"/stream-{streamID}", factory)
        print(f"Stream {streamID} is reachable with this URL: rtsp://{ip_address}:{updsink_port_num}/stream-{streamID}")
    print('Then streaming service should be set up within the next 5 seconds')

When I am processing only one stream, I am able to access it through this URL: "rtsp://10.0.0.34:5400/stream-0". But when I process two streams, I am not able to access either of them…

The streams are still being processed, but I am not able to access them, so I suspect that the problem might be with how the server is created…
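As a quick aside, below is a minimal sketch of how another machine could open that URL with GStreamer's Python bindings. This is an illustration only (any RTSP client such as VLC or ffplay would also work), and the URL should be whatever your script actually prints:

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Example URL only: use the one printed by the streaming script.
    player = Gst.parse_launch('playbin uri=rtsp://10.0.0.34:5400/stream-0')
    player.set_state(Gst.State.PLAYING)

    loop = GLib.MainLoop()
    try:
        loop.run()  # keep playing until interrupted
    except KeyboardInterrupt:
        pass
    finally:
        player.set_state(Gst.State.NULL)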

Here is the code for the “get_ip_address” function:

    import socket

    def get_ip_address():
        # Create a UDP socket object
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        # "Connect" to an external address (no data is actually sent over UDP)
        s.connect(('8.8.8.8', 80))

        # The local end of that socket is the outgoing interface's IP address
        ip_address = s.getsockname()[0]

        s.close()
        return ip_address

Please refer to this sample code:
1.py (19.8 KB)

I made the adjustments based on the Python file that you sent me; here is the result:

elif PROCESSING_SETTING == 2:  # Process and send the data out via an IP network
    nvstreamdemux = Gst.ElementFactory.make('nvstreamdemux', 'demux')
    pipeline.add(nvstreamdemux)
    queue.link(nvstreamdemux)
    queue = make_queue()
    pipeline.add(queue)  
    for streamID in range(number_of_streams):
        srcPadName = "src_%u" % streamID
        sinkPadName = "sink_%u" % streamID
        demuxsrcpad = nvstreamdemux.get_request_pad(srcPadName)
        streammux = Gst.ElementFactory.make('nvstreammux', f'streammux_{streamID}')    
        streammux.set_property('width', 1920)
        streammux.set_property('height', 1080)    
        streammux.set_property('batch-size', 1)
        streammux.set_property('batched-push-timeout', 40000)
        if is_live:
            streammux.set_property('live-source', 1)
        else:
            streammux.set_property('live-source', 0)

        pipeline.add(streammux)   
        streamMuxSinkPad = streammux.get_request_pad(sinkPadName)
        demuxsrcpad.link(streamMuxSinkPad)
        streammux.link(queue)
        
        nvosd = Gst.ElementFactory.make("nvdsosd", f"onscreendisplay_{streamID}")
        nvosd.set_property('process-mode',OSD_PROCESS_MODE)
        nvosd.set_property('display-text',OSD_DISPLAY_TEXT)

        pipeline.add(nvosd)
        queue.link(nvosd)
        queue = make_queue()
        pipeline.add(queue)
        nvosd.link(queue)

        nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", f"convertor_postosd_{streamID}")
        pipeline.add(nvvidconv_postosd)
        queue.link(nvvidconv_postosd)
        queue = make_queue()
        pipeline.add(queue) 
        nvvidconv_postosd.link(queue)
        if is_aarch64():
            caps1 = Gst.Caps.from_string("video/x-raw, format=I420")
            filter1 = Gst.ElementFactory.make("capsfilter", f"capsfilter_{streamID}")
            filter1.set_property("caps", caps1)
            pipeline.add(filter1)
            queue.link(filter1)
            queue = make_queue()
            pipeline.add(queue)    
            filter1.link(queue)
        if is_aarch64():
            encoder = Gst.ElementFactory.make("x264enc", f'encoder_{streamID}')
            encoder.set_property('speed-preset', 'medium')
            encoder.set_property('tune', 'zerolatency')
            encoder.set_property('pass', 'pass1')
            # encoder.set_property('bitrate', 999999)
        else:
            encoder = Gst.ElementFactory.make("nvv4l2h264enc", f'encoder_{streamID}')
            encoder.set_property('bitrate', 2000000)

        pipeline.add(encoder)
        queue.link(encoder)
        queue = make_queue()
        pipeline.add(queue)    
        encoder.link(queue)

        rtppay = Gst.ElementFactory.make("rtph264pay", f"rtppay_{streamID}")
        pipeline.add(rtppay)
        queue.link(rtppay)
        queue = make_queue()
        pipeline.add(queue)    
        rtppay.link(queue)   
    
        updsink_port_num=5400 + streamID  
        rtsp_port_num=8554 + streamID  
        codec='H264'
    
        sink = Gst.ElementFactory.make("udpsink", f"udpsink_{streamID}")
        sink.set_property('host', '224.224.255.255')  # multicast group address
        sink.set_property('port', updsink_port_num)
        sink.set_property('async', False)
        sink.set_property('sync', 1)
        sink.set_property("qos", 0)
        pipeline.add(sink)
        queue.link(sink)

        server = GstRtspServer.RTSPServer.new()
        server.props.service = "%d" % rtsp_port_num
        server.attach(None)
        factory = GstRtspServer.RTSPMediaFactory.new()
        factory.set_launch( "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (updsink_port_num, codec))        
        factory.set_shared(True)
        server.get_mount_points().add_factory(f"/stream-{streamID}", factory)
        ip_address = get_ip_address()
        print(f"Stream {streamID} is reachable with this URL: rtsp://{ip_address}:{rtsp_port_num}/stream-{streamID}")
    print('Then streaming service should be set up within the next 5 seconds')
    return pipeline, [streammuxOG], [nvosd], None, ALL_BINS_v2 

When I run the Python script with two streams, I get this error:

Error: gst-stream-error-quark: Internal data stream error. (1): gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline0/GstQueue:queue3:
streaming stopped, reason not-negotiated (-4)

Is it maybe because the “server = GstRtspServer.RTSPServer.new()” and “factory = GstRtspServer.RTSPMediaFactory.new()” objects are not given specific names, so that when there are two streams to send, the second server and factory objects overwrite the first ones? Here, for example, is an object that is assigned a specific name: “sink_output = Gst.ElementFactory.make("fakesink", f'sink_{0}')”
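For reference, here is a sketch of how all the mount points could hang off a single RTSP server created once outside the loop, with the factories kept in a list so nothing is rebound on each iteration. It is adapted from the code above and not claimed to be the fix; it assumes the same number_of_streams variable and a base UDP port of 5400:

    # Sketch only: one RTSPServer, one mount point per stream.
    server = GstRtspServer.RTSPServer.new()
    server.props.service = "8554"
    server.attach(None)

    factories = []
    for streamID in range(number_of_streams):
        udp_port = 5400 + streamID
        factory = GstRtspServer.RTSPMediaFactory.new()
        factory.set_launch(
            "( udpsrc name=pay0 port=%d buffer-size=524288 "
            "caps=\"application/x-rtp, media=video, clock-rate=90000, "
            "encoding-name=(string)H264, payload=96\" )" % udp_port
        )
        factory.set_shared(True)
        server.get_mount_points().add_factory(f"/stream-{streamID}", factory)
        factories.append(factory)  # keep a Python reference to each factory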

Thanks for sharing! I modified 1.py to 2.py and did not observe that “not-negotiated (-4)” error. Please refer to 2.py (19.1 KB).
Please copy it to the deepstream-demux-multi-in-multi-out sample. The test command line is: python3 2.py -i rtsp://xx rtsp://xx

The script does not seem to be working. I moved the script to the deepstream-rtsp-in-rtsp-out sample directory and ran it. Only a few frames are processed, then it all stops. I added a print statement in the probe to confirm that the streams are not being processed completely. Here is the output on my terminal:

    features= <Gst.CapsFeatures object at 0xffff36cd4be0 (GstCapsFeatures at 0xfffee8002360)>
    features= <Gst.CapsFeatures object at 0xffff36cd4d00 (GstCapsFeatures at 0xfffef401ca80)>
    running
    Frame Number= 0 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    Frame Number= 0 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    running
    Frame Number= 1 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    Frame Number= 1 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    running
    Frame Number= 2 Number of Objects= 6 Vehicle_count= 4 Person_count= 1
    Frame Number= 2 Number of Objects= 6 Vehicle_count= 4 Person_count= 1
    running
    Frame Number= 3 Number of Objects= 6 Vehicle_count= 4 Person_count= 1
    Frame Number= 3 Number of Objects= 6 Vehicle_count= 4 Person_count= 1
    running
    Frame Number= 4 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    Frame Number= 4 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    running
    Frame Number= 5 Number of Objects= 6 Vehicle_count= 4 Person_count= 1
    Frame Number= 5 Number of Objects= 6 Vehicle_count= 4 Person_count= 1
    running
    Frame Number= 6 Number of Objects= 5 Vehicle_count= 4 Person_count= 1
    Frame Number= 6 Number of Objects= 5 Vehicle_count= 4 Person_count= 1

    **PERF: {'stream0': 6.21, 'stream1': 6.21}

    **PERF: {'stream0': 0.0, 'stream1': 0.0}

    **PERF: {'stream0': 0.0, 'stream1': 0.0}

Sorry, I made a mistake earlier. I thought the streams were actually being processed, but on closer analysis I saw that they were not.

So to clarify: when I send one stream through the IP network it works, but when I try to process two or more streams and send them through the IP network, not even one of the streams is processed at all.
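As a general GStreamer debugging aid (not something used in this thread), dumping a DOT graph of the pipeline once it stalls shows exactly which pad failed to negotiate. A minimal sketch, assuming the GST_DEBUG_DUMP_DOT_DIR environment variable is exported before the script starts:

    # Call this from the bus error handler or a timer once the pipeline stalls.
    # Requires: export GST_DEBUG_DUMP_DOT_DIR=/tmp before running the script.
    # Render the resulting file with: dot -Tpng /tmp/stalled-pipeline.dot -o pipeline.png
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "stalled-pipeline")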

Testing two RTSP sources, I can't reproduce this issue; please refer to log-1201.txt (26.1 KB).
Maybe the application was generating a new TensorRT engine for the two sources. If you still can't play after the engines have been created, please share the running log.

My input is two videos that are stored locally on my device. Here are the logs from when I use this command:

    python deepstream_test1_rtsp_in_rtsp_out.py -i file:///home/aizatron/Deepstream_Resources/Videos/sampleqHD.mp4 file:///home/aizatron/Deepstream_Resources/Videos/sampleqHD.mp4

And here are the logs:
log.txt (5.1 KB)

Here is something else that I have noted. This is my current pipeline structure:

    nvstreamdemux = Gst.ElementFactory.make('nvstreamdemux', 'demux')
    pipeline.add(nvstreamdemux)
    queue.link(nvstreamdemux)
    queue = make_queue()
    pipeline.add(queue)  
    for streamID in range(number_of_streams):
        srcPadName = "src_%u" % streamID
        sinkPadName = "sink_%u" % streamID
        demuxsrcpad = nvstreamdemux.get_request_pad(srcPadName)
        streammux = Gst.ElementFactory.make('nvstreammux', f'streammux_{streamID}')    
        streammux.set_property('width', 1920)
        streammux.set_property('height', 1080)    
        streammux.set_property('batch-size', 1)
        streammux.set_property('batched-push-timeout', 40000)
        if is_live:
            streammux.set_property('live-source', 1)
        else:
            streammux.set_property('live-source', 0)

        pipeline.add(streammux)   
        streamMuxSinkPad = streammux.get_request_pad(sinkPadName)
        demuxsrcpad.link(streamMuxSinkPad)
        streammux.link(queue)
        
        nvosd = Gst.ElementFactory.make("nvdsosd", f"onscreendisplay_{streamID}")
        nvosd.set_property('process-mode',OSD_PROCESS_MODE)
        nvosd.set_property('display-text',OSD_DISPLAY_TEXT)

        pipeline.add(nvosd)
        queue.link(nvosd)
        queue = make_queue()
        pipeline.add(queue)
        nvosd.link(queue)

        nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", f"convertor_postosd_{streamID}")
        pipeline.add(nvvidconv_postosd)
        queue.link(nvvidconv_postosd)
        queue = make_queue()
        pipeline.add(queue) 
        nvvidconv_postosd.link(queue)

        if is_aarch64():
            caps1 = Gst.Caps.from_string("video/x-raw, format=I420")
            filter1 = Gst.ElementFactory.make("capsfilter", f"capsfilter_{streamID}")
            filter1.set_property("caps", caps1)
            pipeline.add(filter1)
            queue.link(filter1)
            queue = make_queue()
            pipeline.add(queue)    
            filter1.link(queue)
        if is_aarch64():
            encoder = Gst.ElementFactory.make("x264enc", f'encoder_{streamID}')
            encoder.set_property('speed-preset', 'medium')
            encoder.set_property('tune', 'zerolatency')
            encoder.set_property('pass', 'pass1')
            # encoder.set_property('bitrate', 999999)
        else:
            encoder = Gst.ElementFactory.make("nvv4l2h264enc", f'encoder_{streamID}')
            encoder.set_property('bitrate', 2000000)

        pipeline.add(encoder)
        queue.link(encoder)
        queue = make_queue()
        pipeline.add(queue)    
        encoder.link(queue)

        rtppay = Gst.ElementFactory.make("rtph264pay", f"rtppay_{streamID}")
        pipeline.add(rtppay)
        queue.link(rtppay)
        queue = make_queue()
        pipeline.add(queue)    
        rtppay.link(queue)   
    
        updsink_port_num=5400 + streamID  
        rtsp_port_num=8554 + streamID  
        codec='H264'
    
        sink = Gst.ElementFactory.make("udpsink", f"udpsink_{streamID}")
        sink.set_property('host', '224.224.255.255')  # multicast group address
        sink.set_property('port', updsink_port_num)
        sink.set_property('async', False)
        sink.set_property('sync', 1)
        sink.set_property("qos", 0)

        pipeline.add(sink)
        queue.link(sink)
        
        # SERVER_CONTAINER[streamID] = GstRtspServer.RTSPServer.new()
        # SERVER_CONTAINER[streamID].props.service = "%d" % rtsp_port_num
        # SERVER_CONTAINER[streamID].attach(None)
        
        # FACTORY_CONTAINER[streamID] = GstRtspServer.RTSPMediaFactory.new()
        # FACTORY_CONTAINER[streamID].set_launch( "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (updsink_port_num, codec))        
        # FACTORY_CONTAINER[streamID].set_shared(True)
        # SERVER_CONTAINER[streamID].get_mount_points().add_factory(f"/stream-{streamID}", FACTORY_CONTAINER[streamID])
        ip_address = get_ip_address()
        print(f"Stream {streamID} is reachable with this URL: rtsp://{ip_address}:{rtsp_port_num}/stream-{streamID}")
    print('Then streaming service should be set up within the next 5 seconds')
    return pipeline, [streammuxOG], [nvosd], None, ALL_BINS_v2 

Please note that the section that creates the server is commented out. I commented it out because I suspected it was the section causing the problems, but the same problem is still there. (The pipeline is able to process one stream, but cannot process more than one.) This is weird because I do not see any reason why this would be a problem… Any advice?

Could you provide the whole simplified code?
Or you could modify deepstream-rtsp-in-rtsp-out by adding the nvstreammux plugin and sending out multiple udpsinks.

Here is the structure of the entire pipeline:

[pipeline structure diagram]

I am going to try to modify deepstream-rtsp-in-rtsp-out to add the nvstreammux plugin as you suggested…

I suspect that the problem might be due to the “rtph264pay” plugin… When I made the sink element a “fakesink” plugin, the same problem was still present, so the problem is not with the “udpsink” element.
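For clarity, the substitution mentioned above looks roughly like this (a sketch that reuses the queue and streamID variables from the loop above):

    # Sketch: replace the per-stream udpsink with a fakesink to check whether
    # the branch itself negotiates, independent of the RTP/UDP elements.
    sink = Gst.ElementFactory.make("fakesink", f"sink_{streamID}")
    sink.set_property("sync", False)  # do not throttle on the pipeline clock
    pipeline.add(sink)
    queue.link(sink)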

Please refer to this code and log; it works:
3.py (19.1 KB)
log-1204.txt (14.7 KB)

I tested this script and it works well. It seems like the streammux element in my pipeline is the problem (the streammux element that is created dynamically based on the number of streams to be processed). I adjusted my pipeline and removed the streammux elements that are created dynamically, and then the streams could be processed. The problem, however, is that my probes need those streammux elements to do some of the processing, so I need to have them there. I am unsure why those streammux elements cause problems, though…
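For reference, the variant without the per-stream nvstreammux links the nvstreamdemux request pad straight into each branch, roughly like this (a sketch reusing the names from the code above):

    # Sketch only: drop the per-stream nvstreammux and feed the demuxed stream
    # directly into the branch queue, then continue with nvdsosd as before.
    demuxsrcpad = nvstreamdemux.get_request_pad("src_%u" % streamID)
    branch_queue = make_queue()
    pipeline.add(branch_queue)
    demuxsrcpad.link(branch_queue.get_static_pad("sink"))
    branch_queue.link(nvosd)  # nvdsosd -> nvvideoconvert -> capsfilter -> encoder -> ...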

No worries, somehow I got it working. I am not sure what I changed, but I am able to access the streams that are sent. Thank you very much for the assistance.
