Output Multiple RTSP Streams

Please provide complete information as applicable to your setup.
• Hardware Platform == GPU
• DeepStream Version == 6.2
• TensorRT Version == 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version == 525.105.17
• Issue Type == Question

I have a pipeline (see attached graph) that takes in 2 RTSP input streams, does some processing, and then should output the processed feeds with detections/tracks to 2 separate RTSP output streams.

To generate the output streams I use the following method for each stream (1 factory per output stream):

            # PREPARE RTSP OUTPUT SERVER
            rtspfactory = RtspServerFactory()
            _, _, rtsplinkout = rtspfactory.create_and_launch_server(index=ix, rtspport=rtsp_sink_port,
                                                                     sinkport=udp_sink_port,
                                                                     compression=output_compression,
                                                                     mountname=f'ds')
            # PREPARE RTSP OUTPUT SERVER

The code runs successfully and outputs the following information:

Creating RTSPServer with
rtspport: 8554
udpport: 5400
Launching factory with parameters:
(udpsrc name=pay0 port=5400 buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" )

 *** DeepStream: Launched RTSP Streaming at rtsp://127.0.0.1:8554/ds ***
Creating RTSPServer with
rtspport: 8555
udpport: 5401
Launching factory with parameters:
(udpsrc name=pay1 port=5401 buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" )

 *** DeepStream: Launched RTSP Streaming at rtsp://127.0.0.1:8555/ds ***

I am able to view the stream at rtsp://127.0.0.1:8554/ds; however, I am unable to open the stream running on rtsp://127.0.0.1:8555/ds.

Can anyone help me figure out what I am doing wrong/missing?

Edit: Here is the code for my RtspServerFactory

class RtspServerFactory:

    @classmethod
    def create_and_launch_server(cls, index: int, rtspport: int, sinkport: int, compression: str, mountname: str):
        # PREPARE RTSP OUTPUT SERVER
        # rtspport = 8554, 8555, ...
        # sinkport = 5400, 5401, ...
        # compression should be "H265" or "H264"
        # mountname = unique string
        print(f'Creating RTSPServer with\nrtspport: {rtspport}\nudpport: {sinkport}')
        name = f'pay{index}'
        server = GstRtspServer.RTSPServer.new()
        server.props.service = "%d" % rtspport
        server.attach(None)
        factory = GstRtspServer.RTSPMediaFactory.new()
        launch_params = f'(udpsrc name={name} port={sinkport} buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name={compression}, payload=96" )'

        print(f'Launching factory with parameters:\n{launch_params}')
        factory.set_launch(launch_params)
        factory.set_shared(True)
        server.get_mount_points().add_factory(f"/{mountname}", factory)
        rtspoutlink = f'rtsp://127.0.0.1:{rtspport}/{mountname}'
        print(f"\n *** DeepStream: Launched RTSP Streaming at {rtspoutlink} ***\n\n")
        # PREPARE RTSP OUTPUT SERVER
        return server, factory, rtspoutlink
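Since each call to `create_and_launch_server` needs a distinct RTSP port and UDP port (and, ideally, a distinct mount name), a small helper can sanity-check the plan before any servers are created and rule out accidental collisions. This is a hypothetical sketch; `build_port_plan` is not part of the original code, and the base ports simply mirror the comments above:

```python
def build_port_plan(num_streams, rtsp_base=8554, udp_base=5400):
    """Return one (rtspport, udpport, mountname) tuple per stream and
    fail loudly if any port or mount name would collide."""
    plan = [(rtsp_base + ix, udp_base + ix, f"ds{ix}") for ix in range(num_streams)]
    rtsp_ports = [p[0] for p in plan]
    udp_ports = [p[1] for p in plan]
    mounts = [p[2] for p in plan]
    assert len(set(rtsp_ports)) == len(rtsp_ports), "duplicate RTSP port"
    assert len(set(udp_ports)) == len(udp_ports), "duplicate UDP port"
    assert len(set(mounts)) == len(mounts), "duplicate mount name"
    # The two port ranges must not overlap each other either
    assert not set(rtsp_ports) & set(udp_ports), "RTSP/UDP port overlap"
    return plan
```

For two streams this yields (8554, 5400, 'ds0') and (8555, 5401, 'ds1'), matching the ports in the log output above; the mount names here are made unique per stream, as in the later snippet that uses `mountname=f'ds{ix}'`.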

pipeline_graph_DEMUX.pdf (41.9 KB)

To narrow down this issue, please do the following checks:

  1. If you add a probe function on rtppay_1's src pad, does any data come through?
  2. Is TCP port 8555 already taken?
  3. When playing rtsp://127.0.0.1:8555/ds with VLC or ffplay, is there any error information?
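For check 2, one quick way to test whether a TCP port is already taken is simply to try binding it. A minimal sketch using only the Python standard library (the helper name `tcp_port_free` is mine, not part of the original code):

```python
import socket

def tcp_port_free(port, host="127.0.0.1"):
    """Return True if the port can be bound, i.e. nothing else is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(tcp_port_free(8555))
```

On Linux, `ss -tlnp | grep 8555` gives the same answer plus the name of the process holding the port.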
  1. Probing rtppay
    This looks like it indicates that the data is not getting through.
    In the attached probe I'm just calling
    gst_buffer.get_size()

I see one rtppay src pad reporting
27
16
27
16
…repeating

and on the other rtppay src pad I see
824
1063
1112
835
1023
969

  1. In my code, if I start creating the RTSP factories at 8555, I am able to successfully view an output on RTSP port 8555 with UDP port 5001. The RTSP factory created for 8556 and 5002 then cannot be accessed, so it doesn't seem to be a port issue.

  2. Output log from VLC:

-- logger module started --
main: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
main: playlist is empty
main error: buffer deadlock prevented
live555 error: Nothing to play for rtsp://127.0.0.1:8555/ds
satip error: Failed to setup RTSP session
-- logger module stopped --
-- logger module started --
main: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
main: playlist is empty
main error: buffer deadlock prevented
main error: buffer deadlock prevented
main error: buffer deadlock prevented
live555 error: Nothing to play for rtsp://127.0.0.1:8555/ds
satip error: Failed to setup RTSP session
-- logger module stopped --

Here is a snippet of my code where I link the nvstreamdemux with the sink:

        for ix, camera_path in enumerate(camera_config_paths):
            # PREPARE OSD
            if USE_OSD:
                nvdosd_tracks = OSDTrackPipelineBin().create_osdtrack_bin(index=ix)
            else:
                nvdosd_tracks = None
            # PREPARE OSD

            srcpad_ix = nvstrdemux.get_request_pad(f"src_{ix}")
            if not srcpad_ix:
                sys.stderr.write("Unable to get the src pad of nvstrdemux\n")

            # CREATE RTSPoutputBIN
            output_compression = "H264"
            udp_sink_port = 5400+ix
            rtsp_sink_port = 8554+ix
            sink, _ = RtspSinkBin.create_rtsp_sink_bin(index=ix, compression=output_compression,
                                                       udpport=udp_sink_port, enable_probe=True)
            # CREATE RTSPoutputBIN

            if USE_OSD:
                pipeline.add(nvdosd_tracks)
                pipeline.add(sink)  # Add the RTSP sink

                nvdosd_tracks_sink_ix = nvdosd_tracks.get_static_pad("sink")
                if not nvdosd_tracks_sink_ix:
                    sys.stderr.write(f"Unable to get sink pad of {nvdosd_tracks_sink_ix.name} \n")
                srcpad_ix.link(nvdosd_tracks_sink_ix)  # Link the Demux pad with corresponding nvdosd_tracks pad

                nvdosd_tracks_src_ix = nvdosd_tracks.get_static_pad("src")
                if not nvdosd_tracks_src_ix:
                    sys.stderr.write(f"Unable to get src pad of {nvdosd_tracks_src_ix.name} \n")

                sinkpad_ix = sink.get_static_pad("sink")
                if not sinkpad_ix:
                    sys.stderr.write(f"Unable to get sink pad of {sink.name} \n")
                nvdosd_tracks_src_ix.link(sinkpad_ix)  # Link the Demux pad with corresponding the RTSP sink
                print("Linking elements in the Pipeline \n")

            else:
                pipeline.add(sink)  # Add the RTSP sink
                sinkpad_ix = sink.get_static_pad("sink")
                if not sinkpad_ix:
                    sys.stderr.write(f"Unable to get sink pad of {sink.name} \n")
                srcpad_ix.link(sinkpad_ix)  # Link the Demux pad with corresponding the RTSP sink
                print("Linking elements in the Pipeline \n")
            # PIPELINE ADDING AND LINKING

            # PREPARE RTSP OUTPUT SERVER
            rtspfactory = RtspServerFactory()
            _, _, rtsplinkout = rtspfactory.create_and_launch_server(index=ix, rtspport=rtsp_sink_port,
                                                                     sinkport=udp_sink_port,
                                                                     compression=output_compression,
                                                                     mountname=f'ds{ix}')
            # PREPARE RTSP OUTPUT SERVER

Edit: Here is a snippet of the code for creating the RTSP output bin:

class RtspSinkBin:

    @classmethod
    def create_rtsp_sink_bin(cls, index: int, compression: str = "H264", udpport: int = 5400, enable_probe=False, format="RGBA"):
        # SINK BIN
        # Create a sink GstBin to abstract this bin's content from the rest of the pipeline
        bin_name = "sink-bin-%02d" % index
        print(bin_name)
        rtsp_sink_bin = Gst.Bin.new(bin_name)
        if not rtsp_sink_bin:
            logger.error(" Unable to create sink bin \n")

        # INTERNAL BIN ELEMENTS
        print(f"Creating nvvidconv_rtspsink{index} \n ")
        nvvidconv0 = Gst.ElementFactory.make("nvvideoconvert", f"nvvidconv_rtspsink{index}")
        if not nvvidconv0:
            sys.stderr.write(f" Unable to create nvvidconv_rtspsink{index} \n")
        nvvidconv0.set_property("nvbuf-memory-type", int(pyds.NVBUF_MEM_CUDA_UNIFIED))

        caps0 = Gst.ElementFactory.make("capsfilter", f"capsfilt_rtspsink{index}")
        caps0.set_property(
            "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420")
        )

        # Encodes H*** video
        rtpencode = None
        if compression == "H264":
            rtpencode = Gst.ElementFactory.make("nvv4l2h264enc", f"h26encode_{index}")
            debug("Creating H264 encoder")
        elif compression == "H265":
            rtpencode = Gst.ElementFactory.make("nvv4l2h265enc", f"h26encode_{index}")
            debug("Creating H265 encoder")
        if not rtpencode:
            sys.stderr.write(f"ERROR: Unable to create rtpencode_{index}")
            sys.exit(1)
        rtpencode.set_property('bitrate', 4000000)  # TODO: add as variable

        # Make the payload-encode video into RTP packets
        rtppay = None
        if compression == "H264":
            rtppay = Gst.ElementFactory.make("rtph264pay", f"rtppay_{index}")
            debug("Creating H264 rtppay")
        elif compression == "H265":
            rtppay = Gst.ElementFactory.make("rtph265pay", f"rtppay_{index}")
            debug("Creating H265 rtppay")
        if not rtppay:
            sys.stderr.write(f"ERROR: Unable to create rtppay_{index}")
            sys.exit(1)

        # Make the UDP sink | address from 224.0.0.0 to 239.255.255.255
        UDP_MULTICAST_ADDRESS = f'224.224.255.255'
        # UDP_MULTICAST_ADDRESS = f'230.0.0.{index}'
        UDP_MULTICAST_PORT = udpport
        sink = Gst.ElementFactory.make("udpsink", "udpsink")
        if not sink:
            sys.stderr.write(" Unable to create udpsink")
        sink.set_property('host', UDP_MULTICAST_ADDRESS)
        sink.set_property('port', UDP_MULTICAST_PORT)
        sink.set_property('async', False)
        sink.set_property("sync", 1)  # TODO: Is this right?

        # ADD ELEMENTS TO THE BIN
        Gst.Bin.add(rtsp_sink_bin, nvvidconv0)
        Gst.Bin.add(rtsp_sink_bin, caps0)
        Gst.Bin.add(rtsp_sink_bin, rtpencode)
        Gst.Bin.add(rtsp_sink_bin, rtppay)
        Gst.Bin.add(rtsp_sink_bin, sink)

        # LINK ELEMENTS
        nvvidconv0.link(caps0)
        caps0.link(rtpencode)
        rtpencode.link(rtppay)
        rtppay.link(sink)

        # BIN GHOST PAD
        # We need to create a ghost pad for the rtsp sink bin which will act as a sink pad
        rtsp_sink_bin.add_pad(
            Gst.GhostPad.new_no_target("sink", Gst.PadDirection.SINK)  # SINK?
        )
        ghost_pad = rtsp_sink_bin.get_static_pad("sink")
        if not ghost_pad:
            logger.error(" Failed to add ghost pad in source bin \n")
            return None
        ghost_pad.set_target(nvvidconv0.get_static_pad("sink"))

        if enable_probe:
            # ADD PERFPROBE
            rtsp_sink_pad = rtsp_sink_bin.get_static_pad("sink")
            if not rtsp_sink_pad:
                sys.stderr.write(" Unable to get rtsp_sink_pad \n")
            else:
                rtsp_sink_pad.add_probe(Gst.PadProbeType.BUFFER, rtsp_pad_buffer_probe, 0)
            # ADD PERFPROBE

        return rtsp_sink_bin, UDP_MULTICAST_PORT
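The comment in the bin above notes that multicast addresses must fall in the 224.0.0.0 to 239.255.255.255 range. A small sketch using the standard `ipaddress` module to validate the address before handing it to udpsink (this helper is my addition, not part of the original bin):

```python
import ipaddress

def is_valid_multicast(address: str) -> bool:
    """True if the address is an IPv4 multicast address (224.0.0.0/4)."""
    try:
        return ipaddress.IPv4Address(address).is_multicast
    except ipaddress.AddressValueError:
        return False
```

Both addresses in the snippet pass this check, so the multicast range itself is not the problem; it only guards against typos when the address is made configurable.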
  1. Please check why there is no continuous data on rtppay_1. You can keep adding probe functions on upstream plugins, for example nvv4l2decoder_0 or capsfilt_rtspsrc0.
  2. What is the GPU device model? Can you play the two RTSP sources normally? Here is a sample command to play an H265 RTSP source: gst-launch-1.0 rtspsrc location=rtsp://xx ! rtph265depay ! h265parse ! nvv4l2decoder ! nv3dsink
  1. After the demux I can see that the data is passing properly to the RTSP bin sink pads (I can probe the sinks and save the data as numpy arrays to confirm that I am seeing frames from the two RTSP sources).
    The structure of the RTSP bin I created can be seen in the graph PDF I attached in an earlier comment. I have probed each element of the pipeline inside the RTSP bin; it seems that the data passes properly up until the rtph264pay element.

This is the basic probe I set up on the rtph264pay element's src pad:

def rtsp_pad_buffer_probe(pad: Gst.Pad, info, u_data):
    gst_buffer: Gst.Buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        # A pad probe must return a Gst.PadProbeReturn value, not None
        return Gst.PadProbeReturn.OK
    print(gst_buffer.get_size())
    return Gst.PadProbeReturn.OK

When I use the probe to measure the size of the data coming from the rtppay elements, I see the following for one:
27
16
27
16
229
352
435
768
27
16
27
16
27
16
27
16
27
16
27
16
27
16

and this for the other rtppay element:
27
16
27
16
77
104
130
72
131
116
161
784
771
534
1223
994
966
27
16
351
1040
633
528
381
398
569
708
530
404
1311

Edit: Just confirming that I do seem to be able to measure data coming into the rtppay elements on the sink pads.
For example, I see similar results on both when measuring the buffer size:
13271
7295
6771
6044
8249
13129
14845
14498
205871
6288
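One way to make the probe output easier to interpret is to collect the sizes per pad and flag a payloader that emits only tiny packets: RTP buffers of a few dozen bytes typically carry only parameter sets (SPS/PPS), not coded frames. A hypothetical pure-Python helper operating on size lists like the ones printed above (`looks_stalled` and its thresholds are my own, not a GStreamer API):

```python
def looks_stalled(sizes, frame_threshold=200, min_frame_ratio=0.5):
    """Heuristic: a payloader looks 'stalled' if fewer than min_frame_ratio
    of its output buffers are large enough to plausibly be coded frames."""
    if not sizes:
        return True
    frames = sum(1 for s in sizes if s >= frame_threshold)
    return frames / len(sizes) < min_frame_ratio

stalled_pad = [27, 16, 27, 16, 229, 352, 435, 768, 27, 16, 27, 16, 27, 16]
healthy_pad = [824, 1063, 1112, 835, 1023, 969]
print(looks_stalled(stalled_pad))   # → True, mostly tiny buffers
print(looks_stalled(healthy_pad))   # → False
```

Feeding each src pad's collected sizes through this kind of check makes the asymmetry between the two payloaders explicit instead of eyeballing interleaved prints.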

  1. The GPU is a GeForce GTX 1650 Mobile / Max-Q.
    Yes, I can view and stream the RTSPs separately with my code; however, when I try to do both at the same time, I get the issue I am facing.
    The command you shared worked; however, I'm not on Jetson, so I don't have access to nv3dsink and replaced it with fakesink.

Do you have any recommendations for better ways to probe the data to verify things?
It would seem that there is an issue when I create multiple rtppay elements. What are your thoughts?

Do you mean there are two streams of continuous data coming from the rtppays' src pads? On May 27 you said there is no continuous data on rtppay_1's src; please double-check.

Do you mean there are two streams of continuous data coming into the rtppays' sink pads?

  1. Please check if the encoded data is OK: you can replace "nvv4l2h264enc ! rtph264pay ! udpsink" with "nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4".
  2. If the encoded data is OK, please check whether udpsink is OK: set udpsink's host, then check whether the packets are actually sent using a network capture tool.

The data going through the two rtppay elements looks like this when I run the pipeline with two RTSP feeds; each number is the size of the GstBuffer measured with the probe I used (data enters on the sink pad and leaves on the src pad):

13271 → sink[ rtppay_0 ]src → 1040

13129 → sink[ rtppay_1 ]src → 16

I am saying that the data seems to be getting to both of the rtppays; however, on one of them almost no data is getting through.

Running this pipeline works:

gst-launch-1.0 -e rtspsrc location=rtsp://x ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvideoconvert ! capsfilter ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4

I am able to stop the pipeline and then view the video recorded from the RTSP stream.

  1. This is the key point: do you mean rtppay_1 continues to receive data but can't continue to output data, while rtppay_0 can continue to output data?
  2. I mean that you can replace "nvv4l2h264enc ! rtph264pay ! udpsink" with "nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4" in the Python code. If that works, the issue should be related to udpsink; if not, it should be related to encoding.

Can running two pipelines with different sources work at the same time?