Output Multiple RTSP Streams

Please provide complete information as applicable to your setup.
• Hardware Platform == GPU
• DeepStream Version == 6.2
• TensorRT Version == 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version == 525.105.17
• Issue Type == Question

I have a pipeline (see attached graph) that takes in 2 RTSP input streams, does some processing, and then should output the processed feeds with detections/tracks to 2 separate RTSP output streams.

To generate the output streams I use the following method for each stream (1 factory per output stream):

            # PREPARE RTSP OUTPUT SERVER
            rtspfactory = RtspServerFactory()
            _, _, rtsplinkout = rtspfactory.create_and_launch_server(index=ix, rtspport=rtsp_sink_port,
                                                                     sinkport=udp_sink_port,
                                                                     compression=output_compression,
                                                                     mountname=f'ds')
            # PREPARE RTSP OUTPUT SERVER

The code runs successfully and outputs the following information:

Creating RTSPServer with
rtspport: 8554
udpport: 5400
Launching factory with parameters:
(udpsrc name=pay0 port=5400 buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" )

 *** DeepStream: Launched RTSP Streaming at rtsp://127.0.0.1:8554/ds ***
Creating RTSPServer with
rtspport: 8555
udpport: 5401
Launching factory with parameters:
(udpsrc name=pay1 port=5401 buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" )

 *** DeepStream: Launched RTSP Streaming at rtsp://127.0.0.1:8555/ds ***

I am able to view the stream at rtsp://127.0.0.1:8554/ds; however, I am unable to open the stream running at rtsp://127.0.0.1:8555/ds.

Can anyone help me figure out what I am doing wrong/missing?

Edit: Here is the code for my RtspServerFactory

class RtspServerFactory:

    @classmethod
    def create_and_launch_server(cls, index: int, rtspport: int, sinkport: int, compression: str, mountname: str):
        # PREPARE RTSP OUTPUT SERVER
        # rtspport = 8554, 8555, ...
        # sinkport = 5400, 5401, ...
        # compression should be "H265" or "H264"
        # mountname = unique string
        print(f'Creating RTSPServer with\nrtspport: {rtspport}\nudpport: {sinkport}')
        name = f'pay{index}'
        server = GstRtspServer.RTSPServer.new()
        server.props.service = "%d" % rtspport
        server.attach(None)
        factory = GstRtspServer.RTSPMediaFactory.new()
        launch_params = f'(udpsrc name={name} port={sinkport} buffer-size=524288 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name={compression}, payload=96" )'

        print(f'Launching factory with parameters:\n{launch_params}')
        factory.set_launch(launch_params)
        factory.set_shared(True)
        server.get_mount_points().add_factory(f"/{mountname}", factory)
        rtspoutlink = f'rtsp://127.0.0.1:{rtspport}/{mountname}'
        print(f"\n *** DeepStream: Launched RTSP Streaming at {rtspoutlink} ***\n\n")
        # PREPARE RTSP OUTPUT SERVER
        return server, factory, rtspoutlink
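With one server per stream it is easy to collide on ports or mount names (note that the first snippet above mounts both streams at 'ds'). A small helper along these lines keeps the convention in one place; this is purely illustrative, the function name and defaults are mine, following the 8554+ix / 5400+ix convention described in the factory's comments:

```python
# Illustrative helper (not part of the original code): derive the per-stream
# RTSP port, UDP sink port, and mount name following the 8554+index /
# 5400+index convention the factory's docstring comments describe.
def stream_endpoints(index: int, rtsp_base: int = 8554, udp_base: int = 5400,
                     mount_prefix: str = "ds"):
    rtspport = rtsp_base + index
    sinkport = udp_base + index
    mountname = f"{mount_prefix}{index}"  # unique mount per stream
    return rtspport, sinkport, mountname
```

For example, `stream_endpoints(1)` returns `(8555, 5401, "ds1")`, which would then be passed to `create_and_launch_server`.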

pipeline_graph_DEMUX.pdf (41.9 KB)

To narrow down this issue, please do the following checks:

  1. If you add a probe function on rtppay_1's src pad, does any data come through?
  2. Is TCP port 8555 already taken?
  3. When you play rtsp://127.0.0.1:8555/ds with VLC or ffplay, is there any error information?
  1. Probing rtppay
    This looks like it is indicating that the data is not getting through.
    In the attached probe I’m just calling
    gst_buffer.get_size()

I see one rtppay src pad reporting
27
16
27
16
…repeating

and on the other rtppay src pad I see
824
1063
1112
835
1023
969

  1. In my code, if I start the RTSP factory creation at 8555, I am able to successfully view an output on RTSP port 8555 with UDP port 5001. The RTSP factory created next for 8556 and 5002 then cannot be accessed. So it doesn’t seem like a port issue.

  2. Output log from VLC:

-- logger module started --
main: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
main: playlist is empty
main error: buffer deadlock prevented
live555 error: Nothing to play for rtsp://127.0.0.1:8555/ds
satip error: Failed to setup RTSP session
-- logger module stopped --
-- logger module started --
main: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
main: playlist is empty
main error: buffer deadlock prevented
main error: buffer deadlock prevented
main error: buffer deadlock prevented
live555 error: Nothing to play for rtsp://127.0.0.1:8555/ds
satip error: Failed to setup RTSP session
-- logger module stopped --

Here is a snippet of my code where I link the nvstreamdemux with the sink:

        for ix, camera_path in enumerate(camera_config_paths):
            # PREPARE OSD
            if USE_OSD:
                nvdosd_tracks = OSDTrackPipelineBin().create_osdtrack_bin(index=ix)
            else:
                nvdosd_tracks = None
            # PREPARE OSD

            srcpad_ix = nvstrdemux.get_request_pad(f"src_{ix}")
            if not srcpad_ix:
                sys.stderr.write("Unable to get the src pad of nvstrdemux\n")

            # CREATE RTSPoutputBIN
            output_compression = "H264"
            udp_sink_port = 5400+ix
            rtsp_sink_port = 8554+ix
            sink, _ = RtspSinkBin.create_rtsp_sink_bin(index=ix, compression=output_compression,
                                                       udpport=udp_sink_port, enable_probe=True)
            # CREATE RTSPoutputBIN

            if USE_OSD:
                pipeline.add(nvdosd_tracks)
                pipeline.add(sink)  # Add the RTSP sink

                nvdosd_tracks_sink_ix = nvdosd_tracks.get_static_pad("sink")
                if not nvdosd_tracks_sink_ix:
                    sys.stderr.write(f"Unable to get sink pad of {nvdosd_tracks.name} \n")
                srcpad_ix.link(nvdosd_tracks_sink_ix)  # Link the demux pad with the corresponding nvdosd_tracks pad

                nvdosd_tracks_src_ix = nvdosd_tracks.get_static_pad("src")
                if not nvdosd_tracks_src_ix:
                    sys.stderr.write(f"Unable to get src pad of {nvdosd_tracks.name} \n")

                sinkpad_ix = sink.get_static_pad("sink")
                if not sinkpad_ix:
                    sys.stderr.write(f"Unable to get sink pad of {sink.name} \n")
                nvdosd_tracks_src_ix.link(sinkpad_ix)  # Link nvdosd_tracks with the corresponding RTSP sink
                print("Linking elements in the Pipeline \n")

            else:
                pipeline.add(sink)  # Add the RTSP sink
                sinkpad_ix = sink.get_static_pad("sink")
                if not sinkpad_ix:
                    sys.stderr.write(f"Unable to get sink pad of {sink.name} \n")
                srcpad_ix.link(sinkpad_ix)  # Link the demux pad with the corresponding RTSP sink
                print("Linking elements in the Pipeline \n")
            # PIPELINE ADDING AND LINKING

            # PREPARE RTSP OUTPUT SERVER
            rtspfactory = RtspServerFactory()
            _, _, rtsplinkout = rtspfactory.create_and_launch_server(index=ix, rtspport=rtsp_sink_port,
                                                                     sinkport=udp_sink_port,
                                                                     compression=output_compression,
                                                                     mountname=f'ds{ix}')
            # PREPARE RTSP OUTPUT SERVER

Edit: Here is a snippet of the code for creating the RTSP output bin

class RtspSinkBin:

    @classmethod
    def create_rtsp_sink_bin(cls, index: int, compression: str = "H264", udpport: int = 5400, enable_probe=False, format="RGBA"):
        # SINK BIN
        # Create a sink GstBin to abstract this bin's content from the rest of the pipeline
        bin_name = "sink-bin-%02d" % index
        print(bin_name)
        rtsp_sink_bin = Gst.Bin.new(bin_name)
        if not rtsp_sink_bin:
            logger.error(" Unable to create sink bin \n")

        # INTERNAL BIN ELEMENTS
        print(f"Creating nvvidconv_rtspsink{index} \n ")
        nvvidconv0 = Gst.ElementFactory.make("nvvideoconvert", f"nvvidconv_rtspsink{index}")
        if not nvvidconv0:
            sys.stderr.write(f" Unable to create nvvidconv_rtspsink{index} \n")
        nvvidconv0.set_property("nvbuf-memory-type", int(pyds.NVBUF_MEM_CUDA_UNIFIED))

        caps0 = Gst.ElementFactory.make("capsfilter", f"capsfilt_rtspsink{index}")
        caps0.set_property(
            "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420")
        )

        # Encodes H*** video
        rtpencode = None
        if compression == "H264":
            rtpencode = Gst.ElementFactory.make("nvv4l2h264enc", f"h26encode_{index}")
            debug("Creating H264 encoder")
        elif compression == "H265":
            rtpencode = Gst.ElementFactory.make("nvv4l2h265enc", f"h26encode_{index}")
            debug("Creating H265 encoder")
        if not rtpencode:
            sys.stderr.write(f"ERROR: Unable to create rtpencode_{index}")
            sys.exit(1)
        rtpencode.set_property('bitrate', 4000000)  # TODO: add as variable

        # Make the payload-encode video into RTP packets
        rtppay = None
        if compression == "H264":
            rtppay = Gst.ElementFactory.make("rtph264pay", f"rtppay_{index}")
            debug("Creating H264 rtppay")
        elif compression == "H265":
            rtppay = Gst.ElementFactory.make("rtph265pay", f"rtppay_{index}")
            debug("Creating H265 rtppay")
        if not rtppay:
            sys.stderr.write(f"ERROR: Unable to create rtppay_{index}")
            sys.exit(1)

        # Make the UDP sink | address from 224.0.0.0 to 239.255.255.255
        UDP_MULTICAST_ADDRESS = '224.224.255.255'
        # UDP_MULTICAST_ADDRESS = f'230.0.0.{index}'
        UDP_MULTICAST_PORT = udpport
        sink = Gst.ElementFactory.make("udpsink", "udpsink")
        if not sink:
            sys.stderr.write(" Unable to create udpsink")
        sink.set_property('host', UDP_MULTICAST_ADDRESS)
        sink.set_property('port', UDP_MULTICAST_PORT)
        sink.set_property('async', False)
        sink.set_property("sync", 1)  # TODO: Is this right?

        # ADD ELEMENTS TO THE BIN
        Gst.Bin.add(rtsp_sink_bin, nvvidconv0)
        Gst.Bin.add(rtsp_sink_bin, caps0)
        Gst.Bin.add(rtsp_sink_bin, rtpencode)
        Gst.Bin.add(rtsp_sink_bin, rtppay)
        Gst.Bin.add(rtsp_sink_bin, sink)

        # LINK ELEMENTS
        nvvidconv0.link(caps0)
        caps0.link(rtpencode)
        rtpencode.link(rtppay)
        rtppay.link(sink)

        # BIN GHOST PAD
        # We need to create a ghost pad for the rtsp sink bin which will act as a sink pad
        rtsp_sink_bin.add_pad(
            Gst.GhostPad.new_no_target("sink", Gst.PadDirection.SINK)  # SINK?
        )
        ghost_pad = rtsp_sink_bin.get_static_pad("sink")
        if not ghost_pad:
            logger.error(" Failed to add ghost pad in sink bin \n")
            return None
        ghost_pad.set_target(nvvidconv0.get_static_pad("sink"))

        if enable_probe:
            # ADD PERFPROBE
            rtsp_sink_pad = rtsp_sink_bin.get_static_pad("sink")
            if not rtsp_sink_pad:
                sys.stderr.write(" Unable to get rtsp_sink_pad \n")
            else:
                rtsp_sink_pad.add_probe(Gst.PadProbeType.BUFFER, rtsp_pad_buffer_probe, 0)
            # ADD PERFPROBE

        return rtsp_sink_bin, UDP_MULTICAST_PORT
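Both sink bins above send to the same multicast group (224.224.255.255), differing only by UDP port; the commented-out `230.0.0.{index}` line hints at giving each stream its own group. A per-index endpoint helper would make that separation explicit. This is a sketch of my own, not from the original code, using the administratively scoped 239.0.0.0/8 range:

```python
# Sketch (assumption, not from the thread's code): one multicast group per
# stream index, paired with the 5400+index UDP port convention used
# elsewhere in the pipeline. Addresses start at .1 to avoid the .0 host part.
def multicast_endpoint(index: int, port_base: int = 5400):
    address = f"239.0.0.{index + 1}"  # .1, .2, ... one group per stream
    return address, port_base + index
```

For example, `multicast_endpoint(0)` yields `("239.0.0.1", 5400)` and `multicast_endpoint(1)` yields `("239.0.0.2", 5401)`.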
  1. Please check why there is no continuous data on rtppay_1; you can keep adding probe functions on upstream plugins, for example nvv4l2decoder_0 or capsfilt_rtspsrc0.
  2. What is the GPU device model? Can you play the two RTSP sources normally? Here is a sample command to play an H265 RTSP source: gst-launch-1.0 rtspsrc location=rtsp://xx ! rtph265depay ! h265parse ! nvv4l2decoder ! nv3dsink
  1. After the demux I can see that the data is passing properly to the RTSP bin sink pads (I can probe the sink pads and save the data as numpy arrays to confirm that I am seeing frames from the two RTSP sources).
    The format of the RTSP bin I created can be seen in the graph PDF I attached in an earlier comment. I have probed each element of the pipeline inside the RTSP bin I created; it seems that the data passes properly up until the rtph264pay element.

This is the basic probe I setup on the rtph264pay element src:

def rtsp_pad_buffer_probe(pad: Gst.Pad, info, u_data):
    gst_buffer: Gst.Buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return Gst.PadProbeReturn.OK  # probe callbacks must return a PadProbeReturn, not None
    print(gst_buffer.get_size())
    return Gst.PadProbeReturn.OK
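Raw prints from several probes are hard to compare side by side. A small accumulator (my own helper, not from the thread) makes per-pad comparison easier; inside the real GStreamer probe one would call `LOG.record(pad.get_name(), gst_buffer.get_size())` instead of `print`:

```python
from collections import defaultdict

# Plain-Python helper (no GStreamer needed): collect buffer sizes under a
# label per pad, then summarize as (buffer count, total bytes) per pad.
class SizeLogger:
    def __init__(self):
        self.sizes = defaultdict(list)

    def record(self, label: str, size: int):
        self.sizes[label].append(size)

    def summary(self):
        return {label: (len(s), sum(s)) for label, s in self.sizes.items()}

LOG = SizeLogger()
```

A stalled pad then shows up immediately as a pad with many buffers but a tiny byte total.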

When I use the probe to measure the size of the data coming from the rtppay element, I see the following for one:
27
16
27
16
229
352
435
768
27
16
27
16
27
16
27
16
27
16
27
16
27
16

and this for the other rtppay element:
27
16
27
16
77
104
130
72
131
116
161
784
771
534
1223
994
966
27
16
351
1040
633
528
381
398
569
708
530
404
1311
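Those 27/16-byte buffers are suspicious: sizes that small are plausibly just RTP-wrapped H.264 parameter sets (SPS/PPS), not picture data. A quick offline check over the logged sizes can flag a pad that emits almost nothing else; the 64-byte cutoff and 90% ratio below are my guesses, not values from any spec:

```python
# Heuristic sketch: flag a pad as stalled if nearly all of its logged
# buffer sizes are tiny (likely just parameter sets, no frames).
def looks_stalled(sizes, tiny: int = 64, tiny_ratio: float = 0.9) -> bool:
    tiny_count = sum(1 for s in sizes if s < tiny)
    return tiny_count / len(sizes) >= tiny_ratio
```

Fed a run of pure 27/16 values it returns True; fed a mix dominated by kilobyte-sized buffers it returns False.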

Edit: Just confirming that I do seem to be able to measure data coming into the rtppay elements on the sink pads.
For example, I see similar results on both when measuring the buffer size:
13271
7295
6771
6044
8249
13129
14845
14498
205871
6288

  1. GPU is [GeForce GTX 1650 Mobile / Max-Q].
    Yes, I can view and stream the RTSPs separately with my code; however, when I try to do both at the same time I get the issue described above.
    The command you shared worked; however, I’m not on Jetson so I don’t have access to nv3dsink, and replaced it with fakesink.

Do you have any recommendations for better ways to probe the data to verify things?
It would seem that there is an issue when I create multiple rtppay elements, what are your thoughts?

Do you mean there is continuous data coming from both rtppays’ src pads? On May 27 you said there was no continuous data on rtppay_1’s src; please double-check.

Or do you mean there is continuous data coming into both rtppays’ sink pads?

  1. Please check if the encoded data is OK: you can replace “nvv4l2h264enc ! rtph264pay ! udpsink” with “nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4”.
  2. If the encoded data is OK, please check if udpsink is OK. You can set udpsink’s host, then check whether the packets are sent using a network capture tool.

The data going through the two rtppay elements looks like this when I run the pipeline with two RTSP feeds:
Each number is the size of the Gst.Buffer measured with the probe I used:

13271 → rtppay_src0[ rtppay_element_0 ]rtppay_sink0 → 1040

13129 → rtppay_src1[ rtppay_element_1 ]rtppay_sink1 → 16

I am saying that the data seems to be getting to both of the rtppay’s, however, on one of them there seems to be almost no data getting through.

Running this pipeline works:

gst-launch-1.0 -e rtspsrc location=rtsp://x ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvideoconvert ! capsfilter ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4

I am able to stop the pipeline and then view the video recorded from the RTSP stream

  1. This is the key point: do you mean rtppay_1 continues to get data but can’t continue to output data, while rtppay_0 can continue to output data?
  2. I mean you can replace “nvv4l2h264enc ! rtph264pay ! udpsink” with “nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4” in the Python code. If this works, the issue should be related to udpsink; if not, it should be related to encoding.

Can running two pipelines with different sources work at the same time?

Ahh OK, I have looked more in depth and recorded the input on each of the rtppay sink pads and the output on each of the rtppay src pads. I attached a probe to each pad and measured the size of the Gst.Buffer coming in/out.
payloads.csv (14.4 KB)
This is very odd to me.
There seems to be data coming into rtppay0 on the sink pad, but almost no data leaving from the src of rtppay0. For rtppay1 it seems that data is coming into the sink pad and out of the src pad as one would expect.
The weird thing to me is that the RTSP output I am able to view is the one connected to rtppay0. It even seems like the data output on rtppay0’s src stops (see the dict written to .csv).
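For what it’s worth, a per-pad summary like the one described here can be computed from the dumped CSV without re-running the pipeline. The column names below are assumptions; the actual layout of payloads.csv wasn’t shown:

```python
import csv
import io

# Sketch: total bytes per pad from a CSV of (pad, size) rows.
# Column names "pad" and "size" are assumed, not taken from the real file.
def bytes_per_pad(csv_text: str) -> dict:
    totals: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["pad"]] = totals.get(row["pad"], 0) + int(row["size"])
    return totals
```

A pad whose src total is orders of magnitude below its sink total is the one to investigate.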

Working on this.
Edit: I am able to edit my pipeline and, instead of outputting RTSP, write the frames to a filesink. I added the elements you specified and wrote an mp4 from each of the RTSP input streams. The output videos are correct; I can see the detections/tracking.

Yes, this worked! I started up two separate pipelines, each with a different RTSP stream, and I am able to view the output RTSP generated at rtsp://127.0.0.1:8554/ds0 and rtsp://127.0.0.1:8555/ds0.

If the two mp4 files are correct, the issue should be related to “rtppay + udpsink”. Please do the following tests to narrow it down.

  1. In the Python code, let way 0 still use “nvv4l2h264enc ! rtph264pay ! udpsink” and let way 1 use “nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./x.mp4”. Will the RTSP play OK and the mp4 be correct?
  2. In the Python code, let way 0 still use “nvv4l2h264enc ! rtph264pay ! udpsink” and let way 1 use “nvv4l2h264enc ! rtph264pay ! fakesink”, then add a probe function on the fakesink’s sink pad. Will the RTSP play OK and the probe function continue to get data?

When I use this setup and allow 0 to create an RTSP output and 1 to save to an .mp4, the pipeline works properly.
When I allow 0 to save to an .mp4 and 1 to create an RTSP output, I am unable to view the RTSP output, but the mp4 video is saved properly.
I have tested with different RTSP input streams and get the same issue.

When I allow 0 to create an RTSP output and 1 to connect to a fakesink with a probe, I can view the output RTSP and I see data getting to the sink pad of the fakesink.
When I allow 0 to connect to a fakesink with a probe and 1 to create an RTSP output, I am unable to view the RTSP, yet I can see the fakesink getting data…


It seems that I can only get a successful output RTSP stream when I connect the first src pad (src_0) of the nvstreamdemux to an RTSP bin’s sink. No RTSP output has worked when connected to any nvstreamdemux src pad other than src_0.


       ...

        nvstrdemux = Gst.ElementFactory.make("nvstreamdemux", "nvstrdemux")
        if not nvstrdemux:
            sys.stderr.write(" Unable to create nvstrdemux \n")

        # PIPELINE ADDING AND LINKING
        print("Adding elements to Pipeline \n")
        pipeline.add(pgie)
        pipeline.add(infer_queue)
        pipeline.add(tracker)
        pipeline.add(nvstrdemux)

        nvstrmux.link(pgie)
        pgie.link(infer_queue)
        infer_queue.link(tracker)
        tracker.link(nvstrdemux)

        for ix, camera_path in enumerate(camera_config_paths):

            srcpad_ix = nvstrdemux.get_request_pad(f"src_{ix}")
            if not srcpad_ix:
                sys.stderr.write("Unable to get the src pad of nvstrdemux\n")

            ....

         

Does the fakesink get data continuously?

Please do a test to verify: use only nvstreamdemux’s src_1 pad, not src_0. Will the RTSP output play well?

Yes, the fakesinks seem to get data; is that what you mean? I am able to take the Gst.Buffer and extract it to a numpy array, etc.

I’ll have a look more closely at testing the src pads on the nvstreamdemux.

Can you share the original code by private email?

OK, if I use nvstreamdemux src_1 then my first RTSP factory now points to the second RTSP feed. Before, when connecting to src_0, I would be able to view the other RTSP feed. This makes sense, I think; however, I still don’t get a successful RTSP output feed from the other RTSP source.

I dm’d you the Python code.

1.py (19.8 KB)
Please refer to this code, based on deepstream-demux-multi-in-multi-out; it outputs two RTSP streams and both play well.

When I run this code with two different RTSPs

python3.8 1.py -i rtsp://0 rtsp://1

I am able to view two output RTSP streams through VLC; however, both output RTSPs point to the same processed input stream (both show the feed from rtsp://0).

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***


 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8555/ds-test ***

Is it possible there is something going on with nvstreamdemux that is causing this issue?

EDIT
Editing my code to create the RTSPMediaFactory objects outside of the class method I was using allows me to successfully view two output RTSP streams.
However, the problem still persists that the two output RTSP streams are not unique (the same data is shown on both output streams).
Can you please try with two unique RTSP streams and confirm that the outputs are unique:

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***


 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8555/ds-test ***
  1. “python3 deepstream_demux_multi_in_multi_out.py -i rtsp://127.0.0.1:8001/test rtsp://127.0.0.1:8002/test” will output two different videos; you can check how nvstreamdemux is used there.
  2. Can you dump the pipeline to check? Here is the method: Python DeepStream program not generating dot file

I messaged you some screenshots to show you the issue. Whatever RTSP I put first after ‘python 1.py -i …’ gets replicated twice on the outputs (shown in the photos). I’m not sure what you mean by ‘check how to use nvstreamdemux’; the usage in my own code was from an NVIDIA example, and I am facing this issue with the code that you sent me.

Attached the pipeline pdf here. Yes, it looks like it should work, but the output is incorrect on my end. pipeline_graph.pdf (34.1 KB)

There is a bug in 1.py; it works fine after applying this fix:
server1.get_mount_points().add_factory("/ds-test", factory) → server1.get_mount_points().add_factory("/ds-test", factory1)
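The bug distilled into plain Python (no GStreamer needed): two servers were created, but the second add_factory call reused the first factory object, so both mount points served the same stream. The class below is a stand-in for illustration, not the GstRtspServer API:

```python
# Stand-in for GstRtspServer mount points, just to show the shape of the bug.
class MountPoints:
    def __init__(self):
        self.mounts = {}

    def add_factory(self, path, factory):
        self.mounts[path] = factory

factory = {"stream": 0}   # stand-ins for the two RTSPMediaFactory objects
factory1 = {"stream": 1}

server, server1 = MountPoints(), MountPoints()
server.add_factory("/ds-test", factory)
server1.add_factory("/ds-test", factory1)  # the buggy line passed `factory` here
```

With the fix, each server's mount resolves to its own stream instead of both resolving to stream 0.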