Multiple RTSP in & out with Deepstream PYTHON

**• Hardware Platform:** GPU
**• DeepStream Version:** 5
**• TensorRT Version:** 7
**• NVIDIA GPU Driver Version (valid for GPU only):** 450
**• Issue Type (questions, new requirements, bugs):** questions
**• Requirement details:** DeepStream Python

Hello, I’m a couple of weeks into the DeepStream SDK. My app receives multiple RTSP streams and saves some cutout images using OpenCV in the OSD probe.

It is based on the Python sample: apps/deepstream-imagedata-multistream

However, I would like to combine it with the apps/deepstream-test1-rtsp-out sample in order to get an RTSP output for every RTSP input received (and processed). I just can’t manage to make it work, because the RTSP-out sample is written for only one input. So, my question is:

Is this possible? To batch frames from multiple streams, infer on them, and then send the frames through different RTSP streams?

I thought maybe a udpsink for every stream, but I can’t find anything useful.

Here are my pipeline links:

Thanks a lot in advance.


For multiple stream inputs, please refer to the sample deepstream-test3: deepstream_python_apps/apps/deepstream-test3 at master · NVIDIA-AI-IOT/deepstream_python_apps


Thanks a lot; with that sample I am able to read multiple streams.

My question is: is it possible to output multiple RTSP streams for each input stream, and not one stream with multiple tiles?

Sample: deepstream-test1-rtsp-out inputs only one stream and outputs only one stream.

I need to input 30 streams and output 30 different RTSP URLs.


nvstreamdemux is for this purpose: Gst-nvstreamdemux — DeepStream 6.1.1 Release documentation.

Unfortunately, there is no Python sample for nvstreamdemux; you can refer to the C/C++ samples and translate them to a Python script: C/C++ Sample Apps Source Details — DeepStream 6.1.1 Release documentation


I don’t know if GstRtspServer allows multiple streams, but if so, maybe the solution below would work:

factory1 ← udpsrc port 1234
factory2 ← udpsrc port 1235

factory1 = GstRtspServer.RTSPMediaFactory.new()
factory1.set_launch("cap string")


Then use a tee or an interpipe after the osd element of each camera and send it to the udpsrc of the relevant factory.
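A minimal sketch of the per-stream factory idea above: one `RTSPMediaFactory` per stream, each picking up its own UDP port. The port numbers, mount paths, and the `attach_streams` wiring are assumptions for illustration, not the sample's actual code.

```python
def udpsrc_launch(port, pt=96):
    """Build the launch string one RTSP factory would use for one stream."""
    caps = ("application/x-rtp, media=video, clock-rate=90000, "
            f"encoding-name=(string)H264, payload={pt}")
    return f'( udpsrc name=pay0 port={port} buffer-size=524288 caps="{caps}" )'

def stream_mounts(num_streams, first_port=5400):
    """Assign each stream a unique RTSP mount path and UDP port."""
    return [(f"/ds-test-{i}", first_port + i) for i in range(num_streams)]

def attach_streams(num_streams, rtsp_port=8554):
    # Lazy import so the helpers above stay usable without GStreamer installed.
    import gi
    gi.require_version("GstRtspServer", "1.0")
    from gi.repository import GstRtspServer

    server = GstRtspServer.RTSPServer.new()
    server.props.service = str(rtsp_port)
    server.attach(None)
    for path, udp_port in stream_mounts(num_streams):
        factory = GstRtspServer.RTSPMediaFactory.new()
        factory.set_launch(udpsrc_launch(udp_port))
        factory.set_shared(True)
        server.get_mount_points().add_factory(path, factory)
    return server
```

With this scheme, stream *i* would be served at `rtsp://<host>:8554/ds-test-<i>`, fed by a udpsink in the main pipeline sending to port `5400 + i`.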


For GstRtspServer, please refer to the rtsp server sample.

One server serves one stream. For every stream, the UDP port and the RTSP port also need to be unique.


Hello, are you using the DeepStream Python code for a single RTSP input and output? Would you mind sharing it?


Hello, thank you all for your quick responses. I still need some help, I feel like I’m about to solve it.

Here’s my progress:

Like @Andrew_Smith suggested, I’ve used Gst-interpipe and created one pipeline for stream inference & processing, and another for the udpsink & RTSP streaming.

Pipeline 1:
nvstreammux → pgie → tracker → nvvidconv1 → filter1 → nvvidconv → nv_osd → nvvidconv_post_osd → caps → nvv4l2h264enc → rtph264pay → interpipesink (name=output)

Pipeline 2 (3, 4, …N):
interpipesrc (listen-to=output) → udpsink → into GstRtspServer

This works correctly when I use only one stream as input for Pipeline 1. However, when I add more, nvv4l2h264enc fails because it receives the batched frames.

As @Fiona.Chen mentioned, adding Gst-nvstreamdemux is the answer. But when I add it to my Pipeline 1 like this:

nvstreammux → pgie → tracker → nvvidconv1 → filter1 → nvvidconv → nv_osd → nvvidconv_post_osd → caps → Gst-nvstreamdemux → nvv4l2h264enc → rtph264pay → interpipesink (name=output)

I then stop receiving images in Pipeline 2. I also get this warning:
sys:1: Warning: g_object_get_is_valid_property: object class 'GstUDPSrc' has no property named 'pt'

I was thinking the interpipesrc might have to listen to something like output.src_0, output.src_1, but I can’t manage to make it work.

Thanks a lot for your help.
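One hedged sketch of how the interpipe naming could work: instead of a single `interpipesink name=output` after the encoder, each nvstreamdemux branch gets its own encoder and its own interpipesink named `output_<i>`, and consumer pipeline *i* listens to that name. The naming scheme and launch fragments below are assumptions for illustration (interpipe is RidgeRun’s plugin; properties may differ by version).

```python
def sink_name(i):
    """Per-stream interpipesink name, one per demuxed branch."""
    return f"output_{i}"

def encode_branch_launch(i):
    """Producer fragment (gst-launch syntax) for one demuxed stream."""
    return (f"demux.src_{i} ! queue ! nvvideoconvert ! "
            f"nvv4l2h264enc ! rtph264pay ! "
            f"interpipesink name={sink_name(i)}")

def listen_branch_launch(i, udp_port):
    """Matching consumer pipeline feeding the RTSP server's udpsrc."""
    return (f"interpipesrc listen-to={sink_name(i)} is-live=true ! "
            f"udpsink host=127.0.0.1 port={udp_port} sync=false")
```

The key point is that the demux must come *before* the encoder, so each `nvv4l2h264enc` sees a single-stream buffer rather than a batch.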

Please use gst-launch to check your pipeline first. You can send us your gst-launch pipeline (without interpipesink; it is third-party proprietary software, so we cannot debug with it) so that we can reproduce the problem.


I believe this isn’t a debugging question. The pipeline doesn’t work because it isn’t correctly structured, and I need help structuring it.

Streamdemux separates the batch, but how do you manage each separated stream in order to set a different udpsink for each?

I just want to know the pipeline sequence for this use case in order to implement it in Python. (Receive 30 streams and output 30 rtsp)

What kind of management? There is a pipeline sample in Gst-nvstreamdemux — DeepStream 6.1.1 Release documentation; please check the case 1, case 2 and case 3 parts.


Yes, please let me explain what I mean. For example, case 3 manages src_0 and src_1 from nvstreamdemux:
! mux.sink_1 demux.src_0 ! queue ! nvvideoconvert ! nveglglessink demux.src_1 ! queue ! nveglglessink

However, in Python, when I make the UDP sink, I have to set the host & port:
udpsink1 = Gst.ElementFactory.make("udpsink", "udpsink")
udpsink1.set_property('host', '')
udpsink1.set_property('port', 8554)
udpsink1.set_property('async', False)
udpsink1.set_property('sync', 1)

Then I do the linking of the pipeline:

As you can see, the output of nvstreamdemux is all going through the same udpsink. How do I make src_0, src_1, src_n to go through its own udpsink1, udpsink2, udpsink3? What other element do I need?

Thank you for your patience.

You need to create multiple udpsink instances, one for each demux src output.

Please use "gst-inspect-1.0 nvstreamdemux" to query the introduction of nvstreamdemux; you can see that the src pads of nvstreamdemux are "on request":

Pad Templates:
  SINK template: 'sink'
    Availability: Always
      format: { (string)NV12, (string)RGBA }
      width: [ 1, 2147483647 ]
      height: [ 1, 2147483647 ]
      framerate: [ 0/1, 2147483647/1 ]

  SRC template: 'src_%u'
    Availability: On request
      format: { (string)NV12, (string)RGBA }
      width: [ 1, 2147483647 ]
      height: [ 1, 2147483647 ]
      framerate: [ 0/1, 2147483647/1 ]

That means you can request an nvstreamdemux src pad with an assigned name, just as we do with nvstreammux sink pads. You can then link the requested src pad to the sink pad of the udpsink branch you want to connect to that stream.
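A sketch of that pad-request pattern in Python, mirroring how the samples request `sink_%u` pads from nvstreammux. The element chain, names, and the per-stream port scheme (`5400 + i`) are assumptions for illustration; each udpsink port would then feed its own RTSP factory.

```python
def branch_ports(num_streams, first_port=5400):
    """One unique udpsink port per demuxed stream."""
    return [first_port + i for i in range(num_streams)]

def link_demux_branches(pipeline, demux, num_streams):
    # Lazy import so the pure helper above works without GStreamer installed.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    for i, port in enumerate(branch_ports(num_streams)):
        # Request a named src pad from nvstreamdemux, like mux.sink_%u.
        srcpad = demux.get_request_pad(f"src_{i}")
        queue = Gst.ElementFactory.make("queue", f"queue_{i}")
        conv = Gst.ElementFactory.make("nvvideoconvert", f"conv_{i}")
        enc = Gst.ElementFactory.make("nvv4l2h264enc", f"enc_{i}")
        pay = Gst.ElementFactory.make("rtph264pay", f"pay_{i}")
        sink = Gst.ElementFactory.make("udpsink", f"udpsink_{i}")
        sink.set_property("port", port)  # unique port per stream
        sink.set_property("host", "224.224.255.255")  # as in rtsp-out sample
        sink.set_property("async", False)
        sink.set_property("sync", 1)
        for e in (queue, conv, enc, pay, sink):
            pipeline.add(e)
        # demux.src_i -> queue -> nvvideoconvert -> encoder -> pay -> udpsink
        srcpad.link(queue.get_static_pad("sink"))
        queue.link(conv)
        conv.link(enc)
        enc.link(pay)
        pay.link(sink)
```

The encoder sits after the demux here, so it only ever sees a single-stream buffer, which avoids the batched-frame failure described earlier in the thread.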


It is clearer to refer to the C/C++ DeepStream code: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample-apps/deepstream-app/deepstream_app.c


Oh okay! I see what you mean. I hadn’t thought about that. I’m going to try it and let you know. Thanks a lot.

Hello @carloselilopezt,
Here is a simple Python test script for 2 RTSP streams in and 2 RTSP streams out. It is not optimized, but you can give it a try. (attachment, 19.3 KB)


Hello, @Fiona.Chen

In this example, do you combine the streams from multiple sources into one using NvBufferComposite?

Thank you.

I think this one deserves another thread, but I want to use the tracker and MQTT. By the way, your file works properly on my Jetson with rtsp:// and file:// links.
Since it uses 2 stream paths, each source has its own output URL. How do I differentiate the sources besides the link? For example, when publishing to MQTT for each source.

edit: link