Can the pipeline use the `mediamtx` media server?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU (RTX 3090)
• DeepStream Version: 7.1 (Docker)

Some Python examples use create_rtsp_server.

I have created a media server using mediamtx.

How can the pipeline be linked to the mediamtx server? Currently it is:

source.link(pgie)
pgie.link(queue_tee)
queue_tee.link(tee)
tee.link(queue_msgconv)
queue_msgconv.link(msgconv)
msgconv.link(queue_msgbroker)
queue_msgbroker.link(msgbroker)
tee.link(queue_tiler)
queue_tiler.link(converter)

converter.link(nvtiler)
nvtiler.link(nvosd)
nvosd.link(nvvidconv_encoder)
nvvidconv_encoder.link(filter_encoder)
filter_encoder.link(encoder)
encoder.link(rtppay)
rtppay.link(rtsp_sink)

Another question: the RTSP pipeline is too long. Can I simplify it?

rtspclientsink = Gst.ElementFactory.make("rtspclientsink", "rtspclientsink1")
rtspclientsink.set_property("location", "rtsp://localhost:8554/mystream")

Push to mediamtx:

rtppay.link(rtspclientsink)

Watch the video at:

rtsp://localhost:8554/mystream
or
http://mediamtx_ip:8888/mystream (mediamtx's default HLS port)
or
http://mediamtx_ip:8889/mystream (mediamtx's default WebRTC port)
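For completeness, here is a minimal, self-contained sketch of pushing to mediamtx with rtspclientsink. It assumes mediamtx is running locally with its default RTSP port (8554); videotestsrc stands in for the DeepStream part of the pipeline, and the stream name mystream is arbitrary:

#!/usr/bin/env python3
# Minimal sketch: push a test stream to a running mediamtx instance via
# rtspclientsink, then play it back at rtsp://localhost:8554/mystream.
# Assumes mediamtx is running locally with its default RTSP port (8554).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# rtspclientsink performs the RTSP RECORD session and payloads the
# encoded stream itself, so no explicit rtph264pay element is needed here.
pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! videoconvert ! "
    "x264enc tune=zerolatency ! h264parse ! "
    "rtspclientsink location=rtsp://localhost:8554/mystream"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pipeline.set_state(Gst.State.NULL)

Because rtspclientsink payloads internally, it can also replace the rtppay + rtsp_sink pair at the end of the DeepStream pipeline, which is one way to shorten it.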


The mediamtx author has shown a sample of how to push a stream with a GStreamer pipeline: bluenviron/mediamtx – ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that allows you to read, publish, proxy, record and play back video and audio streams.

What do you mean by "too long"? Which elements do you want to remove from the pipeline?


I want to know: are there any elements that are not necessary?

As to the RTSP send-out function, only the tail of your pipeline (nvvidconv_encoder → filter_encoder → encoder → rtppay → rtsp_sink) is necessary.

The other plugins are for the source, decoding, batching, inferencing, OSD, … You need to judge whether they are necessary for your requirement.
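As a hedged sketch (element and variable names are illustrative; it assumes Gst is initialized and that `pipeline` and the upstream `nvosd` element already exist), the send-out tail alone would look like this:

# Sketch of the minimal RTSP send-out tail after nvdsosd (or the tiler).
# Assumes `pipeline` and the upstream element `nvosd` already exist.
nvvidconv_post = Gst.ElementFactory.make("nvvideoconvert", "conv-pre-enc")
filter_enc = Gst.ElementFactory.make("capsfilter", "enc-caps")
filter_enc.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
parser = Gst.ElementFactory.make("h264parse", "parser")
sink = Gst.ElementFactory.make("rtspclientsink", "rtsp-out")
sink.set_property("location", "rtsp://localhost:8554/mystream")

for e in (nvvidconv_post, filter_enc, encoder, parser, sink):
    pipeline.add(e)

nvosd.link(nvvidconv_post)
nvvidconv_post.link(filter_enc)
filter_enc.link(encoder)
encoder.link(parser)
parser.link(sink)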


If I need to draw boxes on the frames and display them via RTSP, must I use nvdsosd? Is nvmultistreamtiler also necessary?

Yes, nvdsosd is needed. And you need to choose between "nvmultistreamtiler" and "nvstreamdemux" depending on how you want to output the batched videos. Please read the documents Gst-nvmultistreamtiler — DeepStream documentation and Gst-nvstreamdemux — DeepStream documentation.
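To illustrate the choice: nvmultistreamtiler composites the batch into one video, so a single downstream chain serves every source, while nvstreamdemux (shown later in this thread) gives each source its own branch. A sketch of the tiler option, assuming `pipeline` and the inference element `pgie` already exist; the 2x2 grid is arbitrary:

# Option A: nvmultistreamtiler composites the whole batch into one grid,
# so one nvdsosd/encoder/sink chain serves every source.
tiler = Gst.ElementFactory.make("nvmultistreamtiler", "tiler")
tiler.set_property("rows", 2)       # illustrative 2x2 grid for 4 sources
tiler.set_property("columns", 2)
tiler.set_property("width", 1920)
tiler.set_property("height", 1080)
osd = Gst.ElementFactory.make("nvdsosd", "osd")

pipeline.add(tiler)
pipeline.add(osd)
pgie.link(tiler)    # batched, inferred buffers in; one tiled stream out
tiler.link(osd)     # continue with convert -> encode -> rtspclientsink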


In deepstream_test1_rtsp_in_rtsp_out.py I found nvmultistreamtiler -> nvvideoconvert -> nvosd -> nvvideoconvert -> … -> rtsp.

It has two nvvideoconvert elements. Why is that?

If I use nvstreamdemux for multiple RTSP outputs, what does the pipeline look like?

    sink_nvstreamdemux = nvstreamdemux.get_static_pad("sink")
    tee_rtsp_pad.link(sink_nvstreamdemux)

    for index, sensor in enumerate(args.sensors):
        # creating queue
        queue = make_element("queue", index)

        # creating nvvidconv
        nvvideoconvert = make_element("nvvideoconvert", index)


        # creating nvosd
        nvdsosd = make_element("nvdsosd", index)
        nvdsosd.set_property('process-mode', args.osd_process_mode)
        nvdsosd.set_property('display-text', args.osd_display_text)
        nvdsosd.set_property('display-bbox', 1)

        # connect nvstreamdemux -> queue
        padname = "src_%u" % index
        demuxsrcpad = nvstreamdemux.request_pad_simple(padname)
        if not demuxsrcpad:
            logger.error("Unable to create demux src pad \n")

        queuesinkpad = queue.get_static_pad("sink")
        if not queuesinkpad:
            logger.error("Unable to create queue sink pad \n")

        caps = make_element("capsfilter", index)
        caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))

        if args.encoder_codec == "H265":
            encoder = make_element("nvv4l2h265enc", index)
        elif args.encoder_codec == "H264":
            encoder = make_element("nvv4l2h264enc", index)
        encoder.set_property("bitrate", args.bitrate)
        encoder.set_property('tuning-info-id', 2)
        encoder.set_property('control-rate', 2)

        if args.encoder_codec == "H264":
            rtppay = make_element("rtph264pay", index)
        elif args.encoder_codec == "H265":
            rtppay = make_element("rtph265pay", index)

        rtspclientsink = make_element("rtspclientsink",index)
        rtspclientsink.set_property("location", f"{sensor.url}/results")

        pipeline.add(queue)
        pipeline.add(nvvideoconvert)
        pipeline.add(nvdsosd)
        pipeline.add(caps)
        pipeline.add(encoder)
        pipeline.add(rtppay)
        pipeline.add(rtspclientsink)

        demuxsrcpad.link(queuesinkpad)
        queue.link(nvvideoconvert)
        nvvideoconvert.link(nvdsosd)
        nvdsosd.link(caps)
        caps.link(encoder)
        encoder.link(rtppay)
        rtppay.link(rtspclientsink)

This seems wrong, but there is no error log output.

It is a legacy pipeline. In very old DeepStream versions, nvdsosd only supported RGBA input, so the first nvvideoconvert converts the YUV buffers to RGBA and the second nvvideoconvert converts the RGBA back to YUV for encoding. In the latest DeepStream there is no such limitation, and nvvideoconvert does nothing if the input format and resolution are the same as the output format and resolution, so it is OK to keep both.
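To make the legacy behavior concrete, here is a sketch of that old chain with explicit capsfilters (element names are illustrative):

# Legacy chain: old nvdsosd only accepted RGBA, so buffers were converted
# to RGBA before OSD and back to I420 for the encoder.
conv_pre = Gst.ElementFactory.make("nvvideoconvert", "conv-pre-osd")
caps_rgba = Gst.ElementFactory.make("capsfilter", "caps-rgba")
caps_rgba.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
osd = Gst.ElementFactory.make("nvdsosd", "osd")
conv_post = Gst.ElementFactory.make("nvvideoconvert", "conv-post-osd")
caps_i420 = Gst.ElementFactory.make("capsfilter", "caps-i420")
caps_i420.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))

# tiler -> conv_pre -> RGBA caps -> osd -> conv_post -> I420 caps -> encoder.
# On current DeepStream both converts become pass-throughs when the formats
# already match, so leaving them in place is harmless.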

For the usage of nvstreamdemux, please refer to the sample in Gst-nvstreamdemux — DeepStream documentation

Why did you put capsfilter after nvdsosd?

I wrote it based on deepstream_test1_rtsp_in_rtsp_out.py.

This is our code in deepstream_test1_rtsp_in_rtsp_out.py:

nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)

The capsfilter is after nvvideoconvert.
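Applied to the demux loop above, one hedged rearrangement would be the following; it assumes a second converter per branch (here called nvvideoconvert_post, a hypothetical name) is created and added to the pipeline:

# Per-source branch with the capsfilter moved to its conventional spot,
# i.e. constraining the encoder input rather than the nvdsosd output:
# demux src_%u -> queue -> nvvideoconvert -> nvdsosd
#              -> nvvideoconvert_post -> capsfilter(I420) -> encoder
#              -> rtppay -> rtspclientsink
demuxsrcpad.link(queuesinkpad)
queue.link(nvvideoconvert)
nvvideoconvert.link(nvdsosd)
nvdsosd.link(nvvideoconvert_post)   # hypothetical second converter
nvvideoconvert_post.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(rtspclientsink)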

It is important for you to debug and check your code before you post a topic in the forum. It is not efficient to develop your own app this way.


We are not responsible for your own code. Please debug by yourself until you have really identified that the issue is caused by the DeepStream SDK.

