Multiple RTSP Output

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): 4070 Ti
• DeepStream Version: 7.1

Hi @fanzh
I’m trying to get multiple RTSP outputs. I’m using the code below:

static void start_rtsp_streaming(GstRTSPServer *server, guint updsink_port_num,
                                 NvDsEncoderType enctype, char *mounts_point,
                                 guint64 udp_buffer_size)
{
    GstRTSPMountPoints *mounts;
    GstRTSPMediaFactory *factory;
    char udpsrc_pipeline[512];
    char *encoder_name;

    if (enctype == NV_DS_ENCODER_H264)
    {
        encoder_name = "H264";
    }
    else if (enctype == NV_DS_ENCODER_H265)
    {
        encoder_name = "H265";
    }
    else
    {
        g_print("%s failed", __func__);
        return;
    }

    if (udp_buffer_size == 0)
        udp_buffer_size = 512 * 1024;

    g_snprintf(udpsrc_pipeline, sizeof(udpsrc_pipeline),
               "( udpsrc name=pay0 port=%u buffer-size=%" G_GUINT64_FORMAT " "
               "caps=\"application/x-rtp, media=video, "
               "clock-rate=90000, encoding-name=%s, payload=96 \" )",
               updsink_port_num, udp_buffer_size, encoder_name);

    mounts = gst_rtsp_server_get_mount_points(server);

    factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_shared(factory, TRUE);
    gst_rtsp_media_factory_set_launch(factory, udpsrc_pipeline);
    gst_rtsp_mount_points_add_factory(mounts, mounts_point, factory);

    g_object_unref(mounts);

    g_print("\n *** DeepStream: Launched RTSP Streaming at "
            "rtsp://localhost:8554%s ***\n\n",
            mounts_point);
}

GstElement* create_udpsink_bin(int index, guint udp_port,
                                NvDsEncMode enc_mode,
                                NvDsEncoderType enc_type)
{
    GstCaps* caps = nullptr;
    std::string elem_name;
    std::string encode_name;
    std::string rtppay_name;

    elem_name = "sink_sub_bin_" + std::to_string(index);
    GstElement* bin = gst_bin_new(elem_name.c_str());
    if (!bin) {
        g_print("Failed to create '%s'\n", elem_name.c_str());
        return nullptr;
    }

    // Queue
    elem_name = "sink_sub_bin_queue" + std::to_string(index);
    GstElement* queue = gst_element_factory_make("queue", elem_name.c_str());
    if (!queue) {
        g_print("Failed to create '%s'\n", elem_name.c_str());
        return nullptr;
    }

    // Transform
    elem_name = "sink_sub_bin_transform" + std::to_string(index);
    GstElement* transform = gst_element_factory_make("nvvideoconvert", elem_name.c_str());
    if (!transform) {
        g_print("Failed to create '%s'\n", elem_name.c_str());
        return nullptr;
    }

    // Caps filter
    elem_name = "sink_sub_bin_cap_filter" + std::to_string(index);
    GstElement* cap_filter = gst_element_factory_make("capsfilter", elem_name.c_str());
    if (!cap_filter) {
        g_print("Failed to create '%s'\n", elem_name.c_str());
        return nullptr;
    }

    encode_name = "sink_sub_bin_encoder" + std::to_string(index);
    rtppay_name = "sink_sub_bin_rtppay" + std::to_string(index);

    GstElement* codecparse = nullptr;
    GstElement* rtppay = nullptr;
    GstElement* encoder = nullptr;

    switch (enc_type) {
        case NV_DS_ENCODER_H264:
            codecparse = gst_element_factory_make("h264parse", "h264-parser");
            g_object_set(codecparse, "config-interval", 1, nullptr);
            rtppay = gst_element_factory_make("rtph264pay", rtppay_name.c_str());
            g_object_set(rtppay, "config-interval", 1, nullptr);
            encoder = (enc_mode == NV_DS_ENCODER_MODE_SW) ?
                      gst_element_factory_make("x264enc", encode_name.c_str()) :
                      gst_element_factory_make("nvv4l2h264enc", encode_name.c_str());

            if (!encoder && enc_mode != NV_DS_ENCODER_MODE_SW) {
                g_print("Could not create HW encoder. Falling back to SW encoder\n");
                encoder = gst_element_factory_make("x264enc", encode_name.c_str());
            }
            break;

        case NV_DS_ENCODER_H265:
            codecparse = gst_element_factory_make("h265parse", "h265-parser");
            g_object_set(codecparse, "config-interval", 1, nullptr);
            rtppay = gst_element_factory_make("rtph265pay", rtppay_name.c_str());
            g_object_set(rtppay, "config-interval", 1, nullptr);
            encoder = (enc_mode == NV_DS_ENCODER_MODE_SW) ?
                      gst_element_factory_make("x265enc", encode_name.c_str()) :
                      gst_element_factory_make("nvv4l2h265enc", encode_name.c_str());

            if (!encoder && enc_mode != NV_DS_ENCODER_MODE_SW) {
                g_print("Could not create HW encoder. Falling back to SW encoder\n");
                encoder = gst_element_factory_make("x265enc", encode_name.c_str());
            }
            break;

        default:
            return nullptr;
    }

    if (!encoder) {
        g_print("Failed to create encoder '%s'\n", encode_name.c_str());
        return nullptr;
    }

    // Set caps
    if (enc_mode == NV_DS_ENCODER_MODE_SW)
        caps = gst_caps_from_string("video/x-raw, format=I420");
    else
        caps = gst_caps_from_string("video/x-raw(memory:NVMM), format=NV12");

    g_object_set(cap_filter, "caps", caps, nullptr);

    if (!rtppay) {
        g_print("Failed to create '%s'\n", rtppay_name.c_str());
        return nullptr;
    }

    if (enc_mode == NV_DS_ENCODER_MODE_SW)
    {
        // g_object_set(encoder, "bitrate", 4000000, nullptr);
        g_object_set(encoder,
                     "bitrate", 5000, // kbps
                     "key-int-max", 25,
                     "speed-preset", 1,  // ultrafast
                     "tune", 0x00000004, // zerolatency
                     "byte-stream", TRUE,
                     nullptr);
    }
    else
    {
        // g_object_set(encoder, "bitrate", 5000000, "profile", 0, "iframeinterval", 25, nullptr);
        g_object_set(encoder,
                     "bitrate", 5000000, // bps
                     "iframeinterval", 25,
                     "profile", 0, // baseline
                     "insert-sps-pps", 1,
                     "control-rate", 1, // CBR
                     "preset-level", 1,
                     nullptr);
    }

    // GPU properties: preset-level, insert-sps-pps and gpu-id exist only on
    // the NVIDIA HW encoders, not on x264enc/x265enc.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (prop.integrated && enc_mode != NV_DS_ENCODER_MODE_SW) {
        g_object_set(encoder, "preset-level", 1, "insert-sps-pps", 1, "gpu-id", 0, nullptr);
    } else {
        g_object_set(transform, "gpu-id", 0, nullptr);
    }

    // UDP Sink
    elem_name = "sink_sub_bin_udpsink" + std::to_string(index);
    GstElement* sink = gst_element_factory_make("udpsink", elem_name.c_str());
    if (!sink) {
        g_print("Failed to create '%s'\n", elem_name.c_str());
        return nullptr;
    }

    g_object_set(sink, "host", "127.0.0.1", "port", udp_port, "async", FALSE, "sync", FALSE, nullptr);

    gst_bin_add_many(GST_BIN(bin), queue, transform, cap_filter, encoder,
                     codecparse, rtppay, sink, nullptr);

    // nvvideoconvert must come before the capsfilter, so the caps describe
    // the converter's output format that the encoder expects.
    if (!gst_element_link_many(queue, transform, cap_filter, encoder, codecparse,
                               rtppay, sink, nullptr)) {
        g_print("Failed to link elements in the bin\n");
        return nullptr;
    }

    GstPad* pad = gst_element_get_static_pad(queue, "sink");
    if (!pad) {
        g_print("Could not get sink pad from queue\n");
        return nullptr;
    }

    GstPad* ghost_pad = gst_ghost_pad_new("sink", pad);
    gst_pad_set_active(ghost_pad, TRUE);
    gst_element_add_pad(bin, ghost_pad);
    gst_object_unref(pad);

    if (caps)
        gst_caps_unref(caps);

    return bin;
}

The pipeline is set up as follows:

    gst_bin_add_many(GST_BIN(pipeline), queue7, nvvidconv, queue3,
                     nvosd, queue4, demux, NULL);

    // Add another branch (must use dynamic linking again)
    GstPad *tee_src_pad2 = gst_element_request_pad_simple(tee_pre_osd, "src_%u");
    GstPad *queue7_sink_pad = gst_element_get_static_pad(queue7, "sink");

    if (gst_pad_link(tee_src_pad2, queue7_sink_pad) != GST_PAD_LINK_OK)
    {
        g_printerr("Failed to link tee to queue7. Exiting.\n");
        return -1;
    }
    gst_object_unref (queue7_sink_pad);
    gst_object_unref (tee_src_pad2);

    /* we link the elements together */
    if (!gst_element_link_many(queue7, nvvidconv, queue3,
                               nvosd, queue4, demux, NULL))
    {
        g_printerr("Elements could not be linked in rtsp. Exiting.\n");
        return -1;
    }
    for (guint i = 0; i < num_sources; i++)
    {
        gchar pad_name[16] = {};
        guint udp_port = 5400 + i;
        g_snprintf(pad_name, 15, "src_%u", i);
        g_snprintf(mounts_str, 15, "/ds-test%u", i);
        GstPad *srcpad = gst_element_request_pad_simple(demux, pad_name);
        if (!srcpad)
        {
            g_printerr("Failed to get src pad of demux. Exiting.\n");
            continue;
        }
        g_print("mounts_str = %s\n", mounts_str);
        GstElement *udpsink = create_udpsink_bin(i, udp_port, NV_DS_ENCODER_MODE_HW,
                                                 NV_DS_ENCODER_H264);
        gst_bin_add_many(GST_BIN(pipeline), udpsink, NULL);
        GstPad *sinkpad = gst_element_get_static_pad(udpsink, "sink");
        if (!sinkpad)
        {
            g_printerr("Failed to get sink pad of udpsink. Exiting.\n");
            continue;
        }
        gst_pad_link(srcpad, sinkpad);
        gst_object_unref(srcpad);
        gst_object_unref(sinkpad);
        start_rtsp_streaming(server, udp_port, NV_DS_ENCODER_H264, mounts_str, 0);
    }

    gst_rtsp_server_attach(server, NULL);
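A side note on the name formatting in the loop above: g_snprintf is given a hard-coded size of 15 rather than the destination buffer's size. A minimal, stdlib-only sketch of the safer sizeof pattern (plain snprintf standing in for g_snprintf; `format_names` is an illustrative helper, not DeepStream API):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative helper mirroring the loop above: build the demux pad name
 * and RTSP mount point for stream i. Taking the destination sizes as
 * parameters (passed with sizeof at the call site) avoids the magic "15"
 * and keeps working if a buffer is ever resized. */
static void format_names(unsigned i, char *pad, size_t pad_sz,
                         char *mount, size_t mount_sz)
{
    snprintf(pad, pad_sz, "src_%u", i);
    snprintf(mount, mount_sz, "/ds-test%u", i);
}
```

Called as `format_names(i, pad_name, sizeof(pad_name), mounts_str, sizeof(mounts_str))`, this yields `src_0` and `/ds-test0` for the first stream.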


The problem is that the RTSP stream, when played with ffplay or another viewer, shows lots of distortion.

ffplay shows the following:

[h264 @ 0x7fa03919a380] negative number of zero coeffs at 102 7/0   
[h264 @ 0x7fa03919a380] error while decoding MB 102 7
[h264 @ 0x7fa03919a380] concealing 7027 DC, 7027 AC, 7027 MV errors in P frame
[h264 @ 0x7fa0392dd180] Invalid level prefix097KB sq=    0B f=0/0   
[h264 @ 0x7fa0392dd180] error while decoding MB 85 20
[h264 @ 0x7fa0392dd180] concealing 5484 DC, 5484 AC, 5484 MV errors in P frame
[h264 @ 0x7fa038066280] corrupted macroblock 38 17 (total_coeff=-1) 
[h264 @ 0x7fa038066280] error while decoding MB 38 17
[h264 @ 0x7fa038066280] concealing 5891 DC, 5891 AC, 5891 MV errors in I frame
[h264 @ 0x7fa038a261c0] corrupted macroblock 67 28 (total_coeff=-1) 
[h264 @ 0x7fa038a261c0] error while decoding MB 67 28
[h264 @ 0x7fa038a261c0] concealing 4542 DC, 4542 AC, 4542 MV errors in P frame
[h264 @ 0x7fa03921c380] corrupted macroblock 31 22 (total_coeff=-1) 
[h264 @ 0x7fa03921c380] error while decoding MB 31 22
[h264 @ 0x7fa03921c380] concealing 5298 DC, 5298 AC, 5298 MV errors in P frame
[h264 @ 0x7fa038021c80] Invalid level prefix760KB sq=    0B f=0/0   
[h264 @ 0x7fa038021c80] error while decoding MB 7 51
[h264 @ 0x7fa038021c80] concealing 1842 DC, 1842 AC, 1842 MV errors in P frame
[h264 @ 0x7fa038da0ac0] corrupted macroblock 37 9 (total_coeff=-1)  
[h264 @ 0x7fa038da0ac0] error while decoding MB 37 9
[h264 @ 0x7fa038da0ac0] concealing 6852 DC, 6852 AC, 6852 MV errors in P frame
[h264 @ 0x7fa038da0ac0] corrupted macroblock 111 12 (total_coeff=-1)
[h264 @ 0x7fa038da0ac0] error while decoding MB 111 12
[h264 @ 0x7fa038da0ac0] concealing 6418 DC, 6418 AC, 6418 MV errors in P frame
[h264 @ 0x7fa03921c380] Invalid level prefix556KB sq=    0B f=0/0   
[h264 @ 0x7fa03921c380] error while decoding MB 49 31
[h264 @ 0x7fa03921c380] concealing 4200 DC, 4200 AC, 4200 MV errors in P frame
[h264 @ 0x7fa038021c80] concealing 7020 DC, 7020 AC, 7020 MV errors in I frame
[h264 @ 0x7fa038a261c0] corrupted macroblock 44 5 (total_coeff=-1)  
[h264 @ 0x7fa038a261c0] error while decoding MB 44 5
[h264 @ 0x7fa038a261c0] concealing 7325 DC, 7325 AC, 7325 MV errors in P frame
[h264 @ 0x7fa038a2b2c0] Invalid level prefix410KB sq=    0B f=0/0   
[h264 @ 0x7fa038a2b2c0] error while decoding MB 57 15
[h264 @ 0x7fa038a2b2c0] concealing 6112 DC, 6112 AC, 6112 MV errors in I frame
[h264 @ 0x7fa03862d080] negative number of zero coeffs at 77 36/0   
[h264 @ 0x7fa03862d080] error while decoding MB 77 36
[h264 @ 0x7fa03862d080] concealing 3572 DC, 3572 AC, 3572 MV errors in P frame

This is not acceptable for a production scenario. Any help is highly appreciated.

To narrow down the issue: does the output RTSP stream play well on the machine running the DeepStream application? If there is no monitor, you can record the stream into a file, then use ffplay or another player to play it back.

gst-launch-1.0 rtspsrc location=XXX ! rtph264depay ! h264parse ! 'video/x-h264,stream-format=byte-stream' ! filesink location=test.h264

@fanzh Thank you for your response,
However, the problem persists even on the server (the machine running the application). Sometimes it just shows a black screen.

What is the problem? Is this a problem with the pipeline? Can you please help, as we have ordered Blackwell GPUs to deploy a video analytics solution? Any help is highly appreciated.

Below is the pipeline once again:
```

1️⃣ INPUT + INFERENCE PIPELINE
┌──────────────┐
│ uridecodebin │  (N sources)
└──────┬───────┘
       │
       ▼
┌────────────────┐
│  nvstreammux   │  (batch)
└──────┬─────────┘
       │
       ▼
┌──────────────┐
│   nvinfer    │  (PGIE)
└──────┬───────┘
       │
       ▼
┌──────────────┐
│  nvtracker   │
└──────┬───────┘
       │
       ▼
┌──────────────┐
│     tee      │  ← tee_pre_osd
└──────┬───────┘


At this point the pipeline splits.

2️⃣ BRANCH A — LOGIC / MQTT / IMAGE SAVING
tee_pre_osd
   │
   ▼
┌────────┐
│ queue6 │
└───┬────┘
    ▼
┌──────────────┐
│ nvvideoconv  │
└────┬─────────┘
     ▼
┌──────────────┐
│   nvdsosd    │
└────┬─────────┘
     ▼
┌──────────┐
│ fakesink │
└──────────┘


✔ This branch is fine
✔ Heavy CPU / GPU / IO work happens here
✔ Does not affect decoding directly

3️⃣ BRANCH B — RTSP STREAMING (PROBLEM AREA)

This is where everything breaks.

tee_pre_osd
   │
   ▼
┌────────┐
│ queue7 │
└───┬────┘
    ▼
┌──────────────┐
│ nvvideoconv  │
└────┬─────────┘
     ▼
┌──────────────┐
│   nvdsosd    │
└────┬─────────┘
     ▼
┌──────────────┐
│ nvstreamdemux│
└────┬─────────┘
     │
     ├── src_0 ──► encoder ─► rtph264pay ─► udpsink ─► UDP
     │                                      ▲
     │                                      │
     ├── src_1 ──► encoder ─► rtph264pay ─► udpsink ─► UDP
     │                                      │
     │                                      │
     └── src_N ──► encoder ─► rtph264pay ─► udpsink ─► UDP
                                            │
                                            ▼
                                   ┌────────────────┐
                                   │    udpsrc      │
                                   └──────┬─────────┘
                                          ▼
                                   ┌────────────────┐
                                   │ RTSP server    │
                                   └──────┬─────────┘
                                          ▼
                                       ffplay

First, DS 7.1 does not support Blackwell; please use DS 8.0 instead.
Second, to rule out an encoding issue: if you use “…->nvstreamdemux->nveglglessink”, is the output video fine?
Third, what is the source type, RTSP or a local file?

@fanzh Thanks. Currently I’m testing on a 4070 Ti, which is compatible with DS 7.1. Currently the source is a local file, but later it will be a live RTSP stream. Finally, how do I link “…->nvstreamdemux->nveglglessink”?
I cannot understand the pipeline.

Also, as you can see, I’ve added nvosd before the demux. Is this correct?

  1. Please refer to the following sample for how to link nvstreamdemux with nveglglessink. Please refer to the doc for the complete sample.
gst-launch-1.0 -e nvstreammux name=mux batch-size=2 width=1920 height=1080  ! nvstreamdemux name=demux ......  demux.src_0 ! queue ! nveglglessink
  2. If linking ‘nvstreamdemux’ with ‘nveglglessink’ is fine, you can try adding one encoding branch to check whether playing the output RTSP on the server machine is fine.

@fanzh . Ok I will do it and report the problem. Thanks

@fanzh

Please see the below code I wrote as per the reference provided by you.

    /* we add all elements into the pipeline */
    gst_bin_add_many(GST_BIN(pipeline), queue7, demux, NULL);

    // Add another branch (must use dynamic linking again)
    GstPad *tee_src_pad2 = gst_element_request_pad_simple(tee_pre_osd, "src_%u");
    GstPad *queue7_sink_pad = gst_element_get_static_pad(queue7, "sink");

    if (gst_pad_link(tee_src_pad2, queue7_sink_pad) != GST_PAD_LINK_OK)
    {
        g_printerr("Failed to link tee to queue7. Exiting.\n");
        return -1;
    }
    gst_object_unref (queue7_sink_pad);
    gst_object_unref (tee_src_pad2);

    /* we link the elements together */
    if (!gst_element_link_many(queue7, demux, NULL))
    {
        g_printerr("Elements could not be linked in rtsp. Exiting.\n");
        return -1;
    }
    for (guint i = 0; i < num_sources; i++)
    {

        gchar pad_name[16];
        g_snprintf(pad_name, sizeof(pad_name), "src_%u", i);

        g_print("Requesting demux pad %s\n", pad_name);

        GstPad *demux_src_pad =
            gst_element_request_pad_simple(demux, pad_name);

        if (!demux_src_pad)
        {
            g_printerr("Failed to request %s from demux\n", pad_name);
            continue;
        }

        GstElement *q = gst_element_factory_make("queue", NULL);
        GstElement *conv = gst_element_factory_make("nvvideoconvert", NULL);
        GstElement *sink = gst_element_factory_make("nveglglessink", NULL);

        if (!q || !conv || !sink)
        {
            g_printerr("Failed to create elements for %s\n", pad_name);
            gst_object_unref(demux_src_pad);
            continue;
        }

        g_object_set(q,
                     "leaky", 2,
                     "max-size-buffers", 5,
                     NULL);

        g_object_set(sink,
                     "sync", FALSE,
                     "qos", FALSE,
                     NULL);

        gst_bin_add_many(GST_BIN(pipeline), q, conv, sink, NULL);

        if (!gst_element_link_many(q, conv, sink, NULL))
        {
            g_printerr("Failed to link q -> conv -> sink for %s\n", pad_name);
            gst_object_unref(demux_src_pad);
            continue;
        }

        GstPad *q_sink_pad = gst_element_get_static_pad(q, "sink");

        if (gst_pad_link(demux_src_pad, q_sink_pad) != GST_PAD_LINK_OK)
        {
            g_printerr("Failed to link demux %s -> queue\n", pad_name);
        }
        else
        {
            g_print("demux %s -> queue -> conv -> egl OK\n", pad_name);
        }

        gst_object_unref(q_sink_pad);
        gst_object_unref(demux_src_pad);
    }

However, the elements are not linking:

~/c++/deepstream/ped_violation$ ./pedestrian_app ./models/ped_config.yml ./config.json 
Now playing : file:///home/eegrab/people.mp4
Now playing : file:///home/eegrab/people.mp4
Now playing : file:///home/eegrab/people.mp4
Now playing : file:///home/eegrab/people.mp4
Now playing : file:///home/eegrab/people.mp4
Now playing : file:///home/eegrab/people.mp4
Now playing : file:///home/eegrab/people.mp4
WARNING: Overriding infer-config batch-size (1) with number of sources (7)
Requesting demux pad src_0
demux src_0 -> queue -> conv -> egl OK
Requesting demux pad src_1

(pedestrian_app:316322): GStreamer-CRITICAL **: 15:42:19.350: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
Failed to link q -> conv -> sink for src_1
Requesting demux pad src_2

(pedestrian_app:316322): GStreamer-CRITICAL **: 15:42:19.350: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
Failed to link q -> conv -> sink for src_2
Requesting demux pad src_3
demux src_3 -> queue -> conv -> egl OK
Requesting demux pad src_4
demux src_4 -> queue -> conv -> egl OK
Requesting demux pad src_5
demux src_5 -> queue -> conv -> egl OK
Requesting demux pad src_6

(pedestrian_app:316322): GStreamer-CRITICAL **: 15:42:19.350: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
Failed to link q -> conv -> sink for src_6
libEGL warning: DRI3: failed to query the version
libEGL warning: DRI2: failed to authenticate

In this case only 4 windows open, showing a black screen with no OSD drawing. The pipeline also gets stuck, like a deadlock, which was not the case with RTSP. Can you please advise?

The current (new) sink pipeline is as below:

tee
 └─ queue
     └─ nvstreamdemux          ✅ MUST be here
         ├─ src_0
         │    └─ queue
         │        └─ nvvideoconvert
         │             └─ encoder / sink
         │
         ├─ src_1
         │    └─ queue
         │        └─ nvvideoconvert
         │             └─ encoder / sink
         │
         └─ src_N
Any help is highly appreciated.

Sorry for the late reply! The elements need to be linked in sequence: please link nvstreamdemux with the queue first, then link queue, nvvideoconvert and nveglglessink. Please also refer to the native sample deepstream/deepstream/sources/apps/sample_apps/deepstream-ucx-test for how to use nvstreamdemux.

Thank you @fanzh for your reply. I’ve linked them in the way you described. Please see the log below:

Requesting demux pad src_0
demux src_0 -> queue -> conv -> egl OK
Requesting demux pad src_1

(pedestrian_app:316322): GStreamer-CRITICAL **: 15:42:19.350: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
Failed to link q -> conv -> sink for src_1
Requesting demux pad src_2

(pedestrian_app:316322): GStreamer-CRITICAL **: 15:42:19.350: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
Failed to link q -> conv -> sink for src_2
Requesting demux pad src_3
demux src_3 -> queue -> conv -> egl OK
Requesting demux pad src_4
demux src_4 -> queue -> conv -> egl OK
Requesting demux pad src_5
demux src_5 -> queue -> conv -> egl OK
Requesting demux pad src_6

(pedestrian_app:316322): GStreamer-CRITICAL **: 15:42:19.350: gst_element_link_many: assertion 'GST_IS_ELEMENT (element_1)' failed
Failed to link q -> conv -> sink for src_6
libEGL warning: DRI3: failed to query the version
libEGL warning: DRI2: failed to authenticate

However, src_2 and src_6 failed to link, so the video does not play and the pipeline gets stuck in a deadlock. The last two lines of the above log indicate something about the sink. Any advice is highly appreciated. Thanks

As the log shows, element_1 is not an element. Please simplify the code to check why.
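A quick way to find which factory call returned NULL (the likely trigger of the GST_IS_ELEMENT assertion) is to check each pointer against its name before any linking. A minimal, library-free sketch of the pattern — `named_ptr` and `first_null` are illustrative helpers, not GStreamer API; in the real code the pointers would come from gst_element_factory_make():

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Pair each would-be element with a human-readable name. */
struct named_ptr {
    const char *name;
    const void *ptr;
};

/* Return the name of the first NULL entry (logging it), or NULL if all
 * pointers are set and linking can safely proceed. */
static const char *first_null(const struct named_ptr *v, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (v[i].ptr == NULL) {
            fprintf(stderr, "element '%s' was not created\n", v[i].name);
            return v[i].name;
        }
    }
    return NULL;
}
```

In the per-stream loop this would look like `struct named_ptr v[] = {{"queue", q}, {"nvvideoconvert", conv}, {"nveglglessink", sink}};` checked before gst_element_link_many; a missing nveglglessink would also be consistent with the libEGL warnings at the end of the log.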

@fanzh I checked: there is no black screen initially; however, after 25000 frames a black screen occurs on all the windows. I followed your advice and implemented the pipeline with “nveglglessink”. What might be the problem? Any help is highly appreciated. Here is the full pipeline code:

/* we add all elements into the pipeline */
    gst_bin_add_many(GST_BIN(pipeline), queue1, pgie, queue2, demux, NULL);
    /* we link the elements together */
    if (!gst_element_link_many(streammux, queue1, pgie, queue2, demux, NULL))
    {
        g_printerr("Elements could not be linked. Exiting.\n");
        return -1;
    }

    for (guint i = 0; i < num_sources; i++)
{
    gchar pad_name[32];
    g_snprintf(pad_name, sizeof(pad_name), "src_%u", i);

    g_print("Requesting demux pad %s\n", pad_name);

    GstPad *demux_src_pad =
        gst_element_request_pad_simple(demux, pad_name);

    if (!demux_src_pad)
    {
        g_printerr("Could not get %s from demux\n", pad_name);
        continue;
    }

    /* ---------- Unique element names ---------- */
    std::string q1_name   = "disp_queue1_" + std::to_string(i);
    std::string conv1_name= "disp_conv1_"  + std::to_string(i);
    std::string caps_name = "disp_caps_"   + std::to_string(i);
    std::string osd_name  = "disp_osd_"    + std::to_string(i);
    std::string conv2_name= "disp_conv2_"  + std::to_string(i);
    std::string sink_name = "disp_sink_"   + std::to_string(i);

    /* ---------- Elements ---------- */
    GstElement *q1   = gst_element_factory_make("queue", q1_name.c_str());
    GstElement *conv1= gst_element_factory_make("nvvideoconvert", conv1_name.c_str());
    GstElement *caps = gst_element_factory_make("capsfilter", caps_name.c_str());
    GstElement *osd  = gst_element_factory_make("nvdsosd", osd_name.c_str());
    GstElement *conv2= gst_element_factory_make("nvvideoconvert", conv2_name.c_str());
    GstElement *sink = gst_element_factory_make("nveglglessink", sink_name.c_str());

    if (!q1 || !conv1 || !caps || !osd || !conv2 || !sink)
    {
        g_printerr("Element creation failed for stream %u\n", i);
        gst_object_unref(demux_src_pad);
        continue;
    }

    /* ---------- Properties ---------- */
    g_object_set(q1,
                 "leaky", 2,
                 "max-size-buffers", 5,
                 NULL);

    GstCaps *rgba_caps =
        gst_caps_from_string("video/x-raw(memory:NVMM), format=RGBA");
    g_object_set(caps, "caps", rgba_caps, NULL);
    gst_caps_unref(rgba_caps);

    g_object_set(sink,
                 "sync", FALSE,
                 "qos", FALSE,
                 NULL);

    /* ---------- Add BEFORE linking ---------- */
    gst_bin_add_many(GST_BIN(pipeline),
                     q1, conv1, caps, osd, conv2, sink,
                     NULL);

    /* ---------- Link display chain ---------- */
    if (!gst_element_link_many(q1, conv1, caps, osd, conv2, sink, NULL))
    {
        g_printerr("Failed to link display chain for stream %u\n", i);
        gst_object_unref(demux_src_pad);
        continue;
    }

    /* ---------- demux → queue ---------- */
    GstPad *q1_sink_pad = gst_element_get_static_pad(q1, "sink");

    if (gst_pad_link(demux_src_pad, q1_sink_pad) != GST_PAD_LINK_OK)
    {
        g_printerr("demux %s → queue failed\n", pad_name);
    }
    else
    {
        g_print("demux %s → EGL display OK\n", pad_name);
    }

    gst_object_unref(q1_sink_pad);
    gst_object_unref(demux_src_pad);
}

Below is the config file:

streammux:
  batch-size: 1
  batched-push-timeout: 40000
  width: 1920
  height: 1056
  attach-sys-ts : 1
  live-source : 0

osd:
  process-mode: 1
  display-text: 1

#If there is ROI
analytics-config:
        #filename: config_nvdsanalytics.txt

triton:
  ## 0:disable 1:enable
  enable: 0
  ##0:trtion-native 1:triton-grpc
  type: 0
  ##car mode, 1:US car plate model|2: Chinese car plate model
  car-mode: 1

output:
  ## 1:file output  2:fake output 3:eglsink output
  type: 1
  ## 0: H264 encoder  1:H265 encoder
  enc: 0
  bitrate: 4000000
  ##The file name without suffix
  filename: ped_violation

primary-gie:
  ##For car detection
  config-file-path: ./pgie_config.yml
  unique-id: 1

  1. Could you share a complete log? Wondering if there is any abnormal log.
  2. Noticing nvmultistreamtiler is not used, are you testing only one source? How long are 25000 frames? Is the source a long local file? To narrow down the issue, if only using "uridecodebin → nvstreammux → nvinfer → nvtracker → nvvideoconv → nvdsosd → nveglglessink", is the output fine? Please refer to the following pipeline.
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux  ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=dstest1_pgie_config.yml ! nvmultistreamtiler  ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd  ! nveglglessink  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder !  mux.sink_1

@fanzh . Thank you so much for your time.

  1. I’m sharing the GST_DEBUG=5 log:
$ export GST_DEBUG=5
$ export GST_DEBUG_NO_COLOR=1
$ export GST_DEBUG_FILE=/tmp/deepstream.log
$ ./pedestrian_app ./models/ped_config.yml ./config.json

link: deepstream.log

  2. I have multiple sources (7 currently, but around 20 in production), as you can see from the for loop in the code. The tiler cannot be used, as each stream needs a separate window (a front-end requirement). The video is 25 FPS; roughly after 31000 frames a black screen appears on all 7 windows. The source is a local 10-minute video. The same phenomenon happens even with 7 RTSP streams. I’ll check the output of the pipeline you suggested and inform you.

Thanks once again

No, the output (7 streams) gets a black screen even with the tiler after 31000 frames. The application keeps running, with detection etc. Any help is highly appreciated. Is it a DS 7.1 issue?

Thanks for the update! There seems to be no error in deepstream.log.

  1. batch-size of nvstreammux and nvinfer should be set to the number of sources. If set to 1, there will be a performance issue.
  2. To continue narrowing down the issue: if using “uridecodebin → nvstreammux → nvmultistreamtiler → nveglglessink”, does the issue remain? Wondering which element causes the issue. You can set sync to 0 for nveglglessink, which means playing as soon as possible. Please refer to the following cmd.
gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 !  nvmultistreamtiler    ! nveglglessink sync=0  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 !  mux.sink_1
  3. Can each file be played well to the end with the following cmd?
gst-launch-1.0 uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4    ! nveglglessink sync=0
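Following point 1 above, the streammux block of the config file shown earlier would change along these lines (a sketch for the 7-source test; the key names mirror the user's config file, and nvinfer's batch-size would similarly be set in pgie_config.yml):

```yaml
streammux:
  # match the number of sources (7 in this test, ~20 in production)
  batch-size: 7
  batched-push-timeout: 40000
  width: 1920
  height: 1056
  attach-sys-ts: 1
  live-source: 0
```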