nvurisrcbin RTSP reconnection not working with DeepStream 7.1 Service Maker

Please provide complete information as applicable to your setup.

• dGPU
• DeepStream Version 7.1
• Docker image
• Bug

I am using DeepStream 7.1 Service Maker in C++, with this pipeline consuming RTSP streams from mediaMTX. When I restart mediaMTX, the reconnection does not work, even though it is configured at the nvurisrcbin level.
Note that a patch is also included to fix a known memory leak issue, as described here: Memory leak in deepstream-test4 python - #3 by junshengy

Please feel free to review the attached C++ code.

/*
 * SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: LicenseRef-NvidiaProprietary
 *
 * NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
 * property and proprietary rights in and to this material, related
 * documentation and any modifications thereto. Any use, reproduction,
 * disclosure or distribution of this material and related documentation
 * without an express license agreement from NVIDIA CORPORATION or
 * its affiliates is strictly prohibited.
 */

#include <iostream>
#include <string>
#include <fstream>
#include <vector>
#include <cstdlib>  // std::getenv, std::stoi helpers below rely on this
#include <nlohmann/json.hpp>

#include "pipeline.hpp"

// Dynamically read muxer resolution from environment variables
int get_muxer_width() {
    const char* env = std::getenv("MUXER_WIDTH");
    return env ? std::stoi(env) : 960;
}

int get_muxer_height() {
    const char* env = std::getenv("MUXER_HEIGHT");
    return env ? std::stoi(env) : 544;
}
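// (Editor's sketch, hypothetical helper, not part of the original post.)
// std::stoi above throws std::invalid_argument on non-numeric input, so a
// hardened variant could validate the value and fall back to the default:
static int get_env_int(const char* name, int fallback) {
  const char* env = std::getenv(name);
  if (!env) return fallback;
  char* end = nullptr;
  long v = std::strtol(env, &end, 10);
  // Accept only fully-numeric, positive values; otherwise use the fallback.
  return (end != env && *end == '\0' && v > 0) ? static_cast<int>(v) : fallback;
}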

#define PGIE_CONFIG_FILE "/opt/nvidia/deepstream/deepstream/deepstreamInterface/models/detection_and_classification_models/people_nvidia_detector/native_client_config.txt"
#define TRACKER_LL_CONFIG_FILE "/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_max_perf.yml"
#define TRACKER_LL_LIB_FILE "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so"
#define ANALYTICS_CONFIG_FILE "/opt/nvidia/deepstream/deepstream/service-maker/sources/apps/cpp/deepstream_app/configs/analytics_config.txt"
#define MSGCONV_CONFIG_FILE "/opt/nvidia/deepstream/deepstream/service-maker/sources/apps/cpp/deepstream_app/configs/msgconv_config.txt"
#define MSGCONV_PROTO_LIB "/opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv/libnvds_msgconv.so"
#define MSGBROKER_CONN_STR "host.docker.internal;9093"
#define MGSBROKER_TOPIC "raw-nvidia-peoplenet-RawPeopleNetEvent"
#define MSGBROKER_PROTO_LIB "/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so"
#define MSGBROKER_CONFIG_FILE "/opt/nvidia/deepstream/deepstream/service-maker/sources/apps/cpp/deepstream_app/configs/msgbroker_config.txt"

using namespace deepstream;

int main (int argc, char *argv[])
{
  try {

    // Default: use PIPELINE_SCHEMA env var
    const char* env_schema = std::getenv("PIPELINE_SCHEMA");
    if (!env_schema) {
      std::cerr << "PIPELINE_SCHEMA environment variable is not set and no YAML file provided." << std::endl;
      return 1;
    }
    nlohmann::json schema_json = nlohmann::json::parse(env_schema);
    std::vector<std::string> uris;
    for (const auto& component : schema_json["components"]) {
      if (component.contains("stream") && !component["stream"].is_null()) {
        const auto& stream = component["stream"];
        if (stream.contains("streamRtspUri") && !stream["streamRtspUri"].get<std::string>().empty()) {
          uris.push_back(stream["streamRtspUri"]);
        }
      }
    }
    std::cout << "Extracted RTSP URIs:" << std::endl;
    for (const auto& uri : uris) {
      std::cout << uri << std::endl;
    }
    uint num_sources = uris.size();

    // Get screen_output from schema_json, default to false if not present
    bool screen_output = false;
    if (schema_json.contains("pipelineConfig") && schema_json["pipelineConfig"].contains("screen_output")) {
      screen_output = schema_json["pipelineConfig"]["screen_output"].get<bool>();
    }

    Pipeline pipeline("deepstream-app");
    for (uint i = 0; i < num_sources; i++) {
      std::string name = "src_" + std::to_string(i);
      pipeline.add("nvurisrcbin", name, "uri", uris[i], "rtsp-reconnect-interval", 15, "rtsp-reconnect-attempts", -1);
    }
    pipeline.add("nvstreammux", "mux", "batch-size", num_sources, "width", get_muxer_width(), "height", get_muxer_height())
            .add("queue", "queue_infer")
            .add("nvinfer", "infer", "config-file-path", PGIE_CONFIG_FILE, "batch-size", num_sources)
            .add("queue", "queue_tracker")
            .add("nvtracker", "tracker", "ll-config-file", TRACKER_LL_CONFIG_FILE, "ll-lib-file", TRACKER_LL_LIB_FILE)
            .add("queue", "queue_analytics")
            .add("nvdsanalytics", "analytics", "config-file", ANALYTICS_CONFIG_FILE);

    if (screen_output) {
      pipeline.add("tee", "tee_analytics")
              .add("queue", "queue_video")
              .add("nvmultistreamtiler", "tiler", "width", get_muxer_width(), "height", get_muxer_height())
              .add("nvvideoconvert", "converter")
              .add("nvdsosd", "osd")
              .add("nveglglessink", "sink")
              .add("queue", "queue_msgconv")
              .add("nvmsgconv", "msgconv", "msg2p-lib", MSGCONV_PROTO_LIB, "config", MSGCONV_CONFIG_FILE, "payload-type", 1, "msg2p-newapi", 1, "multiple-payloads", 0, "frame-interval", 1)
              .add("queue", "queue_msgbroker")
              .add("nvmsgbroker", "msgbroker", "conn-str", MSGBROKER_CONN_STR, "proto-lib", MSGBROKER_PROTO_LIB, "sync", false, "topic", MGSBROKER_TOPIC, "new-api", 1, "config", MSGBROKER_CONFIG_FILE);
      // Main pipeline up to tee
      pipeline.link("mux", "queue_infer", "infer", "queue_tracker", "tracker", "queue_analytics", "analytics", "tee_analytics");
      // Video branch
      pipeline.link("tee_analytics", "queue_video", "tiler", "converter", "osd", "sink");
      // Messaging branch
      pipeline.link("tee_analytics", "queue_msgconv", "msgconv", "queue_msgbroker", "msgbroker");
    } else {
      pipeline.add("queue", "queue_msgconv")
              .add("nvmsgconv", "msgconv", "msg2p-lib", MSGCONV_PROTO_LIB, "config", MSGCONV_CONFIG_FILE, "payload-type", 1, "msg2p-newapi", 1, "multiple-payloads", 0, "frame-interval", 1)
              .add("queue", "queue_msgbroker")
              .add("nvmsgbroker", "msgbroker", "conn-str", MSGBROKER_CONN_STR, "proto-lib", MSGBROKER_PROTO_LIB, "sync", false, "topic", MGSBROKER_TOPIC, "new-api", 1, "config", MSGBROKER_CONFIG_FILE);
      pipeline.link("mux", "queue_infer", "infer", "queue_tracker", "tracker", "queue_analytics", "analytics", "queue_msgconv", "msgconv", "queue_msgbroker", "msgbroker");
    }
    // Link sources
    for (uint i = 0; i < num_sources; i++) {
      std::string src = "src_" + std::to_string(i);
      pipeline.link({src, "mux"}, {"", "sink_%u"});
    }
    pipeline.start().wait();
  } catch (const std::exception &e) {
    std::cerr << e.what() << std::endl;
    return -1;
  }
  return 0;
}
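For reference, a hypothetical PIPELINE_SCHEMA value matching the fields the code reads (the URIs and port are made up; the key names mirror the JSON lookups above):

```json
{
  "pipelineConfig": { "screen_output": false },
  "components": [
    { "stream": { "streamRtspUri": "rtsp://localhost:8554/cam0" } },
    { "stream": { "streamRtspUri": "rtsp://localhost:8554/cam1" } }
  ]
}
```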

Thank you

Thank you for sharing your implementation!

Hi
Any news on that bug?
Thanks

The code you posted is similar to /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/cpp/deepstream_test3_app. I’ve tested with the attached code, and it works well when restarting the mediaMTX stream. For the live-stream case, all configurations should follow the ordinary DeepStream pipeline configuration rules.

deepstream_test3.cpp (3.7 KB)

Hello,

Thanks for your answer.

Honestly, the problem is intermittent: sometimes the reconnection works, sometimes it doesn’t. Nevertheless, I have the impression that these parameters could strengthen the reconnection: “batched-push-timeout”, 33333, “live-source”, 1.

However, I wonder why this seems to help. Does “batched-push-timeout” necessarily need to be set? What is its default value? Isn’t “live-source” deprecated (see Gst-nvstreammux New — DeepStream documentation)?

Thanks

For the nvstreammux settings, please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

“live-source” is always there. It is important for live-stream input cases.

Why is it presented as deprecated in the table, then?

That table is for the “new nvstreammux”, not “nvstreammux”.

Doesn’t DeepStream 7.1 default to the new nvstreammux?

No. The nvstreammux is the default. Please read the “Note” in the document. Gst-nvstreammux New — DeepStream documentation
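For reference, the “Note” in that document describes the new mux as opt-in via an environment variable. To my understanding it is enabled like this (a sketch; please verify against the docs for your DeepStream version):

```shell
# Opt in to the new nvstreammux for the current shell session
export USE_NEW_NVSTREAMMUX=yes
```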

Thank you.

It seems to be working better now. However, with 4 input RTSP streams, after some time, I experience this issue with one of the streams, and with more than one stream if I wait longer:

Debug info: gstdsnvurisrcbin.cpp(1552): watch_source_status (): /GstPipeline:deepstream-app/GstDsNvUriSrcBin:src_0
WARNING from src: Could not write to resource.
Debug info: ../gst/rtsp/gstrtspsrc.c(6607): gst_rtspsrc_try_send (): /GstPipeline:deepstream-app/GstDsNvUriSrcBin:src_0/GstRTSPSrc:src:
Could not send message. (Received end-of-file)
WARNING from src: Could not write to resource.
Debug info: ../gst/rtsp/gstrtspsrc.c(9034): gst_rtspsrc_pause (): /GstPipeline:deepstream-app/GstDsNvUriSrcBin:src_0/GstRTSPSrc:src:
Could not send message. (Received end-of-file)
WARNING from src: Could not open resource for reading.
Debug info: ../gst/rtsp/gstrtspsrc.c(6427): gst_rtspsrc_setup_auth (): /GstPipeline:deepstream-app/GstDsNvUriSrcBin:src_0/GstRTSPSrc:src:
No supported authentication protocol was found
WARNING from src: Not found
Debug info: ../gst/rtsp/gstrtspsrc.c(6736): gst_rtspsrc_send (): /GstPipeline:deepstream-app/GstDsNvUriSrcBin:src_0/GstRTSPSrc:src:
Not Found (404)
Resetting source -1, attempts: 2

I’ve checked, and that stream src_0 is available; I can consume the stream. Any idea?

How did you check it?

Consuming it with ffplay and vlc for a few minutes

ffplay and VLC support error resilience and some non-standard AVC or HEVC streams, while the GStreamer open-source RTSP elements do not. Please make sure your RTSP server strictly follows RFC 2326. rtspsrc

You may also need to check whether the stream generated by your RTSP server has suitable IDR interval.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.