Why Does rtmpsink Only Establish a Connection on Program Exit When Secondary GIE Is Enabled?

Please provide complete information as applicable to your setup.

Version (docker: nvcr.io/nvidia/deepstream:6.0.1-devel)

Hardware Platform (Jetson / GPU): GPU
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 12.0
CUDA Runtime Version: 11.4
TensorRT Version: 8.0
cuDNN Version: 8.2
libNVWarp360 Version: 2.0.1d3

Issue: rtmpsink Only Establishes a Connection on Program Exit When the Secondary GIE Is Enabled (everything works fine with RTSP)

Logs (in both cases, only the secondary GIE's enable setting was changed):

Secondary GIE disabled:

GST_DEBUG=0,rtmpsink:5 ./deepstream-app -c /root/workspace/VStreamer/workspace/config/app_config.txt
0:00:00.033554943 871413 0x55f43accd580 DEBUG               rtmpsink gstrtmpsink.c:361:gst_rtmp_sink_uri_set_uri:<sink_sub_bin_rtmpsink1> Changed URI to rtmp://localhost:1935/live/video
** INFO: <create_rtmp_sink:605>: Launched RTMP Streaming at rtmp://localhost:1935/live/video
0:00:00.045049517 871413 0x55f43accd580 DEBUG               rtmpsink gstrtmpsink.c:179:gst_rtmp_sink_start:<sink_sub_bin_rtmpsink1> Created RTMP object
0:00:01.232532087 871413 0x55f43accd580 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/root/workspace/VStreamer/workspace/pphuman/humandet.onnx_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image           3x640x640       min: 1x3x640x640     opt: 8x3x640x640     Max: 8x3x640x640     
1   OUTPUT kFLOAT concat_14.tmp_0 1x8400          min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT p2o.Mul.157     8400x4          min: 0               opt: 0               Max: 0               

0:00:01.232634500 871413 0x55f43accd580 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /root/workspace/VStreamer/workspace/pphuman/humandet.onnx_b8_gpu0_fp16.engine
0:00:01.234060726 871413 0x55f43accd580 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/root/workspace/VStreamer/workspace/config/../pphuman/primary_gie_config.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

** INFO: <bus_callback:194>: Pipeline ready


**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
0:00:06.469217111 871413 0x55f43acccd40 DEBUG               rtmpsink gstrtmpsink.c:402:gst_rtmp_sink_setcaps:<sink_sub_bin_rtmpsink1> caps set to video/x-flv
0:00:06.469314223 871413 0x55f43acccd40 DEBUG               rtmpsink gstrtmpsink.c:402:gst_rtmp_sink_setcaps:<sink_sub_bin_rtmpsink1> caps set to video/x-flv, streamheader=(buffer)< 464c5601010000000900000000, 120000fd0000000000000002000a6f6e4d657461446174610800000009000c766964656f636f646563696400401c0000000000000005776964746800409e0000000000000006686569676874004090e00000000000000c417370656374526174696f58003ff0000000000000000c417370656374526174696f59003ff000000000000000096672616d6572617465004039000000000000000d766964656f6461746172617465000000000000000000000f6d6574616461746163726561746f7202001a4753747265616d657220312e31342e3520464c56206d75786572000c6372656174696f6e6461746502001753756e204f637420382030363a33353a3039203230323300000900000108, 0900002d0000000000000017000000000142c028ffe100196742c02895900780227e5c05a808080a000007d0000186a10801000468cb8f2000000038 >
0:00:06.469324007 871413 0x55f43acccd40 DEBUG               rtmpsink gstrtmpsink.c:443:gst_rtmp_sink_setcaps:<sink_sub_bin_rtmpsink1> have 341 bytes of header data
** INFO: <bus_callback:180>: Pipeline running

0:00:06.635790964 871413 0x55f43acccd40 DEBUG               rtmpsink gstrtmpsink.c:253:gst_rtmp_sink_render:<sink_sub_bin_rtmpsink1> Opened connection to rtmp://localhost:1935/live/video
**PERF:  27.44 (26.79)
**PERF:  24.81 (25.51)
**PERF:  24.79 (25.40)
q
Quitting
App run successful

Secondary GIE enabled:

0:00:00.035810427 872003 0x55ae4d11e180 DEBUG               rtmpsink gstrtmpsink.c:361:gst_rtmp_sink_uri_set_uri:<sink_sub_bin_rtmpsink1> Changed URI to rtmp://localhost:1935/live/video
** INFO: <create_rtmp_sink:605>: Launched RTMP Streaming at rtmp://localhost:1935/live/video
0:00:00.047472562 872003 0x55ae4d11e180 DEBUG               rtmpsink gstrtmpsink.c:179:gst_rtmp_sink_start:<sink_sub_bin_rtmpsink1> Created RTMP object
0:00:01.189414405 872003 0x55ae4d11e180 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 2]: deserialized trt engine from :/root/workspace/VStreamer/workspace/pphuman/pphgnet.onnx_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 2
0   INPUT  kFLOAT x               3x256x192       min: 1x3x256x192     opt: 8x3x256x192     Max: 8x3x256x192     
1   OUTPUT kFLOAT sigmoid_12.tmp_0 26              min: 0               opt: 0               Max: 0               

0:00:01.189512875 872003 0x55ae4d11e180 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 2]: Use deserialized engine model: /root/workspace/VStreamer/workspace/pphuman/pphgnet.onnx_b8_gpu0_fp16.engine
0:00:01.190261401 872003 0x55ae4d11e180 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 2]: Load new model:/root/workspace/VStreamer/workspace/config/../pphuman/secondary_gie_config.txt sucessfully
0:00:01.306955924 872003 0x55ae4d11e180 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/root/workspace/VStreamer/workspace/pphuman/humandet.onnx_b8_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image           3x640x640       min: 1x3x640x640     opt: 8x3x640x640     Max: 8x3x640x640     
1   OUTPUT kFLOAT concat_14.tmp_0 1x8400          min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT p2o.Mul.157     8400x4          min: 0               opt: 0               Max: 0               

0:00:01.307041475 872003 0x55ae4d11e180 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /root/workspace/VStreamer/workspace/pphuman/humandet.onnx_b8_gpu0_fp16.engine
0:00:01.308448926 872003 0x55ae4d11e180 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/root/workspace/VStreamer/workspace/config/../pphuman/primary_gie_config.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume

** INFO: <bus_callback:194>: Pipeline ready


**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
**PERF:  0.00 (0.00)
q
Quitting
0:00:21.290012779 872003 0x55ae4d11d990 DEBUG               rtmpsink gstrtmpsink.c:402:gst_rtmp_sink_setcaps:<sink_sub_bin_rtmpsink1> caps set to video/x-flv
0:00:21.290078774 872003 0x55ae4d11d990 DEBUG               rtmpsink gstrtmpsink.c:402:gst_rtmp_sink_setcaps:<sink_sub_bin_rtmpsink1> caps set to video/x-flv, streamheader=(buffer)< 464c5601000000000900000000, 1200006b0000000000000002000a6f6e4d657461446174610800000002000f6d6574616461746163726561746f7202001a4753747265616d657220312e31342e3520464c56206d75786572000c6372656174696f6e6461746502001753756e204f637420382030363a33363a3034203230323300000900000076 >
0:00:21.290089123 872003 0x55ae4d11d990 DEBUG               rtmpsink gstrtmpsink.c:443:gst_rtmp_sink_setcaps:<sink_sub_bin_rtmpsink1> have 135 bytes of header data
0:00:21.378224896 872003 0x55ae4d11d990 DEBUG               rtmpsink gstrtmpsink.c:253:gst_rtmp_sink_render:<sink_sub_bin_rtmpsink1> Opened connection to rtmp://localhost:1935/live/video
App run successful

Details

Command used:

GST_DEBUG=0,rtmpsink:5 ./deepstream-app -c /root/workspace/config/app_config.txt

Configuration file used:

app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[source0]
enable=1
gpu-id=0
type=2
latency=1000
uri=http://localhost:1046/live?port=1045&app=live&stream=c_4_live

[streammux]
gpu-id=0
live-source=1
buffer-pool-size=4
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
sync-inputs=1
max-latency=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
config-file=../pphuman/primary_gie_config.txt

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
config-file=../pphuman/secondary_gie_config.txt

[sink0]
enable=1
gpu-id=0
type=4
sync=0
qos=0
container=1
codec=1
bitrate=4000000
iframeinterval=50
enc-type=0
source-id=0
output-file=rtmp://localhost:1935/live/video

primary_gie_config.txt

[property]
interval=1
net-scale-factor=0.0039215697906911373
model-color-format=0
maintain-aspect-ratio=1
symmetric-padding=0
cluster-mode=2
batch-size=8
num-detected-classes=1
labelfile-path=humandet.txt
network-mode=2
onnx-file=humandet.onnx
model-engine-file=humandet.onnx_b8_gpu0_fp16.engine
network-type=0
parse-bbox-func-name=NvDsInferParseCustomPPYOLOE
custom-lib-path=../config/libparse.so

[class-attrs-all]
pre-cluster-threshold=0.5
nms-iou-threshold=0.4
topk=255

secondary_gie_config.txt

[property]
classifier-threshold=0.5
net-scale-factor=0.0039215697906911373
model-color-format=0
maintain-aspect-ratio=1
symmetric-padding=0
batch-size=8
labelfile-path=pphgnet.txt
network-mode=2
onnx-file=pphgnet.onnx
model-engine-file=pphgnet.onnx_b8_gpu0_fp16.engine
network-type=1
parse-classifier-func-name=NvDsInferClassiferParseCustomPPHGNET
custom-lib-path=../config/libparse.so

Code used to enable rtmpsink (create_udpsink_bin in deepstream_sink_bin.c, modified):

static gboolean create_udpsink_bin(NvDsSinkEncoderConfig *config,
                                   NvDsSinkBinSubBin *bin) {
  GstCaps *caps = NULL;
  gboolean ret = FALSE;
  gchar elem_name[50];
  gchar encode_name[50];
  int probe_id = 0;

  // guint rtsp_port_num = g_rtsp_port_num++;
  uid++;

  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin%d", uid);
  bin->bin = gst_bin_new(elem_name);
  if (!bin->bin) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }

  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin_queue%d", uid);
  bin->queue = gst_element_factory_make(NVDS_ELEM_QUEUE, elem_name);
  if (!bin->queue) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }

  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin_transform%d", uid);
  bin->transform = gst_element_factory_make(NVDS_ELEM_VIDEO_CONV, elem_name);
  if (!bin->transform) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }

  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin_cap_filter%d", uid);
  bin->cap_filter = gst_element_factory_make(NVDS_ELEM_CAPS_FILTER, elem_name);
  if (!bin->cap_filter) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }

  if (config->enc_type == NV_DS_ENCODER_TYPE_SW)
    caps = gst_caps_from_string("video/x-raw, format=I420");
  else
    caps = gst_caps_from_string("video/x-raw(memory:NVMM), format=I420");

  g_object_set(G_OBJECT(bin->cap_filter), "caps", caps, NULL);

  g_snprintf(encode_name, sizeof(encode_name), "sink_sub_bin_encoder%d", uid);

  switch (config->codec) {
  case NV_DS_ENCODER_H264:
    bin->codecparse = gst_element_factory_make("h264parse", "h264-parser");
    if (config->enc_type == NV_DS_ENCODER_TYPE_SW)
      bin->encoder =
          gst_element_factory_make(NVDS_ELEM_ENC_H264_SW, encode_name);
    else
      bin->encoder =
          gst_element_factory_make(NVDS_ELEM_ENC_H264_HW, encode_name);
    break;
  case NV_DS_ENCODER_H265:
    bin->codecparse = gst_element_factory_make("h265parse", "h265-parser");
    g_object_set(G_OBJECT(bin->codecparse), "config-interval", -1, NULL);
    if (config->enc_type == NV_DS_ENCODER_TYPE_SW)
      bin->encoder =
          gst_element_factory_make(NVDS_ELEM_ENC_H265_SW, encode_name);
    else
      bin->encoder =
          gst_element_factory_make(NVDS_ELEM_ENC_H265_HW, encode_name);
    break;
  default:
    goto done;
  }

  if (!bin->encoder) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", encode_name);
    goto done;
  }

  NVGSTDS_ELEM_ADD_PROBE(probe_id, bin->encoder, "sink", seek_query_drop_prob,
                         GST_PAD_PROBE_TYPE_QUERY_UPSTREAM, bin);

  probe_id = probe_id; // self-assignment only silences the unused-variable warning

  if (config->enc_type == NV_DS_ENCODER_TYPE_SW) {
    // bitrate is in kbits/sec for software encoder x264enc and x265enc
    g_object_set(G_OBJECT(bin->encoder), "bitrate", config->bitrate / 1000,
                 NULL);
  } else {
    g_object_set(G_OBJECT(bin->encoder), "bitrate", config->bitrate, NULL);
    g_object_set(G_OBJECT(bin->encoder), "profile", config->profile, NULL);
    g_object_set(G_OBJECT(bin->encoder), "iframeinterval",
                 config->iframeinterval, NULL);
  }

  struct cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, config->gpu_id);

  if (prop.integrated) {
    if (config->enc_type == NV_DS_ENCODER_TYPE_HW) {
      g_object_set(G_OBJECT(bin->encoder), "preset-level", 1, NULL);
      g_object_set(G_OBJECT(bin->encoder), "insert-sps-pps", 1, NULL);
      g_object_set(G_OBJECT(bin->encoder), "bufapi-version", 1, NULL);
    }
  } else {
    g_object_set(G_OBJECT(bin->transform), "gpu-id", config->gpu_id, NULL);
  }

  if (g_str_has_prefix(config->output_file_path, "rtsp"))
    ret = create_rtsp_sink(config, bin);
  else if (g_str_has_prefix(config->output_file_path, "rtmp"))
    ret = create_rtmp_sink(config, bin);
  else
    NVGSTDS_ERR_MSG_V("[%s] output_file_path cannot be parsed!", __func__);

done:
  if (caps) {
    gst_caps_unref(caps);
  }
  if (!ret) {
    NVGSTDS_ERR_MSG_V("%s failed", __func__);
  }
  return ret;
}


static gboolean create_rtmp_sink(NvDsSinkEncoderConfig *config,
                                 NvDsSinkBinSubBin *bin) {
  gboolean ret = FALSE;
  gchar elem_name[50];

  // flvmux
  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin_flvmux%d", uid);
  bin->mux = gst_element_factory_make("flvmux", elem_name);
  if (!bin->mux) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }
  g_object_set(G_OBJECT(bin->mux), "streamable", TRUE, NULL);

  // rtmpsink
  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin_rtmpsink%d", uid);
  bin->sink = gst_element_factory_make("rtmpsink", elem_name);
  if (!bin->sink) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }
  g_object_set(G_OBJECT(bin->sink), "location", config->output_file_path, NULL);

  // add and link
  gst_bin_add_many(GST_BIN(bin->bin), bin->queue, bin->cap_filter,
                   bin->transform, bin->encoder, bin->codecparse, bin->mux,
                   bin->sink, NULL);
  NVGSTDS_LINK_ELEMENT(bin->queue, bin->transform);
  NVGSTDS_LINK_ELEMENT(bin->transform, bin->cap_filter);
  NVGSTDS_LINK_ELEMENT(bin->cap_filter, bin->encoder);
  NVGSTDS_LINK_ELEMENT(bin->encoder, bin->codecparse);
  NVGSTDS_LINK_ELEMENT(bin->codecparse, bin->mux);
  NVGSTDS_LINK_ELEMENT(bin->mux, bin->sink);
  NVGSTDS_BIN_ADD_GHOST_PAD(bin->bin, bin->queue, "sink");

  NVGSTDS_INFO_MSG_V("Launched RTMP Streaming at %s", config->output_file_path);

  ret = TRUE;

done:
  return ret;
}

Hi there,

I hope you’re doing well. I posted a question a little while ago, but it seems to have gone unnoticed or unanswered. If you have any insights or suggestions regarding my question, I would greatly appreciate it. I understand that everyone is busy, but any assistance would be valuable to me.

Thank you in advance for your time and consideration.

1. You can try updating to the latest version (DS-6.3); this may be an issue specific to DS-6.0.
2. If you use a plain file sink instead, does your code work fine? filesink should behave the same way as rtmpsink here.
3. Try setting GST_DEBUG_DUMP_DOT_DIR=. before running the app to dump the pipeline graph to an image (see the example command after this list), and make sure all elements in the pipeline work normally.
4. You can try the patch I provided. rtmpsink just streams FLV.
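
As an illustration (not from the original reply): assuming Graphviz is installed, and that deepstream-app dumps the pipeline graph once the variable is set, something like the following produces the image; the .dot file name below is a placeholder and will differ on your machine.

GST_DEBUG_DUMP_DOT_DIR=. ./deepstream-app -c /root/workspace/config/app_config.txt
dot -Tsvg <dumped-pipeline-graph>.dot -o pipeline.svg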


I’m happy to report that I successfully resolved my problem following your guidance.

I tried using the patch you provided, and it worked perfectly. Then, I carefully compared the two pipelines and eventually found the reason why my code’s rtmpsink couldn’t establish a connection. It was because the async property of rtmpsink is set to true by default, causing rtmpsink to remain in the READY state and return ASYNC when transitioning to PAUSED. This issue was resolved by setting the async property of rtmpsink to false.
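
For anyone hitting the same problem, the fix amounts to one extra property set in the create_rtmp_sink snippet posted above (a minimal sketch based on my code; "async" is the standard property rtmpsink inherits from GstBaseSink):

  // rtmpsink
  g_snprintf(elem_name, sizeof(elem_name), "sink_sub_bin_rtmpsink%d", uid);
  bin->sink = gst_element_factory_make("rtmpsink", elem_name);
  if (!bin->sink) {
    NVGSTDS_ERR_MSG_V("Failed to create '%s'", elem_name);
    goto done;
  }
  g_object_set(G_OBJECT(bin->sink), "location", config->output_file_path, NULL);
  // "async" defaults to TRUE, so the READY->PAUSED transition returns ASYNC while
  // the sink waits for preroll; setting it to FALSE lets the pipeline reach
  // PLAYING and open the RTMP connection immediately.
  g_object_set(G_OBJECT(bin->sink), "async", FALSE, NULL);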

I wanted to express my heartfelt gratitude for your invaluable assistance in resolving my issue. Once again, thank you so much for your kindness and expertise.

