How to output an RTSP stream via YAML in Service Maker?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
rtx 2060
• DeepStream Version
deepstream 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
with deepstream 7.0 docker image
• NVIDIA GPU Driver Version (valid for GPU only)
535
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

How to output an RTSP stream via YAML in Service Maker?
I tried to add an RTSP stream output in DeepStream Service Maker. I started from /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/deepstream_test1_app/deepstream_test1.cpp and configured it as follows:

deepstream:
  nodes:
  - type: nvurisrcbin
    name: src
    properties:
      uri: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
  - type: nvstreammux
    name: mux
    properties:
      batch-size: 1
      width: 1280
      height: 720
  - type: nvinferbin
    name: infer
    properties:
      config-file-path: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml
  - type: nvosdbin
    name: osd
  - type: tee
    name: tee
  - type: queue
    name: queue1
  - type: queue
    name: queue2
  - type: fakesink
    name: sink1
  - type: nveglglessink
    name: sink3
  - type: nvvideoconvert
    name: nvvidconv
  - type: nvv4l2h264enc
    name: encoder
    properties:
      bitrate: 4000
  - type: rtph264pay
    name: rtppay
    properties:
      pt: 96
      config-interval: 1
  - type: udpsink
    name: sink2
    properties:
      host: 127.0.0.1
      port: 5000
      sync: false
      async: false
  edges:
    src: mux
    mux: infer
    infer: osd
    osd: tee
    tee: [queue1, queue2]
    queue1: sink3
    queue2: nvvidconv
    nvvidconv: encoder
    encoder: rtppay
    rtppay: sink2
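
For reference, the UDP sink bin in deepstream_sink_bin.c also inserts an I420 caps filter between the video converter and the encoder. A hedged sketch of that extra node in this YAML style (assuming a capsfilter's caps property can be given as a plain string here; the node name enc_caps is made up):

```yaml
# Hypothetical extra node between nvvidconv and encoder (name assumed):
- type: capsfilter
  name: enc_caps
  properties:
    caps: video/x-raw(memory:NVMM), format=I420
```

The edges would then become nvvidconv: enc_caps and enc_caps: encoder instead of nvvidconv: encoder.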

The pipeline gets stuck after displaying the first frame, so something is clearly wrong. How should I modify it?
My command was export GST_DEBUG=3 && ./my-deepstream-app, and the output is as follows:

0:00:10.855621383  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:10.855682458  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:10.855737089  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:10.855819347  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:10.855899897  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:10.855960245  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:10.856023907  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:10.856106552  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.856172657  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat Y444
0:00:10.856267144  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.856326005  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat Y444
0:00:10.856397060  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.856992810  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat P410
0:00:10.857057131  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.857202295  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat P410
0:00:10.857297812  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.857380289  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat PM10
0:00:10.857439668  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.857493495  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat PM10
0:00:10.857561602  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.857617499  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:10.857684445  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:10.857738454  3566 0x71d410002590 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12
*** Inside cb_newpad name=video/x-raw
0:00:10.977530091  3566 0x71d410002590 WARN            v4l2videodec gstv4l2videodec.c:2311:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:10.980002465  3566 0x71d410002590 WARN          v4l2bufferpool gstv4l2bufferpool.c:1116:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:10.984760466  3566 0x71d410003470 WARN          v4l2bufferpool gstv4l2bufferpool.c:1567:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:12.183570806  3566 0x71d410001460 WARN          v4l2bufferpool gstv4l2bufferpool.c:1116:gst_v4l2_buffer_pool_start:<encoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:12.357048840  3566 0x71d410003870 WARN          v4l2bufferpool gstv4l2bufferpool.c:1567:gst_v4l2_buffer_pool_dqbuf:<encoder:pool:src> Driver should never set v4l2_buffer.field to ANY

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

and ./my-deepstream-app’s output log:

Element: nvurisrcbin, Name: src
  set uri: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
Add Element ... src
Element: nvstreammux, Name: mux
  set batch-size: 1
  set width: 1280
  set height: 720
Add Element ... mux
Element: nvinferbin, Name: infer
  set config-file-path: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml
Add Element ... infer
Element: nvosdbin, Name: osd
Add Element ... osd
Element: tee, Name: tee
Add Element ... tee
Element: queue, Name: queue1
Add Element ... queue1
Element: queue, Name: queue2
Add Element ... queue2
Element: fakesink, Name: sink1
Add Element ... sink1
Element: nveglglessink, Name: sink3
Add Element ... sink3
Element: nvvideoconvert, Name: nvvidconv
Add Element ... nvvidconv
Element: nvv4l2h264enc, Name: encoder
  set bitrate: 4000
Add Element ... encoder
Element: rtph264pay, Name: rtppay
  set pt: 96
  set config-interval: 1
Add Element ... rtppay
Element: udpsink, Name: sink2
  set host: 127.0.0.1
  set port: 5000
  set sync: false
  set async: false
Add Element ... sink2
LINKING: Source: src Target: mux
0:00:00.131680043  3882 0x56ad2dd3af00 ERROR            nvstreammux gstnvstreammux.cpp:1611:gst_nvstreammux_request_new_pad:<mux> Pad should be named 'sink_%u' when requesting a pad
LINKING: Source: mux Target: infer
LINKING: Source: infer Target: osd
LINKING: Source: osd Target: tee
LINKING: Source: tee Target: queue1
LINKING: Source: tee Target: queue2
LINKING: Source: queue1 Target: sink3
LINKING: Source: queue2 Target: nvvidconv
LINKING: Source: nvvidconv Target: encoder
LINKING: Source: encoder Target: rtppay
LINKING: Source: rtppay Target: sink2
0:00:08.650949653  3882 0x7407c511e2b0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:08.854555003  3882 0x7407c511e2b0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
0:00:08.864255680  3882 0x7407c511e2b0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<nvinfer_bin_nvinfer> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml sucessfully
Event Thread Enabled...
Main Loop Running...
*** Inside cb_newpad name=video/x-raw

I also tried the following configuration:

deepstream:
  nodes:
  - type: nvurisrcbin
    name: src
    properties:
      uri: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
  - type: nvstreammux
    name: mux
    properties:
      batch-size: 1
      width: 1280
      height: 720
  - type: nvinferbin
    name: infer
    properties:
      config-file-path: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml
  - type: nvosdbin
    name: osd
  - type: tee
    name: tee
  - type: queue
    name: queue1
  - type: queue
    name: queue2
  - type: fakesink
    name: sink1
  - type: nveglglessink
    name: sink3
  # - type: nvvideoconvert
  #   name: nvvidconv
  # - type: nvv4l2h264enc
  #   name: encoder
  #   properties:
  #     bitrate: 4000
  # - type: rtph264pay
  #   name: rtppay
  #   properties:
  #     pt: 96
  #     config-interval: 1
  # - type: udpsink
  #   name: sink2
  #   properties:
  #     host: 127.0.0.1
  #     port: 5000
  #     sync: false
  #     async: false
  - type: nvmultistreamtiler
    name: tiler
    properties:
      width: 1280
      height: 720

  - type: nvrtspoutsinkbin
    name: sink4
  edges:
    src: mux
    mux: infer
    infer: osd
    osd: tee
    tee: [queue1, queue2]
    queue1: sink3
    queue2: tiler
    tiler: sink4
    # nvvidconv: encoder
    # encoder: rtppay
    # rtppay: sink2

But the result is:

Element ---- nvmultistreamtiler RefName tiler
width:1280
height:720
Element ---- nvrtspoutsinkbin RefName sink4
Element: nvurisrcbin, Name: src
  set uri: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
Add Element ... src
Element: nvstreammux, Name: mux
  set batch-size: 1
  set width: 1280
  set height: 720
Add Element ... mux
Element: nvinferbin, Name: infer
  set config-file-path: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml
Add Element ... infer
Element: nvosdbin, Name: osd
Add Element ... osd
Element: tee, Name: tee
Add Element ... tee
Element: queue, Name: queue1
Add Element ... queue1
Element: queue, Name: queue2
Add Element ... queue2
Element: fakesink, Name: sink1
Add Element ... sink1
Element: nveglglessink, Name: sink3
Add Element ... sink3
Element: nvmultistreamtiler, Name: tiler
  set width: 1280
  set height: 720
Add Element ... tiler
Element: nvrtspoutsinkbin, Name: sink4
Add Element ... sink4
LINKING: Source: src Target: mux
LINKING: Source: mux Target: infer
LINKING: Source: infer Target: osd
LINKING: Source: osd Target: tee
LINKING: Source: tee Target: queue1
LINKING: Source: tee Target: queue2
LINKING: Source: queue1 Target: sink3
LINKING: Source: queue2 Target: tiler
LINKING: Source: tiler Target: sink4

 *** sink4: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

0:00:07.164143078 10916 0x768c9d1bfae0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:612 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:07.294324376 10916 0x768c9d1bfae0 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer_bin_nvinfer> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
0:00:07.301964328 10916 0x768c9d1bfae0 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<nvinfer_bin_nvinfer> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml sucessfully
Event Thread Enabled...
Main Loop Running...
*** Inside cb_newpad name=video/x-raw

It still gets stuck at the first frame. What can I do?

The Service Maker sample applications just show how to use the Service Maker APIs to generate pipelines. You can use the Service Maker APIs to build a UDP output pipeline with “udpsink”, and then use the “GstRTSPServer” APIs to serve an RTSP stream from that UDP stream. There is a GstRTSPServer API usage sample in the deepstream-app source code: /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_sink_bin.c

@Fiona.Chen Service Maker is a major simplification over the DeepStream C APIs, and driving everything from YAML simplifies things even further. So what you mean is that Service Maker is not mature yet, and it is recommended to use the original C code instead, right?

As explained in What is Deepstream Service Maker — DeepStream documentation 6.4 documentation, Service Maker is a new set of APIs to simplify the development of DeepStream applications. It is in alpha now. The RTSP server part is not implemented in Service Maker yet. You can use Service Maker to generate the UDP stream and use the “GstRTSPServer” APIs to generate an RTSP stream from that UDP stream. The two approaches do not conflict.

There is an “nvrtspoutsinkbin” element in DeepStream; you can use it as the sink in your pipeline to output an RTSP stream.

@Fiona.Chen
In the config above I already used - type: nvrtspoutsinkbin with name: sink4. Although it prints the RTSP stream address, playback is still stuck. My complete config is as follows:

deepstream:
  nodes:
  - type: nvurisrcbin
    name: src
    properties:
      uri: file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
  - type: nvstreammux
    name: mux
    properties:
      batch-size: 1
      width: 1280
      height: 720
  - type: nvinferbin
    name: infer
    properties:
      config-file-path: /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml
  - type: nvosdbin
    name: osd
  - type: tee
    name: tee
  - type: queue
    name: queue1
  - type: queue
    name: queue2
  - type: fakesink
    name: sink1
  - type: nvmultistreamtiler
    name: tiler
    properties:
      width: 1280
      height: 720

  - type: nvrtspoutsinkbin
    name: sink4
  edges:
    src: mux
    mux: infer
    infer: osd
    osd: tee
    tee: [queue1, queue2]
    queue1: sink3
    queue2: tiler
    tiler: sink4


It works. I tried the attached config with the /opt/nvidia/deepstream/deepstream/service-maker/sources/apps/deepstream_test1_app sample:
dstest1_config.yaml (2.0 KB)

thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.