Is it correct that DeepStream Python bindings cannot be used to analyze RTP video streams?

As the title implies: is it correct that DeepStream Python bindings cannot be used to analyze RTP video streams?

Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.216-tegra
jtop:
 - Version: 4.2.12
 - Service: Active
Libraries:
 - CUDA: 11.4.315
 - cuDNN: 8.6.0.166
 - TensorRT: 8.5.2.2
 - VPI: 2.4.8
 - OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3

Python Environment:
Python 3.8.10
    GStreamer:                   YES (1.16.3)
  NVIDIA CUDA:                   YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
        OpenCV version: 4.9.0  CUDA True
          YOLO version: 8.3.33
         Torch version: 2.1.0a0+41361538.nv23.06
   Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8

What do you mean by "DeepStream Python bindings cannot be used to analyze RTP video streams"? Could you share your use scenario? Many source plugins, such as rtspsrc, nvurisrcbin, and uridecodebin, can be used to receive RTP video streams.
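
For example, here is a minimal sketch (the rtsp:// URI is a placeholder, not from this thread) of the Python bindings consuming an RTP-carried stream through uridecodebin:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    # parse_launch resolves uridecodebin's dynamic source pad with a delayed link
    pipeline = Gst.parse_launch(
        "uridecodebin uri=rtsp://127.0.0.1:8554/test ! nvvideoconvert ! fakesink"
    )
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()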

We are using it for real-time FPV video footage, which is time-sensitive.

Currently, I have found that the GStreamer pipeline over RTP is slow, and there is no such demo in the DeepStream Python apps repo.

So I'm asking whether the DeepStream framework is suitable for an RTP video source.

Logically speaking, it should work with an RTP source. But I don't know whether some pipeline configuration is wrong, or something else.

Please share your experience and info with me.

You can use gst-launch to debug first; then you only need to replace the decoding part in your Python code with "gst-launch-1.0 -v udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder". Please refer to the following sample command.

gst-launch-1.0 -v  udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 nvbuf-memory-type=3 ! nvinfer config-file-path=./ds_image_meta_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./out.mp4
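
In Python, that command maps onto Gst.parse_launch almost verbatim. A minimal sketch, assuming the same stream and config file as the command above, with the encode-to-file tail swapped for an nv3dsink display sink and nvbuf-memory-type left at its default:

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.parse_launch(
        "udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! "
        "rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 "
        "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
        "nvinfer config-file-path=./ds_image_meta_pgie_config.txt ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
        "nvdsosd ! nvvideoconvert ! nv3dsink sync=0"
    )
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    # quit the loop on EOS or error instead of hanging forever
    bus.connect("message::eos", lambda *args: loop.quit())
    bus.connect("message::error", lambda *args: loop.quit())
    pipeline.set_state(Gst.State.PLAYING)
    loop.run()
    pipeline.set_state(Gst.State.NULL)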

@fanzh Thank you.

I just wrote a Python GStreamer framework and ran into some difficulties here: Gst-launch-1.0 only get 15FPS on jetson orin?

I hope to solve this first and implement DeepStream later.

@fanzh Now I have the GStreamer pipeline working at 1080p@60FPS. Let's take the H265 pipeline for example:

RTP → rtph265depay → h265parse → nvv4l2decoder → nv3dsink

What I need to do is insert the DeepStream elements into the pipeline as follows:

RTP → rtph265depay → h265parse → nvv4l2decoder → nvstreammux → nvinfer → nvvideoconvert → nvdsosd → nv3dsink

Sorry, I don't know how to insert these into my code or how to configure those elements.

From deepstream-test1, it seems the app has a code sequence that configures the pipeline.

So does this mean I have to configure the pipeline in the build_pipeline function?

If you are using Gst.parse_launch to start the pipeline, please refer to my sample DeepStream inference pipeline posted on Dec 3.
If you are not using Gst.parse_launch, please refer to deepstream_test_1.py: you need to create all the plugins, link them, and then call set_state to start the pipeline.
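
A condensed sketch of that second approach, assuming the udpsrc/H265 input from this thread (the config file name is illustrative):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.Pipeline()

    def make(factory, name):
        # create an element, fail loudly if the plugin is missing, add it to the pipeline
        elem = Gst.ElementFactory.make(factory, name)
        if not elem:
            raise RuntimeError("failed to create " + factory)
        pipeline.add(elem)
        return elem

    src = make("udpsrc", "src")
    src.set_property("port", 5600)
    caps = make("capsfilter", "caps")
    caps.set_property("caps", Gst.Caps.from_string(
        "application/x-rtp,encoding-name=H265,payload=96"))
    depay = make("rtph265depay", "depay")
    parser = make("h265parse", "parser")
    decoder = make("nvv4l2decoder", "decoder")
    mux = make("nvstreammux", "mux")
    mux.set_property("batch-size", 1)
    mux.set_property("width", 1920)
    mux.set_property("height", 1080)
    pgie = make("nvinfer", "pgie")
    pgie.set_property("config-file-path", "config_infer_primary.txt")
    conv = make("nvvideoconvert", "conv")
    osd = make("nvdsosd", "osd")
    sink = make("nv3dsink", "sink")
    sink.set_property("sync", False)

    src.link(caps); caps.link(depay); depay.link(parser); parser.link(decoder)
    # nvstreammux uses request pads named sink_0, sink_1, ... for its inputs
    decoder.get_static_pad("src").link(mux.get_request_pad("sink_0"))
    mux.link(pgie); pgie.link(conv); conv.link(osd); osd.link(sink)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()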

Which sample is the Dec 3 one? Do you mean deepstream_test_3.py?
And isn't deepstream_test_3.py quite similar to deepstream_test_1.py?

EDIT1: I searched the git repo; there is no example of parse_launch in v1.1.8.

EDIT2: I tried the pipeline below, but it didn't work.

    def build_pipeline(self):
        """
        Builds the GStreamer pipeline based on the input codec and port,
        integrating NVIDIA DeepStream components.
        """
        if self.input_codec == "h264":
            pipeline_str = (
                f"udpsrc port={self.port} ! "
                "application/x-rtp,encoding-name=H264,payload=96 ! "
                "rtph264depay ! h264parse ! nvv4l2decoder ! "
                "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! "
                "nvstreammux name=mux batch-size=1 width=1920 height=1080 batched-push-timeout=4000000 ! "
                "nvinfer config-file-path=config_infer_primary.txt ! "
                "nvvideoconvert ! nvdsosd ! nv3dsink name=sink sync=0"
            )
        else:
            pipeline_str = (
                f"udpsrc port={self.port} ! "
                "application/x-rtp,encoding-name=H265,payload=96 ! "
                "rtph265depay ! h265parse ! nvv4l2decoder ! "
                "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! "
                "nvstreammux name=mux batch-size=1 width=1920 height=1080 batched-push-timeout=4000000 ! "
                "nvinfer config-file-path=config_infer_primary.txt ! "
                "nvvideoconvert ! nvdsosd ! nv3dsink name=sink sync=0"
            )
        print("Selected Pipeline:", pipeline_str)
        return Gst.parse_launch(pipeline_str)

It reports error:

$ python3 ./utils/deepstream.py 5000
Selected Pipeline: udpsrc port=5000 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! nvstreammux name=mux batch-size=1 width=1920 height=1080 batched-push-timeout=4000000 ! nvinfer config-file-path=config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nv3dsink name=sink sync=0
Traceback (most recent call last):
  File "./utils/deepstream.py", line 141, in <module>
    main()
  File "./utils/deepstream.py", line 137, in main
    streamer.run()
  File "./utils/deepstream.py", line 87, in run
    self.pipeline = self.build_pipeline()
  File "./utils/deepstream.py", line 67, in build_pipeline
    return Gst.parse_launch(pipeline_str)
gi.repository.GLib.Error: gst_parse_error: could not link nvvideoconvert0 to mux, mux can't handle caps video/x-raw(memory:NVMM), format=(string)NV12 (3)

I mean, please refer to this sample pipeline command-line.

gst-launch-1.0 -v  udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 nvbuf-memory-type=3 ! nvinfer config-file-path=./ds_image_meta_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./out.mp4

Is anything wrong with my system, or is some Python component not installed?

daniel@daniel-nvidia:~/Work/jetson-fpv-deepstream$ python3 ./utils/deepstream.py 5600 --input-codec=h265
Selected Pipeline: udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 nvbuf-memory-type=3 ! nvinfer config-file-path=./config_infer_primary.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./out.mp4

(python3:9209): GStreamer-CRITICAL **: 11:19:45.690: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
Traceback (most recent call last):
  File "./utils/deepstream.py", line 151, in <module>
    main()
  File "./utils/deepstream.py", line 147, in main
    streamer.run()
  File "./utils/deepstream.py", line 97, in run
    self.pipeline = self.build_pipeline()
  File "./utils/deepstream.py", line 77, in build_pipeline
    return Gst.parse_launch(pipeline_str)
gi.repository.GLib.Error: gst_parse_error: syntax error (0)
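
One likely cause of the syntax error: the single quotes around video/x-raw(memory:NVMM),format=RGBA are shell quoting for gst-launch and are not valid inside a Gst.parse_launch string. A hedged rewrite of the pipeline string, with the quotes dropped and the decoder linked straight into the streammux request pad as in the sample command:

    pipeline_str = (
        "udpsrc port=5600 ! "
        "application/x-rtp,encoding-name=H265,payload=96 ! "
        "rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 "
        "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
        "nvinfer config-file-path=./config_infer_primary.txt ! "
        # no shell quotes around the caps inside a parse_launch string
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
        "nvdsosd ! nv3dsink name=sink sync=0"
    )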



daniel@daniel-nvidia:~/Work/jetson-fpv-deepstream$ ls
2024-11-21_11-54-24.mp4  Clock_FPS_60_Recorded_by_Video_Viewer.mp4  deepstream_test_3_rtp.py  model   README.md  utils
2024-11-29_14-46-29.mp4  config_infer_primary.txt                   doc                       module  scripts    wrapper.sh
2024-11-29_14-53-41.mp4  deepstream.py                              LICENSE                   readme  test3.py




daniel@daniel-nvidia:~/Work/jetson-fpv-deepstream$ gst-launch-1.0 -v  udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 nvbuf-memory-type=3 ! nvinfer config-file-path=./config_infer_primary.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! qtmux ! filesink location=./out.mp4
WARNING: erroneous pipeline: no element "nvv4l2h264enc"

The Orin Nano does not support hardware encoding; please use software encoding instead. Please refer to the following command.

gst-launch-1.0  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4 ! qtdemux ! h264parse ! nvv4l2decoder  ! nvvideoconvert ! 'video/x-raw,format=I420' ! x264enc ! filesink location=test.264
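
One note for a time-sensitive FPV feed: x264enc buffers several frames with its default settings, so properties such as tune=zerolatency and speed-preset=ultrafast are usually worth setting when software encoding has to keep up in real time.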

The program is stuck here, and the output file out.mp4 is empty.

  1. use the dstest3_pgie_config.txt config file from deepstream-test3
  2. use software encoding
  3. execute the code from the dir: /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3
  4. we have just gotten deepstream-test3 working, but I don't know what's wrong with parse_launch

Is there any way to get the pipeline string from deepstream-test3?

daniel@daniel-nvidia:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3$ ls
config_infer_primary_peoplenet.txt  config_triton_grpc_infer_primary_peoplenet.txt  deepstream.py         deepstream_test_3_rtp.py       dstest3_pgie_config.txt
config.pbtxt                        config_triton_infer_primary_peoplenet.txt       deepstream_test_3.py  deepstream_test_3_rtp.py.orig  README



daniel@daniel-nvidia:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test3$ gst-launch-1.0 -v  udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 nvbuf-memory-type=3 ! nvinfer config-file-path=./dstest3_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! filesink location=./out.mp4
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test3/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.229059334 103410 0xaaab02cdb130 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test3/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.434058178 103410 0xaaab02cdb130 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test3/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.434189804 103410 0xaaab02cdb130 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:01:04.423319300 103410 0xaaab02cdb130 WARN                 nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:01:04.683676390 103410 0xaaab02cdb130 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:./dstest3_pgie_config.txt sucessfully
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = application/x-rtp, encoding-name=(string)H265, payload=(int)96, media=(string)video, clock-rate=(int)90000
/GstPipeline:pipeline0/GstRtpH265Depay:rtph265depay0.GstPad:src: caps = video/x-h265, stream-format=(string)byte-stream, alignment=(string)au
/GstPipeline:pipeline0/GstH265Parse:h265parse0.GstPad:sink: caps = video/x-h265, stream-format=(string)byte-stream, alignment=(string)au
/GstPipeline:pipeline0/GstRtpH265Depay:rtph265depay0.GstPad:sink: caps = application/x-rtp, encoding-name=(string)H265, payload=(int)96, media=(string)video, clock-rate=(int)90000
/GstPipeline:pipeline0/GstH265Parse:h265parse0.GstPad:src: caps = video/x-h265, stream-format=(string)byte-stream, alignment=(string)au, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, profile=(string)main, tier=(string)main, level=(string)5
NvMMLiteOpen : Block : BlockType = 279
NvMMLiteBlockCreate : Block : BlockType = 279
/GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0.GstPad:sink: caps = video/x-h265, stream-format=(string)byte-stream, alignment=(string)au, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true, profile=(string)main, tier=(string)main, level=(string)5
/GstPipeline:pipeline0/nvv4l2decoder:nvv4l2decoder0.GstPad:src: caps = video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)60/1, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/GstNvStreamMux:mux.GstPad:src: caps = video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, nvbuf-memory-type=(string)nvbuf-mem-cuda-unified, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1
/GstPipeline:pipeline0/GstNvInfer:nvinfer0.GstPad:src: caps = video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, nvbuf-memory-type=(string)nvbuf-mem-cuda-unified, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:src: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, format=(string)Y42B, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, batch-size=(int)1, num-surfaces-per-frame=(int)1, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
Redistribute latency...
/GstPipeline:pipeline0/GstX264Enc:x264enc0.GstPad:sink: caps = video/x-raw, width=(int)1920, height=(int)1080, framerate=(fraction)60/1, format=(string)Y42B, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, batch-size=(int)1, num-surfaces-per-frame=(int)1, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/GstNvDsOsd:nvdsosd0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, batch-size=(int)1, num-surfaces-per-frame=(int)1, format=(string)RGBA, block-linear=(boolean)false, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0.GstPad:sink: caps = video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, nvbuf-memory-type=(string)nvbuf-mem-cuda-unified, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1
/GstPipeline:pipeline0/GstNvInfer:nvinfer0.GstPad:sink: caps = video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, framerate=(fraction)60/1, nvbuf-memory-type=(string)nvbuf-mem-cuda-unified, gpu-id=(int)0, batch-size=(int)1, num-surfaces-per-frame=(int)1
/GstPipeline:pipeline0/GstNvStreamMux:mux.GstNvStreamPad:sink_0: caps = video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1920, height=(int)1080, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)60/1, nvbuf-memory-type=(string)nvbuf-mem-surface-array, gpu-id=(int)0
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4339: => Surface type not supported for transformation NVBUF_MEM_CUDA_UNIFIED



^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:03:33.788262119
Setting pipeline to NULL ...
Freeing pipeline ..

Please refer to this link for how to dump the pipeline graph in Python code, then compare the working and non-working pipelines.
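
For reference, a minimal sketch of dumping the graph from Python (videotestsrc stands in for the real source); GST_DEBUG_DUMP_DOT_DIR must be set before Gst.init() for the .dot file to be written:

    import os
    os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    pipeline = Gst.parse_launch("videotestsrc ! fakesink")
    pipeline.set_state(Gst.State.PLAYING)
    # writes /tmp/pipeline.dot; render it with: dot -Tpng /tmp/pipeline.dot -o pipeline.png
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "pipeline")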

Are there any links available for me to look into this pipeline graph, compared to:

gst-launch-1.0 -v udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 nvbuf-memory-type=3 ! nvinfer config-file-path=./dstest3_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! filesink location=./out.mp4

daniel@daniel-nvidia:~/Work/jetson-fpv$ cat /tmp/pipeline.dot
digraph pipeline {
  rankdir=LR;
  fontname="sans";
  fontsize="10";
  labelloc=t;
  nodesep=.1;
  ranksep=.2;
  label="<GstPipeline>\npipeline0\n[0]";
  node [style="filled,rounded", shape=box, fontsize="9", fontname="sans", margin="0.0,0.0"];
  edge [labelfontsize="6", fontsize="9", fontname="monospace"];

  legend [
    pos="0,0!",
    margin="0.05,0.05",
    style="filled",
    label="Legend\lElement-States: [~] void-pending, [0] null, [-] ready, [=] paused, [>] playing\lPad-Activation: [-] none, [>] push, [<] pull\lPad-Flags: [b]locked, [f]lushing, [b]locking, [E]OS; upper-case is set\lPad-Task: [T] has started task, [t] has paused task\l",
  ];
  subgraph cluster_nv3d_sink_0x27f22c30 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstNv3dSink\nnv3d-sink\n[0]\nparent=(GstPipeline) pipeline0\nmax-lateness=5000000\nprocessing-deadline=15000000\nwindow-x=0\nwindow-y=0\nwindow-width=1920\nwindow-height=1080";
    subgraph cluster_nv3d_sink_0x27f22c30_sink {
      label="";
      style="invis";
      nv3d_sink_0x27f22c30_sink_0x27e4c800 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    fillcolor="#aaaaff";
  }

  subgraph cluster_onscreendisplay_0x27e5bfa0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstNvDsOsd\nonscreendisplay\n[0]\nparent=(GstPipeline) pipeline0\nclock-font=NULL\nclock-font-size=0\nclock-color=0\nprocess-mode=MODE_CPU\ndisplay-mask=FALSE";
    subgraph cluster_onscreendisplay_0x27e5bfa0_sink {
      label="";
      style="invis";
      onscreendisplay_0x27e5bfa0_sink_0x27e4c360 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_onscreendisplay_0x27e5bfa0_src {
      label="";
      style="invis";
      onscreendisplay_0x27e5bfa0_src_0x27e4c5b0 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    onscreendisplay_0x27e5bfa0_sink_0x27e4c360 -> onscreendisplay_0x27e5bfa0_src_0x27e4c5b0 [style="invis"];
    fillcolor="#aaffaa";
  }

  onscreendisplay_0x27e5bfa0_src_0x27e4c5b0 -> queue5_0x275a4db0_sink_0x275a6e70 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l", headlabel="ANY"]
  subgraph cluster_convertor_0x27e4e2d0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="Gstnvvideoconvert\nconvertor\n[0]\nparent=(GstPipeline) pipeline0\nsrc-crop=\"0:0:0:0\"\ndest-crop=\"0:0:0:0\"";
    subgraph cluster_convertor_0x27e4e2d0_sink {
      label="";
      style="invis";
      convertor_0x27e4e2d0_sink_0x275a7c50 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_convertor_0x27e4e2d0_src {
      label="";
      style="invis";
      convertor_0x27e4e2d0_src_0x27e4c110 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    convertor_0x27e4e2d0_sink_0x275a7c50 -> convertor_0x27e4e2d0_src_0x27e4c110 [style="invis"];
    fillcolor="#aaffaa";
  }

  convertor_0x27e4e2d0_src_0x27e4c110 -> queue4_0x275a4ab0_sink_0x275a69d0 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-raw(memory:NVMM)\l              format: { (string)I420, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\lvideo/x-raw\l              format: { (string)I420, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l", headlabel="ANY"]
  subgraph cluster_nvtiler_0x27e39ea0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstNvMultiStreamTiler\nnvtiler\n[0]\nparent=(GstPipeline) pipeline0\nwidth=1280\nheight=720";
    subgraph cluster_nvtiler_0x27e39ea0_sink {
      label="";
      style="invis";
      nvtiler_0x27e39ea0_sink_0x275a77b0 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_nvtiler_0x27e39ea0_src {
      label="";
      style="invis";
      nvtiler_0x27e39ea0_src_0x275a7a00 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    nvtiler_0x27e39ea0_sink_0x275a77b0 -> nvtiler_0x27e39ea0_src_0x275a7a00 [style="invis"];
    fillcolor="#aaffaa";
  }

  nvtiler_0x27e39ea0_src_0x275a7a00 -> queue3_0x275a47b0_sink_0x275a6530 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l", headlabel="ANY"]
  subgraph cluster_primary_inference_0x27e178d0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstNvInfer\nprimary-inference\n[0]\nparent=(GstPipeline) pipeline0\nunique-id=1\nconfig-file-path=\"dstest3_pgie_config.txt\"\nmodel-engine-file=\"/home/daniel/Work/jetson-fpv/utils/deepstream/samples/models/Primary_Detector/…";
    subgraph cluster_primary_inference_0x27e178d0_sink {
      label="";
      style="invis";
      primary_inference_0x27e178d0_sink_0x275a7310 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_primary_inference_0x27e178d0_src {
      label="";
      style="invis";
      primary_inference_0x27e178d0_src_0x275a7560 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    primary_inference_0x27e178d0_sink_0x275a7310 -> primary_inference_0x27e178d0_src_0x275a7560 [style="invis"];
    fillcolor="#aaffaa";
  }

  primary_inference_0x27e178d0_src_0x275a7560 -> queue2_0x275a44b0_sink_0x275a6090 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l", headlabel="ANY"]
  subgraph cluster_queue5_0x275a4db0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstQueue\nqueue5\n[0]\nparent=(GstPipeline) pipeline0";
    subgraph cluster_queue5_0x275a4db0_sink {
      label="";
      style="invis";
      queue5_0x275a4db0_sink_0x275a6e70 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_queue5_0x275a4db0_src {
      label="";
      style="invis";
      queue5_0x275a4db0_src_0x275a70c0 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    queue5_0x275a4db0_sink_0x275a6e70 -> queue5_0x275a4db0_src_0x275a70c0 [style="invis"];
    fillcolor="#aaffaa";
  }

  queue5_0x275a4db0_src_0x275a70c0 -> nv3d_sink_0x27f22c30_sink_0x27e4c800 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="video/x-raw\l              format: { (string)RGBA, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\lvideo/x-raw(memory:NVMM)\l              format: { (string)RGBA, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l"]
  subgraph cluster_queue4_0x275a4ab0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstQueue\nqueue4\n[0]\nparent=(GstPipeline) pipeline0";
    subgraph cluster_queue4_0x275a4ab0_sink {
      label="";
      style="invis";
      queue4_0x275a4ab0_sink_0x275a69d0 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_queue4_0x275a4ab0_src {
      label="";
      style="invis";
      queue4_0x275a4ab0_src_0x275a6c20 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    queue4_0x275a4ab0_sink_0x275a69d0 -> queue4_0x275a4ab0_src_0x275a6c20 [style="invis"];
    fillcolor="#aaffaa";
  }

  queue4_0x275a4ab0_src_0x275a6c20 -> onscreendisplay_0x27e5bfa0_sink_0x27e4c360 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l"]
  subgraph cluster_queue3_0x275a47b0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstQueue\nqueue3\n[0]\nparent=(GstPipeline) pipeline0";
    subgraph cluster_queue3_0x275a47b0_sink {
      label="";
      style="invis";
      queue3_0x275a47b0_sink_0x275a6530 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_queue3_0x275a47b0_src {
      label="";
      style="invis";
      queue3_0x275a47b0_src_0x275a6780 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    queue3_0x275a47b0_sink_0x275a6530 -> queue3_0x275a47b0_src_0x275a6780 [style="invis"];
    fillcolor="#aaffaa";
  }

  queue3_0x275a47b0_src_0x275a6780 -> convertor_0x27e4e2d0_sink_0x275a7c50 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="video/x-raw(memory:NVMM)\l              format: { (string)I420, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\lvideo/x-raw\l              format: { (string)I420, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l"]
  subgraph cluster_queue2_0x275a44b0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstQueue\nqueue2\n[0]\nparent=(GstPipeline) pipeline0";
    subgraph cluster_queue2_0x275a44b0_sink {
      label="";
      style="invis";
      queue2_0x275a44b0_sink_0x275a6090 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_queue2_0x275a44b0_src {
      label="";
      style="invis";
      queue2_0x275a44b0_src_0x275a62e0 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    queue2_0x275a44b0_sink_0x275a6090 -> queue2_0x275a44b0_src_0x275a62e0 [style="invis"];
    fillcolor="#aaffaa";
  }

  queue2_0x275a44b0_src_0x275a62e0 -> nvtiler_0x27e39ea0_sink_0x275a77b0 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l"]
  subgraph cluster_queue1_0x275a41b0 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstQueue\nqueue1\n[0]\nparent=(GstPipeline) pipeline0";
    subgraph cluster_queue1_0x275a41b0_sink {
      label="";
      style="invis";
      queue1_0x275a41b0_sink_0x27567b00 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    subgraph cluster_queue1_0x275a41b0_src {
      label="";
      style="invis";
      queue1_0x275a41b0_src_0x27567d50 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    queue1_0x275a41b0_sink_0x27567b00 -> queue1_0x275a41b0_src_0x27567d50 [style="invis"];
    fillcolor="#aaffaa";
  }

  queue1_0x275a41b0_src_0x27567d50 -> primary_inference_0x27e178d0_sink_0x275a7310 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l"]
  subgraph cluster_source_bin_00_0x27568110 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstBin\nsource-bin-00\n[0]\nparent=(GstPipeline) pipeline0";
    subgraph cluster_source_bin_00_0x27568110_src {
      label="";
      style="invis";
      _proxypad0_0x2759e090 [color=black, fillcolor="#ffdddd", label="proxypad0\n[-][bfb]", height="0.2", style="filled,solid"];
    _proxypad0_0x2759e090 -> source_bin_00_0x27568110_src_0x2759c1b0 [style=dashed, minlen=0]
      source_bin_00_0x27568110_src_0x2759c1b0 [color=black, fillcolor="#ffdddd", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    fillcolor="#ffffff";
    subgraph cluster_nvv4l2decoder_0x27593d20 {
      fontname="Bitstream Vera Sans";
      fontsize="8";
      style="filled,rounded";
      color=black;
      label="nvv4l2decoder\nnvv4l2decoder\n[0]\nparent=(GstBin) source-bin-00\ndevice=\"/dev/nvhost-nvdec\"\ndevice-name=\"\"\ndrop-frame-interval=0\nnum-extra-surfaces=1\nmjpeg=TRUE";
      subgraph cluster_nvv4l2decoder_0x27593d20_sink {
        label="";
        style="invis";
        nvv4l2decoder_0x27593d20_sink_0x27567410 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      subgraph cluster_nvv4l2decoder_0x27593d20_src {
        label="";
        style="invis";
        nvv4l2decoder_0x27593d20_src_0x27567660 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      nvv4l2decoder_0x27593d20_sink_0x27567410 -> nvv4l2decoder_0x27593d20_src_0x27567660 [style="invis"];
      fillcolor="#aaffaa";
    }

    nvv4l2decoder_0x27593d20_src_0x27567660 -> _proxypad0_0x2759e090 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-raw(memory:NVMM)\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l", headlabel="ANY"]
    subgraph cluster_h265parse_0x2758b350 {
      fontname="Bitstream Vera Sans";
      fontsize="8";
      style="filled,rounded";
      color=black;
      label="GstH265Parse\nh265parse\n[0]\nparent=(GstBin) source-bin-00";
      subgraph cluster_h265parse_0x2758b350_sink {
        label="";
        style="invis";
        h265parse_0x2758b350_sink_0x27566f70 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      subgraph cluster_h265parse_0x2758b350_src {
        label="";
        style="invis";
        h265parse_0x2758b350_src_0x275671c0 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      h265parse_0x2758b350_sink_0x27566f70 -> h265parse_0x2758b350_src_0x275671c0 [style="invis"];
      fillcolor="#aaffaa";
    }

    h265parse_0x2758b350_src_0x275671c0 -> nvv4l2decoder_0x27593d20_sink_0x27567410 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-h265\l              parsed: true\l       stream-format: { (string)hvc1, (str... }\l           alignment: { (string)au, (strin... }\l", headlabel="image/jpeg\lvideo/x-h264\l       stream-format: { (string)byte-stream }\l           alignment: { (string)au }\lvideo/x-h265\l       stream-format: { (string)byte-stream }\l           alignment: { (string)au }\lvideo/mpeg\l         mpegversion: 4\l        systemstream: false\l              parsed: true\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\lvideo/mpeg\l         mpegversion: [ 1, 2 ]\l        systemstream: false\l              parsed: true\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\lvideo/x-divx\l         divxversion: [ 4, 5 ]\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\lvideo/x-av1\lvideo/x-vp8\lvideo/x-vp9\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l"]
    subgraph cluster_rtph265depay_0x27584150 {
      fontname="Bitstream Vera Sans";
      fontsize="8";
      style="filled,rounded";
      color=black;
      label="GstRtpH265Depay\nrtph265depay\n[0]\nparent=(GstBin) source-bin-00\nstats=application/x-rtp-depayload-stats, clock_rate=(uint)0, npt-start=(guint64)0, npt…";
      subgraph cluster_rtph265depay_0x27584150_sink {
        label="";
        style="invis";
        rtph265depay_0x27584150_sink_0x27566ad0 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      subgraph cluster_rtph265depay_0x27584150_src {
        label="";
        style="invis";
        rtph265depay_0x27584150_src_0x27566d20 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      rtph265depay_0x27584150_sink_0x27566ad0 -> rtph265depay_0x27584150_src_0x27566d20 [style="invis"];
      fillcolor="#aaffaa";
    }

    rtph265depay_0x27584150_src_0x27566d20 -> h265parse_0x2758b350_sink_0x27566f70 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-h265\l       stream-format: hvc1\l           alignment: au\lvideo/x-h265\l       stream-format: byte-stream\l           alignment: { (string)nal, (stri... }\l", headlabel="video/x-h265\l"]
    subgraph cluster_capsfilter_0x27576190 {
      fontname="Bitstream Vera Sans";
      fontsize="8";
      style="filled,rounded";
      color=black;
      label="GstCapsFilter\ncapsfilter\n[0]\nparent=(GstBin) source-bin-00\ncaps=application/x-rtp, encoding-name=(string)H265, payload=(int)96";
      subgraph cluster_capsfilter_0x27576190_sink {
        label="";
        style="invis";
        capsfilter_0x27576190_sink_0x27566630 [color=black, fillcolor="#aaaaff", label="sink\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      subgraph cluster_capsfilter_0x27576190_src {
        label="";
        style="invis";
        capsfilter_0x27576190_src_0x27566880 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      capsfilter_0x27576190_sink_0x27566630 -> capsfilter_0x27576190_src_0x27566880 [style="invis"];
      fillcolor="#aaffaa";
    }

    capsfilter_0x27576190_src_0x27566880 -> rtph265depay_0x27584150_sink_0x27566ad0 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="application/x-rtp\l               media: video\l          clock-rate: 90000\l       encoding-name: H265\l"]
    subgraph cluster_udpsrc_0x2756f900 {
      fontname="Bitstream Vera Sans";
      fontsize="8";
      style="filled,rounded";
      color=black;
      label="GstUDPSrc\nudpsrc\n[0]\nparent=(GstBin) source-bin-00\ndo-timestamp=TRUE\nport=5600\nuri=\"udp://0.0.0.0:5600\"";
      subgraph cluster_udpsrc_0x2756f900_src {
        label="";
        style="invis";
        udpsrc_0x2756f900_src_0x275663e0 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
      }

      fillcolor="#ffaaaa";
    }

    udpsrc_0x2756f900_src_0x275663e0 -> capsfilter_0x27576190_sink_0x27566630 [label="ANY"]
  }

  source_bin_00_0x27568110_src_0x2759c1b0 -> Stream_muxer_0x27564060_sink_0_0x275678b0 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="ANY", headlabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l"]
  subgraph cluster_Stream_muxer_0x27564060 {
    fontname="Bitstream Vera Sans";
    fontsize="8";
    style="filled,rounded";
    color=black;
    label="GstNvStreamMux\nStream-muxer\n[0]\nparent=(GstPipeline) pipeline0\nbatch-size=1\nbatched-push-timeout=4000000\nwidth=1920\nheight=1080\nlive-source=TRUE\nframe-duration=18446744073709";
    subgraph cluster_Stream_muxer_0x27564060_sink {
      label="";
      style="invis";
      Stream_muxer_0x27564060_sink_0_0x275678b0 [color=black, fillcolor="#aaaaff", label="sink_0\n[>][bfb]", height="0.2", style="filled,dashed"];
    }

    subgraph cluster_Stream_muxer_0x27564060_src {
      label="";
      style="invis";
      Stream_muxer_0x27564060_src_0x27566190 [color=black, fillcolor="#ffaaaa", label="src\n[-][bFb]", height="0.2", style="filled,solid"];
    }

    Stream_muxer_0x27564060_sink_0_0x275678b0 -> Stream_muxer_0x27564060_src_0x27566190 [style="invis"];
    fillcolor="#aaffaa";
  }

  Stream_muxer_0x27564060_src_0x27566190 -> queue1_0x275a41b0_sink_0x27567b00 [labeldistance="10", labelangle="0", label="                                                  ", taillabel="video/x-raw(memory:NVMM)\l              format: { (string)NV12, (str... }\l               width: [ 1, 2147483647 ]\l              height: [ 1, 2147483647 ]\l           framerate: [ 0/1, 2147483647/1 ]\l", headlabel="ANY"]
}
daniel@daniel-nvidia:~/Work/jetson-fpv$

EDIT: the above graph is from jetson-fpv (python3 ./utils/deepstream/deepstream.py -i rtp://@:5600), and the RTP source is H265, 1080p@60FPS on port 5600.

Please refer to this link for how to dump and view the pipeline graph.

Woooh, that really helps. But I still have some mapping concerns: I don't know how to map from the graph to Gst.parse_launch strings.

PS: the pipeline graph: [screenshots]

Please only remove nvbuf-memory-type=3 from the pipeline in your last comment; some memory types are not supported by NvBufSurfTransform. Here is my test:
log-12-10.txt (3.4 KB)
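
With that single change, the command from above would presumably become:

gst-launch-1.0 -v udpsrc port=5600 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=./dstest3_pgie_config.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! x264enc ! h264parse ! qtmux ! filesink location=./out.mp4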

Sorry for the late reply. Is this still a DeepStream issue that needs support? Thanks!

As I don't understand the above pipeline graph, it's quite difficult for me to tell whether it's OK or not. If you have more detailed docs or links, please send them. I'm interested to know why it's NOT working.

So I dropped the parse_launch approach and am using deepstream-test3 instead.

The plugins whose names include "nv" are DeepStream plugins; they all have documentation. Taking nvv4l2decoder as an example, please refer to this doc. You can also use "gst-inspect-1.0 nvv4l2decoder" to check the plugin's properties.