Pipeline works with gst-launch but the same pipeline fails with the GStreamer Python bindings

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) → GPU (T4)
• DeepStream Version → 5.1
• JetPack Version (valid for Jetson only) → N/A
• TensorRT Version → 7.2, patch 2, build 3
• NVIDIA GPU Driver Version (valid for GPU only) → 450.142.00
• Issue Type (questions, new requirements, bugs) → question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or sample application — and the function description.)

I have an RTSP pipeline that runs fine when I use the gst-launch command:

gst-launch-1.0 -e rtspsrc location=rtsp:// user-id=admin user-pw=<pass> ! rtph265depay ! h265parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_peoplenet.txt ! nvvideoconvert ! nvdsosd ! queue ! nvvideoconvert ! "video/x-raw, format=I420" ! avenc_mpeg4 ! mpeg4videoparse ! qtmux ! filesink location=out_rtsp.mp4
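As a side note: instead of hand-linking every element in Python, the exact same description string can be handed to `Gst.parse_launch`, which parses it with the same rules as `gst-launch-1.0`. This is only a sketch (not the attached script), and the `<RTSP_URL>` / `<pass>` placeholders are assumptions to be filled in:

```python
# Sketch: run the identical pipeline description from Python via Gst.parse_launch.
# <RTSP_URL> and <pass> are placeholders, not values from the original post.
PIPELINE_DESC = (
    "rtspsrc location=<RTSP_URL> user-id=admin user-pw=<pass> ! "
    "rtph265depay ! h265parse ! nvv4l2decoder ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_primary_peoplenet.txt ! "
    "nvvideoconvert ! nvdsosd ! queue ! nvvideoconvert ! "
    "video/x-raw,format=I420 ! avenc_mpeg4 ! mpeg4videoparse ! "
    "qtmux ! filesink location=out_rtsp.mp4"
)

# In the actual script (requires PyGObject and DeepStream installed):
#   import gi
#   gi.require_version("Gst", "1.0")
#   from gi.repository import Gst
#   Gst.init(None)
#   pipeline = Gst.parse_launch(PIPELINE_DESC)
#   pipeline.set_state(Gst.State.PLAYING)
```

This removes any chance of a mistake in the manual element creation/linking, since the description is parsed exactly as on the command line.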

But when I try to build the same pipeline using the Python API (file attached below), the code does not work.

Below is the console output:

Creating Pipeline

Creating Source

Creating h265depay

Creating H265Parser

Creating Decoder

Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
Creating capsfilter

Creating Encoder

Creating Code Parser

Creating Container

Creating Sink

Playing file
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:01.630087700 571 0x1987180 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 3x34x60

0:00:01.630179866 571 0x1987180 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.1/samples/models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine
0:00:01.631310904 571 0x1987180 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_peoplenet.txt sucessfully
Warning: gst-resource-error-quark: Could not read from resource. (9): gstrtspsrc.c(5427): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstRTSPSrc:rtsp-source:
Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
Error: gst-stream-error-quark: Internal data stream error. (1): gstrtspsrc.c(5653): gst_rtspsrc_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtsp-source:
streaming stopped, reason not-linked (-1)

rtsp_single.py (11.0 KB)

Sorry for the late response; we will investigate and post an update soon.


I have solved this. There was an issue with the password characters: it contained an '@', which I guess was causing the trouble.
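For anyone hitting the same thing: when credentials are embedded in the RTSP URL itself, an '@' in the password collides with the `user:pass@host` separator, so it has to be percent-encoded. A minimal sketch using only the standard library (the host, user, and password below are made up for illustration):

```python
from urllib.parse import quote

# Hypothetical credentials; an '@' in the password would otherwise be read
# as the userinfo/host separator in rtsp://user:pass@host/...
user = "admin"
password = "p@ss"

# Percent-encode every reserved character (safe="" turns '@' into '%40').
encoded = quote(password, safe="")

rtsp_url = f"rtsp://{user}:{encoded}@192.168.1.10:554/stream"
print(rtsp_url)  # rtsp://admin:p%40ss@192.168.1.10:554/stream
```

Alternatively, passing the credentials through rtspsrc's `user-id` and `user-pw` properties (as in the gst-launch command above) avoids embedding them in the URL at all.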