Problem in RTSP sink options

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1.1
• TensorRT Version: 8.5.2.2
• Issue Type (questions, new requirements, bugs): Problem/Question
• How to reproduce the issue? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

I’m constructing a pipeline that takes input from an RTSP camera, runs inference with the PeopleNet model, and then displays the processed frames. For that, I use the deepstream-app -c command, roughly as follows (the config file name here is just illustrative):
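deepstream-app -c people_counting_config.txt

This is the pipeline config file: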

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=0
rows=1
columns=1
width=640
height=360

gpu-id=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 
type=2
num-sources=1
uri=file:///home/edgekit/Documents/edgekit-people-counting-demo/graphics/input_resized.mp4
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#camera-width=1920
#camera-height=1080
uri=rtsp://192.168.178.139:554/stream0
#video-format=RGBA

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=40000
live-source=1
## Set muxer output width and height
width=960
height=540

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=1
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
# process mode 0=CPU 1=GPU
process-mode=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
display-bbox=1
display-mask=0
font=Arial

[videoconvert]
enable=0
src_crop = "0:0:1920:1080"  
dest_crop = "0:270:960:540" 

[primary-gie]
enable=1
#(0): nvinfer; (1): nvinferserver
plugin-type=0
gpu-id=0
# Modify as necessary
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=./config_infer_primary_peoplenet.txt
#config-file=triton/config_infer_primary_peoplenet.txt
#config-file=triton-grpc/config_infer_primary_peoplenet.txt


[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=1
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
#udp-buffer-size= 1000
output-file=../../graphics/output_ds6.mp4
#source-id=0 # Enabling this prevents the output file from being created

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=1
sync=0
bitrate=5120000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=554
udp-port=8091


[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_tracker.so #Did not work with any .yml configuration file
#ll-config-file required to set different tracker types
#ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_IOU.yml            #Each detected person has a new id  
ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml      #Keeps tracking the person if they leave the frame and come back
#ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml #Doesn't work 
#ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml     #Doesn't work
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[nvds-analytics]
enable=0
config-file=nvdsanalytics_config.txt


[ds-example]
enable=1
processing-width=960
processing-height=540
full-frame=0
gpu-id=0
unique-id=3
blur-objects=1

And this is the output log:

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:05.144446701 215711 0xaaab0889d550 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/edgekit/Documents/edgekit-people-counting-demo/deepstream_demo/models/deployable_quantized_v2.5/resnet34_peoplenet_int8.etlt.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 12x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 3x34x60         

ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd:0
0:00:05.348068069 215711 0xaaab0889d550 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer 'output_bbox/BiasAdd:0' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid:0
0:00:05.348123430 215711 0xaaab0889d550 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1876> [UID = 1]: Could not find output layer 'output_cov/Sigmoid:0' in engine
0:00:05.348146662 215711 0xaaab0889d550 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/edgekit/Documents/edgekit-people-counting-demo/deepstream_demo/models/deployable_quantized_v2.5/resnet34_peoplenet_int8.etlt.engine
0:00:05.357061947 215711 0xaaab0889d550 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/edgekit/Documents/edgekit-people-counting-demo/deepstream_demo/config/./config_infer_primary_peoplenet.txt sucessfully

Runtime commands:
        h: Print this help
        q: Quit

        p: Pause
        r: Resume


**PERF:  FPS 0 (Avg)
**PERF:  0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready

** ERROR: <cb_newpad3:510>: Failed to link depay loader to rtsp src
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:225>: Pipeline running

**PERF:  26.23 (23.40)
**PERF:  25.45 (25.52)
**PERF:  24.67 (24.83)
**PERF:  25.38 (25.18)
**PERF:  24.38 (24.90)
**PERF:  25.51 (24.92)

Although the pipeline is running, no display window pops up, and the output MP4 file is not correctly encoded (though it does contain data).

Thank you in advance!

This error log says your RTSP source cannot read data properly.
You can try adding select-rtp-protocol in the source group, e.g. select-rtp-protocol=0 or select-rtp-protocol=4.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html
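For example, in your [source0] group it would look roughly like this (the value meanings are per the reference-app documentation linked above):

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.178.139:554/stream0
#RTP transport: 0=UDP + UDP-multicast + TCP fallback, 4=TCP only
select-rtp-protocol=4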

Another question: using the software encoder will consume a lot of CPU. Are you sure you want to use the software encoder?

Thank you for your first response!

  1. I tried select-rtp-protocol with both values (0 and 4), and the error message is still there. I think the pipeline first tries to connect to some default configuration (IP address) and then switches to the actual one. (This error doesn’t affect the outcome of the pipeline.)

  2. I am using enc-type=1 (software) because the Jetson Orin Nano does not have a hardware encoder engine.

  3. It turns out that this configuration works fine; the problem was with OpenGL, since I am connecting to the Jetson via SSH (so there is no display for the EGL window).
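For reference, one common workaround when a monitor is physically attached to the Jetson is to point the app at the Jetson's local X display before launching it over SSH (a general X11 workaround, not specific to DeepStream; the config file name is illustrative):

# run over SSH, but open the EGL window on the monitor attached to the Jetson
export DISPLAY=:0
deepstream-app -c people_counting_config.txt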

Thank you,
Ahmed

If you do not have a monitor plugged in, you can first set enable=0 for the EglSink ([sink0]).
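In the config that would be something like this (the file sink [sink1] can stay enabled so the MP4 output is still written):

[sink0]
#Type - 1=FakeSink 2=EglSink 3=File
type=2
#Disable the on-screen EGL window when running headless / over SSH
enable=0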

Has your current problem been solved?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.