Using deepstream-app to set up an RTSP server fails at runtime with errors

Hardware and software environment:

Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.3 [L4T 35.5.0]
NV Power Mode[0]: 10W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0004
 - Module: NVIDIA Jetson Orin Nano (4GB ram)
Platform:
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.192-tegra
jtop:
 - Version: 4.3.1
 - Service: Active
Libraries:
 - CUDA: 11.4.315
 - cuDNN: 8.6.0.166
 - TensorRT: 8.5.2.2
 - VPI: 2.4.8
 - Vulkan: 1.3.204
 - OpenCV: 4.5.4 - with CUDA: NO
nvidia@nvidia-desktop:~$ deepstream-app --version
deepstream-app version 6.3.0
DeepStreamSDK 6.3.0

I need to push the video stream out over RTSP. I use deepstream-app for processing and debugging, but it always fails at runtime with these errors:

nvidia@nvidia-desktop:~/app/algorithm$ deepstream-app -c test_push_rtsp.txt 
** ERROR: <create_udpsink_bin:722>: create_udpsink_bin failed
** ERROR: <create_sink_bin:831>: create_sink_bin failed
** ERROR: <create_processing_instance:956>: create_processing_instance failed
** ERROR: <create_pipeline:1576>: create_pipeline failed
** ERROR: <main:697>: Failed to create pipeline
Quitting

App run failed

Here is my config file:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[source0]
enable=1
type=2
uri=rtsp://192.168.0.119:554
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[streammux]
gpu-id=0
batch-size=1
width=1920
height=1080
batched-push-timeout=40000

[primary-gie]
enable=0
gpu-id=0
config-file=/home/nvidia/app/algorithm/config_infer_primary_yolo11.txt
batch-size=1

[sink0]
enable=0
# ** type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=3
# ** (for FileSink) container 1=mp4 2=mkv;  codec 1=h264 2=h265
container=1
codec=1
# ** encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
# ** H264 Profile - 0=Baseline 2=Main 4=High
# ** H265 Profile - 0=Main 1=Main10
# ** set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=/home/nvidia/Code/DeepStream-Yolo/test_yolo11/ds_test2.mp4
source-id=0

[sink1]
enable=1
type=4
sync=0
gpu-id=0
bitrate=2000000
rtsp-port=8553
udp-port=5400
nvbuf-memory-type=0

I already installed GStreamer and its dependencies as the NVIDIA documentation requires, but the error still occurs when setting up the RTSP server.
I tried reinstalling DeepStream, but the error persists.

The Orin Nano does not support hardware encoding; please set enc-type=1.

Hi fanzh,
[sink0] is not in use; I set enable=0 there, and only [sink1] is enabled.

Similarly, please add enc-type=1 in [sink1]. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.yml.
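
For illustration (not from the original post), assuming the rest of [sink1] stays exactly as posted above, the section with software encoding selected would look like this; enc-type=1 picks the x264enc/x265enc software encoder:

[sink1]
enable=1
type=4
sync=0
gpu-id=0
# ** encoder type 0=Hardware 1=Software; the Orin Nano has no hardware encoder
enc-type=1
bitrate=2000000
rtsp-port=8553
udp-port=5400
nvbuf-memory-type=0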

Hi fanzh,
After referring to source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt and modifying my config accordingly, the RTSP server now works.
However, the delay from pulling the RTSP stream to re-streaming it exceeds 2 s. How can I configure it to reduce the delay? With a GStreamer command line I can pass parameters to each element, but I don't know how to do that in the deepstream-app configuration file.

  1. Please refer to this link for performance improvement.
  2. Do you mean that playing the output RTSP stream has more than 2 s of delay? If you use nv3dsink instead, is there still a 2 s delay? I am wondering whether it is related to the player.

Hi fanzh,
My testing process is as follows:
1. Get the RTSP media stream from the source.
2. Re-stream it through the DeepStream RTSP server.
3. Use GStreamer to receive the stream from step 2.

Steps 1 and 2 are set up in the deepstream-app configuration file. In step 3 I use GStreamer to receive and display the stream so I can compare the delay.

Could you provide more details on the points in my last comment?

  1. If you use nv3dsink instead of the RTSP output, is there still a 2 s delay? I am wondering whether it is related to the RTSP output and the player.
  2. After applying the methods in the link mentioned above, is there any improvement when using nv3dsink?

Hi fanzh,

I switched to testing with nv3dsink and the delay did not decrease.
Here is my config file:

################################################################################
# Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=2
uri=rtsp://192.168.0.119:554
#type=5
num-sources=1
gpu-id=0
cudadec-memtype=0
#camera-width=1280
#camera-height=720
#camera-fps-n=25
#camera-fps-d=1

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink/nv3dsink(Jetson only) 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=4
sync=1
width=1920
height=1080
#plane-id=1
#source-id=0
enc-type=1
codec=1
bitrate=2000000
rtsp-port=8555
udp-port=8555
iframeinterval=25

[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0

[streammux]
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary.txt

[tests]
file-loop=0

Here is the command I use to receive and display the output RTSP stream:

gst-launch-1.0 rtspsrc location=rtsp://192.168.0.13:8555/ds-test latency=100 ! rtph264depay ! nvv4l2decoder ! nv3dsink

This is the log after running the deepstream-app command:

nvidia@nvidia-desktop:~/app/algorithm$ deepstream-app -c test_qy.txt

 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8555/ds-test ***


Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:239>: Pipeline ready

** INFO: <bus_callback:225>: Pipeline running

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:225>: Pipeline running


**PERF:  FPS 0 (Avg)	
**PERF:  31.40 (31.34)	
**PERF:  30.03 (30.61)

To narrow down this issue, I meant setting nv3dsink (type=2) in [sink0]; I am wondering whether the delay is related to the RTSP output and GStreamer playback.

Hi fanzh,
I changed [sink0] to:

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink/nv3dsink(Jetson only) 3=File 4=RTSPStreaming 5=nvdrmvideosink
type=2
sync=1
width=1920
height=1080

The tested delay is only 300 to 350 ms.
I also have an NX2 NVIDIA development board here; setting enc-type=0 (hardware encoding) in [sink0] on that board, the tested delay is only about 400 ms.

From your test, the issue is related to the RTSP output and GStreamer playback, so let's get back to the original issue.
What kind of delay is it? Does it take too long for the video to appear at all when playing with GStreamer? If so, please refer to this topic: The output rtsp is too slow to play through the tool. Or, once the output video is visible in the GStreamer player, does it still have a 2 s delay compared with the original RTSP source?

Hi fanzh,
The Orin Nano does not support nvv4l2h264enc; only x264enc can be used for encoding. This is stated in the official NVIDIA documentation.

root@nvidia-desktop:~# gst-inspect-1.0 nvv4l2h264enc
No such element or plugin 'nvv4l2h264enc'

In the [sink0] configuration I set iframeinterval=25, but this does not help reduce the delay.

The problem in that link is very similar to what I am seeing: after running for a while, the measured RTSP delay exceeds 20 s.

If you are using software encoding, please refer to this topic.

Hi fanzh,
I modified deepstream_sink_bin.c according to the link you sent. After compiling deepstream-app successfully and running my own build, the delay is still quite high, more than 10 s.
I modified create_udpsink_bin() in deepstream_sink_bin.c as follows:

  if (config->enc_type == NV_DS_ENCODER_TYPE_SW) {
    //bitrate is in kbits/sec for software encoder x264enc and x265enc
    g_object_set (G_OBJECT (bin->encoder), "bitrate", config->bitrate / 1000,
        NULL);
    /* yhy add: cap the GOP length of the software encoder so keyframes arrive
     * more often and the player can start decoding sooner */
    g_object_set (G_OBJECT (bin->encoder), "key-int-max", 60, NULL);
  } else {
    g_object_set (G_OBJECT (bin->encoder), "bitrate", config->bitrate, NULL);
    g_object_set (G_OBJECT (bin->encoder), "profile", config->profile, NULL);
    g_object_set (G_OBJECT (bin->encoder), "iframeinterval",
        config->iframeinterval, NULL);
  }

I added g_object_set (G_OBJECT (bin->encoder), "key-int-max", 60, NULL); to the software-encoder branch.

When running deepstream-app, top shows that CPU consumption is very high:

top - 17:17:47 up 3 min,  1 user,  load average: 5.10, 1.53, 0.55
Tasks: 327 total,   1 running, 326 sleeping,   0 stopped,   0 zombie
%Cpu0  : 15.5 us, 16.4 sy, 60.5 ni,  1.6 id,  1.3 wa,  2.6 hi,  2.0 si,  0.0 st
%Cpu1  : 11.4 us,  4.7 sy, 81.3 ni,  1.3 id,  0.7 wa,  0.7 hi,  0.0 si,  0.0 st
%Cpu2  : 14.4 us,  3.7 sy, 78.3 ni,  2.0 id,  0.7 wa,  0.7 hi,  0.3 si,  0.0 st
%Cpu3  : 34.1 us,  3.0 sy, 59.2 ni,  1.0 id,  1.7 wa,  0.7 hi,  0.3 si,  0.0 st
%Cpu4  :  5.7 us,  2.7 sy, 82.6 ni,  2.3 id,  5.4 wa,  0.3 hi,  1.0 si,  0.0 st
%Cpu5  :  5.0 us,  3.7 sy, 83.1 ni,  5.0 id,  2.0 wa,  0.7 hi,  0.7 si,  0.0 st
MiB Mem :   3425.1 total,    105.4 free,   2467.8 used,    852.0 buff/cache
MiB Swap:   1712.6 total,   1581.8 free,    130.8 used.    677.5 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                             
   4293 nvidia    20   0 7369784 913520 110584 S 514.2  26.0   0:57.08 deepstream-app 

x264enc is a software encoder; it uses considerably more CPU.

  1. You may add g_object_set (G_OBJECT (bin->encoder), "speed-preset", 1, NULL), which will speed up encoding (see the sketch after the command below).
  2. Can you use the following command to play on the same machine that runs deepstream-app? I am wondering whether the delay is related to the network. sync=0 means the sink renders frames as soon as possible.
gst-launch-1.0 rtspsrc location=rtsp://192.168.0.13:8555/ds-test latency=100 ! rtph264depay ! nvv4l2decoder ! nv3dsink sync=0
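
For illustration, a minimal sketch of the software-encoder branch of create_udpsink_bin() with both changes applied: the key-int-max line from the earlier modification plus speed-preset, where the value 1 corresponds to x264enc's "ultrafast" preset. This is a hedged fragment that assumes the surrounding bin/config variables of that function, not the exact upstream code:

  if (config->enc_type == NV_DS_ENCODER_TYPE_SW) {
    /* bitrate is in kbits/sec for the software encoders x264enc and x265enc */
    g_object_set (G_OBJECT (bin->encoder), "bitrate", config->bitrate / 1000, NULL);
    /* shorter GOP: the player receives a keyframe sooner */
    g_object_set (G_OBJECT (bin->encoder), "key-int-max", 60, NULL);
    /* 1 = ultrafast: trades compression efficiency for encoding speed, lowering latency */
    g_object_set (G_OBJECT (bin->encoder), "speed-preset", 1, NULL);
  }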

Hi fanzh,
The speed-preset setting works. The RTSP streaming delay is now about 500 ms, which is a big improvement over before.

CPU info:

Tasks: 326 total,   2 running, 324 sleeping,   0 stopped,   0 zombie
%Cpu0  : 10.9 us, 10.9 sy,  8.7 ni, 65.2 id,  0.0 wa,  2.2 hi,  2.2 si,  0.0 st
%Cpu1  : 10.9 us, 10.9 sy, 17.4 ni, 60.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  : 14.9 us,  8.5 sy, 10.6 ni, 66.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  8.5 us, 25.5 sy, 12.8 ni, 53.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu4  : 17.4 us,  4.3 sy,  0.0 ni, 78.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu5  : 22.2 us,  2.2 sy,  4.4 ni, 68.9 id,  0.0 wa,  0.0 hi,  2.2 si,  0.0 st
MiB Mem :   3425.5 total,     83.3 free,   2761.0 used,    581.1 buff/cache
MiB Swap:   1712.7 total,    808.2 free,    904.5 used.    370.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                             
   7110 nvidia    20   0 7025920 373680 116092 S  84.8  10.7   3:24.10 deepstream-app

There are two remaining questions:
1. Can I set the address/URL of the RTSP server in the config file? The Orin Nano has four network interfaces, and I want to stream on a specific one.
2. Our goal is to get the delay under 300 ms. If that cannot be achieved on the Orin Nano, can an Orin NX be used?

  1. deepstream-app is open source. You can modify start_rtsp_streaming() in /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_sink_bin.c to customize this (a sketch follows below).
  2. From the performance table, the Orin NX supports hardware encoding and has better AI performance, which helps reduce the delay.
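
For illustration, one way to bind the server to a specific interface is gst_rtsp_server_set_address() from gst-rtsp-server. A minimal sketch, assuming the GstRTSPServer instance created inside start_rtsp_streaming(); the variable name and the address are illustrative, not the actual deepstream_sink_bin.c code:

#include <gst/rtsp-server/rtsp-server.h>

GstRTSPServer *server = gst_rtsp_server_new ();
/* rtsp-port from the [sink] config, passed as a string */
gst_rtsp_server_set_service (server, "8555");
/* bind only to the address of the chosen network interface instead of 0.0.0.0 */
gst_rtsp_server_set_address (server, "192.168.0.13");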
