Issues with RTSP sink in deepstream-app when using -t

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU T4
• DeepStream Version: 5.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: TRT 7.0.0
• NVIDIA GPU Driver Version (valid for GPU only): 455.45.01
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

################################################################################
#
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=60
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=3
columns=3
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp:
gpu-id=0
rtsp-reconnect-interval-sec=60
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://
rtsp-reconnect-interval-sec=60
gpu-id=0
cudadec-memtype=0


[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://
rtsp-reconnect-interval-sec=60
gpu-id=0
cudadec-memtype=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp:
rtsp-reconnect-interval-sec=600000
gpu-id=0
cudadec-memtype=0


[source4]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://
rtsp-reconnect-interval-sec=60
gpu-id=0
cudadec-memtype=0


[source5]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://
rtsp-reconnect-interval-sec=60
gpu-id=0
cudadec-memtype=0

[source13]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://
rtsp-reconnect-interval-sec=60
gpu-id=0
cudadec-memtype=0


[source14]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://
rtsp-reconnect-interval-sec=60
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#1=mp4 2=mkv
#container=1
#1=h264 2=h265
#codec=1
#output-file=yolov4.mp4

#DEPLOYMENT WITH AZURE
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=naming_conv_iotedge.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_azure_edge_proto.so
#msg-broker-conn-str=localhost;5672;guest;guest
topic=mytopic

#RTSP output to see analytics
[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=nvoverlaysink 6=MsgConvBroker
type=4
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
codec=2
bitrate=4000000
iframeinterval=10
rtsp-port=8554
profile=0
udp-buffer-size=100000

[osd]
enable=1
gpu-id=0
border-width=2
text-size=12
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=2
batch-size=8
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
width=1280
height=720
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
labelfile-path=labels.txt
batch-size=8
force-implicit-batch-dim=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV4.txt

[tracker]
enable=1
tracker-width=640
tracker-height=480
gpu-id=0
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_nvdcf.so
ll-config-file=tracker_config.yml
enable-batch-process=1

[nvds-analytics]
enable=1
config-file=analytics_line.txt

I am running deepstream-app with the settings above. Without -t (to display info on the RTSP output) it works fine, but once I add that option the RTSP stream keeps flickering grey and stopping and starting on the T4. I have tried the same setup at another site with an RTX 2700 and it works fine there, no issues. Is there something extra I need to do for the T4? nvidia-smi shows the following:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.45.01    Driver Version: 455.45.01    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:3B:00.0 Off |                    0 |
| N/A   61C    P0    42W /  70W |   2794MiB / 15109MiB |     64%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1716      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A      3020      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A     25595      C   ...est/src/ddi-labs-hydrogen     2654MiB |
+-----------------------------------------------------------------------------+

Note: I downloaded CUDA 10.2, so ignore the CUDA 11.1 shown in nvidia-smi.

The GPU doesn’t appear to be at its limit or dropping frames, so I’m not sure what is going on.

The “-t” option will not impact the performance of deepstream-app.

Can you set the encoder to the hardware encoder explicitly with “enc-type” in [sink2]? And can you make sure that port 5000 (the default UDP port DeepStream uses if you do not set a udp port explicitly) is only used by one deepstream application for one stream?
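For reference, a minimal sketch of what that change to [sink2] could look like. The enc-type and udp-port values below are illustrative assumptions, not taken from the original config; check the sink group reference for your DeepStream version:

```ini
#RTSP output to see analytics
[sink2]
enable=1
type=4
sync=0
source-id=0
gpu-id=0
codec=2
bitrate=4000000
iframeinterval=10
rtsp-port=8554
profile=0
udp-buffer-size=100000
# Select the encoder explicitly: 0=Hardware (NVENC), 1=Software
enc-type=0
# Set the internal UDP port explicitly so this instance does not
# collide with another deepstream-app defaulting to port 5000
udp-port=5400
```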

I can not reproduce the problem with my T4 server.


Specifying the default udp port seems to have worked, thanks!
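In case anyone else hits this: the fix amounts to giving each deepstream-app instance (or each RTSP sink) its own UDP port so they do not all fall back to the default 5000. A sketch, with illustrative port numbers:

```ini
# Instance 1, in its [sink2] group:
udp-port=5400
rtsp-port=8554

# Instance 2, in its [sink2] group:
udp-port=5401
rtsp-port=8555
```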