[TX2] deepstream 4.0 run failed with rtsp source

I have JetPack 4.2.1 flashed on my TX2, and DeepStream 4.0 is installed. I am trying to run the test below (the RTSP stream comes from a UNV camera):

deepstream-app -c source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt

The rtsp stream can be accessed using gst-launch-1.0:

gst-launch-1.0 playbin uri=rtsp://192.168.28.143

What I got is: App run failed

nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app$ deepstream-app -c source12_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx2.txt
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:163>: Pipeline ready

**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** ERROR: <cb_newpad3:396>: Failed to link depay loader to rtsp src
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:149>: Pipeline running

Creating LL OSD context new
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
reference in DPB was never decoded
gstnvtracker: NvBufSurfTransform failed with error -2 while converting buffergstnvtracker: Failed to convert input batch.
0:00:15.624922121 29812     0x3c35b850 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary_gie_classifier> error: Internal data stream error.
0:00:15.625098151 29812     0x3c35b850 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary_gie_classifier> error: streaming stopped, reason error (-5)
ERROR from tracking_tracker: Failed to submit input to tracker
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvtracker2/gstnvtracker.cpp(511): gst_nv_tracker_submit_input_buffer (): /GstPipeline:pipeline/GstBin:tracking_bin/GstNvTracker:tracking_tracker
ERROR from primary_gie_classifier: Internal data stream error.
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
streaming stopped, reason error (-5)
Quitting
gstnvtracker: NvBufSurfTransform failed with error -2 while converting buffergstnvtracker: Failed to convert input batch.
ERROR from tracking_tracker: Failed to submit input to tracker
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvtracker2/gstnvtracker.cpp(511): gst_nv_tracker_submit_input_buffer (): /GstPipeline:pipeline/GstBin:tracking_bin/GstNvTracker:tracking_tracker
App run failed
nvidia@nvidia-desktop:/opt/nvidia/deepstream/deepstream-4.0/samples/configs/deepstream-app$

Here is the config file:

# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=4
columns=3
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://192.168.28.143
num-sources=11
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=12
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b12_fp16.engine
batch-size=12
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt

[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

I noticed that there are similar cases in this forum:
https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5366807/#5366807
https://devtalk.nvidia.com/default/topic/1058356/deepstream-sdk/-jetson-tx2-help-rtsp-camera-and-usb-camera-did-not-work-in-deepstream4-0/

But neither thread has a solution. Any suggestions?

Hi,
Please refer to
https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5366807/#5366807

By default the config file runs 12 video decoding sources. Please try a single RTSP source first (see the sketch below).
By the way, please share the brand of your IP camera. Sony, DaHua, HikVision, or another brand?
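
To illustrate, a minimal sketch of the keys that would change for a single-source run of the same config (the URI is the one from the original post; everything else is assumed unchanged, and deepstream-app can rebuild the inference engine if the b12 engine no longer matches the smaller batch size):

[source0]
enable=1
type=4
uri=rtsp://192.168.28.143
num-sources=1

[streammux]
# match the number of sources
batch-size=1

[primary-gie]
# match the streammux batch size; the b12 engine will be rebuilt if needed
batch-size=1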

I disabled tiled-display and the tracker (as mentioned by others), and the app now runs successfully.
I actually mentioned the camera brand in the original post: it is a UNV camera. Maybe not so well known.
I am looking forward to you fixing this issue so that we can enable tiled-display and the tracker again.

As found in: https://docs.nvidia.com/metropolis/deepstream/4.0/DeepStream_Plugin_Manual.pdf

For RTSP streaming input, in the configuration file’s [streammux]
group, set live-source=1. Also make sure that all [sink#] groups have the
sync property set to 0.

This works for me.
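
For reference, applied to the config posted above, the change described in the manual would look roughly like this (only the affected keys shown, a sketch rather than an official patch):

[streammux]
# RTSP is a live source
live-source=1

[sink0]
# do not sync a live stream against the clock
sync=0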

Please apply the prebuilt libs and try again:
https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5369676/#5369676

Hello.
I used deepstream-app to run an RTSP stream on TX2, but the RTSP stream freezes and the app crashes after about 20 seconds. Do you know why?

I will try to help you later. I have developed some applications based on DeepStream and GStreamer since my last post. Stay tuned.

I haven't had time to test on TX2 yet. I tested it on x86, and it works with the source setting below:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.2.234:8554/6_20190612_0800.avi
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

Thanks luisyin.
I ported the deepstream-app program to ROS. I don't know if it is a problem with the DeepStream mechanism: there is no problem running RTSP with the stock deepstream-app code, but running RTSP in my ROS port crashes after 20 seconds. Running a video file works fine, which surprises me.

I see. I have no experience with ROS yet. Maybe you should build a plain GStreamer pipeline with RTSP as the source inside ROS, for debugging purposes; see the sketch below. By the way, what kind of RTSP source do you use? I suggest you try RTSP streams from an IP camera and from ffserver.
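
For example, a standalone pipeline like the one below can help rule out the RTSP/decode path before involving any ROS code (a sketch: it assumes the camera sends H.264 and that the Jetson nvv4l2decoder and nvoverlaysink elements are available; substitute your own RTSP URI and credentials):

gst-launch-1.0 rtspsrc location=rtsp://<camera-ip>/<path> latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvoverlaysink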

I use an RTSP stream from a HIKVISION camera. I solved the problem of the RTSP stream getting stuck: because I didn't know GStreamer well, I used GST_MAP_READWRITE in gst_buffer_map, which caused the error; it should be GST_MAP_READ. A very low-level mistake.
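
To make the fix concrete, here is a minimal sketch of mapping a buffer read-only inside a hypothetical pad probe (not the poster's actual ROS code; the point is passing GST_MAP_READ instead of GST_MAP_READWRITE when the data is only inspected):

#include <gst/gst.h>

/* Hypothetical pad probe callback that only inspects buffer contents. */
static GstPadProbeReturn
buffer_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map;

  /* Map read-only: requesting write access can fail on buffers that are
   * not writable (e.g. while other elements still hold references), which
   * can stall the stream. GST_MAP_READ is enough when only reading. */
  if (gst_buffer_map (buf, &map, GST_MAP_READ)) {
    g_print ("buffer of %" G_GSIZE_FORMAT " bytes\n", map.size);
    gst_buffer_unmap (buf, &map);
  }

  return GST_PAD_PROBE_OK;
}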