TX2 JetPack 4.2.2 DeepStream 4.0 can't show RTSP video

Hi,
Using the TX2 DS4.0 objectDetector_Yolo sample with an RTSP source, the terminal always shows **PERF: 0.00 (0.00), there are no video frames, and there are no warnings or errors.

But it works when I run the pipeline by hand:

gst-launch-1.0 rtspsrc location=rtsp://192.168.2.119/554 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvegltransform ! nveglglessink sync=False

The RTSP stream can also be played in VLC, and this project works fine with some other cameras.
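For comparison, deepstream-app builds its RTSP source around uridecodebin (it shows up as GstURIDecodeBin:src_elem in the error log further down), so a rough sketch of a more comparable manual test, reusing the same URI, would be:

gst-launch-1.0 uridecodebin uri=rtsp://192.168.2.119/554 ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvegltransform ! nveglglessink sync=False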

sudo ./deepstream-app -c ../../../objectDetector/deepstream_app_config_yoloV3.txt
py_init 
Opening in BLOCKING MODE 

Using winsys: x11 
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is ON
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvdcf/src/modules/NvDCF/NvDCF.cpp, NvDCF() @line 360]: !!![WARNING] Can't open config file (/home/nvidia/AI/DS7/sources/objectDetector/tracker_config.yml). Will go ahead with default values
~~ CLOG[/dvs/git/dirty/git-master_linux/deepstream/sdk/src/utils/nvdcf/src/modules/NvDCF/NvDCF.cpp, NvDCF() @line 372]: !!![WARNING] Invalid low-level config file is provided. Will go ahead with default values
[NvDCF] Initialized
Deserialize yoloLayerV3 plugin: yolo_83
Deserialize yoloLayerV3 plugin: yolo_95
Deserialize yoloLayerV3 plugin: yolo_107
cb_sourcesetup set 100 latency

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:163>: Pipeline ready

** INFO: <bus_callback:149>: Pipeline running

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	

**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)

My deepstream_app_config_yoloV3.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
# uri=file://../../samples/streams/sample_1080p_h264.mp4
#uri=rtsp://admin:admin@192.168.2.64:554/cam/realmonitor?channel=1&subtype=2
#uri=rtsp://192.168.2.119/554
#uri=udp://192.168.2.255:23003
uri=rtsp://admin:abc123456@192.168.2.100:554/stream0
#uri=rtsp://192.168.2.12:8554/vlc
# uri=rtsp://admin:admin@192.168.1.64:554/cam/realmonitor?channel=1&subtype=2
#uri=file://../../samples/streams/sample_1080p_h264.mp4
#uri=file://../../samples/streams/v1.mp4
# uri=file://../../samples/streams/DJI.mp4

num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=3
#1=mp4 2=mkv  3=mpeg4
container=2
#1=h264 2=h265
codec=2
sync=0
bitrate=10000000
#1=cbr 2=vbr
# rc-mode=2
iframeinterval=30
#1=baseline 2=main 3=high
#profile=3
#output-file=file:///home/nvidia/TX2/tensorRT/deepstream_reference_apps-AI/output/out.mp4
output-file=../../../../output/out.mp4
source-id=0

[osd]
enable=1
gpu-id=0
border-width=4
text-size=18
text-color=1;0;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=16
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=model_b1_fp16.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=19
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt

[tracker]
enable=1
tracker-width=1920
tracker-height=1080
gpu-id=0
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
ll-config-file=tracker_config.yml
enable-batch-process=1


[ds-example]
enable=1
# 640 480 1920 1080
processing-width=1920 
processing-height=1080 
full-frame=0
unique-id=15
gpu-id=0

[tests]
file-loop=0

If I remove the network switch and connect the TX2 directly to the camera, I get this error.

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:163>: Pipeline ready

** INFO: <bus_callback:149>: Pipeline running

**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
**PERF: 0.00 (0.00)	
ERROR from source: Could not open resource for reading and writing.
Debug info: gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstURIDecodeBin:src_elem/GstRTSPSrc:source:
Failed to connect. (Generic error)
py_finit 
Quitting
App run failed

Another camera, rtsp://192.168.2.100:554/stream0, shows no video with the same kind of pipeline; there are no video frames and no error:

gst-launch-1.0 rtspsrc location=rtsp://192.168.2.100:554/stream0 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvegltransform ! nveglglessink sync=False
Setting pipeline to PAUSED ...

Using winsys: x11 
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://192.168.2.100:554/stream0
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261
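
If more detail is needed, the same pipeline can be rerun with verbose output and a higher debug level for rtspsrc (plain GStreamer debugging, nothing DeepStream-specific), for example:

GST_DEBUG=3,rtspsrc:5 gst-launch-1.0 -v rtspsrc location=rtsp://192.168.2.100:554/stream0 ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvegltransform ! nveglglessink sync=False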

Hi,
Please check if you can run this case first.
If the test-launch RTSP source works, the issue should be specific to your IP camera.
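For example, with the gst-rtsp-server test-launch example built (the encoder below is just one option and assumes x264enc is available on your board), you can serve a local test stream:

./test-launch "( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )"

It is served at rtsp://127.0.0.1:8554/test by default, which you can set as the uri= in [source0].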

But it's OK in VLC and with test-launch, using the same RTSP address. How can I check what the issue is?

Hi,
Please confirm that you are using DS 4.0.1.

We have fixed some issues with launching RTSP sources.
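For example, the installed version can be checked with (the path assumes the default install location):

cat /opt/nvidia/deepstream/deepstream-4.0/version

or, if your build supports it, with deepstream-app --version-all.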

OK, thanks, I will try.