RTSP source not displaying in Transfer Learning Toolkit

I’m trying to run my own model using the DetectNet_v2 sample on TLT (DeepStream 5.0).
I’ve already tested it on a recorded file and it works fine. Now I need to run it on an RTSP Hikvision camera.
This is the log of the deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt command:

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:03.427627997 17765      0xd805b80 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10_detector.trt
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x384x1248      
1   OUTPUT kFLOAT output_bbox/BiasAdd 8x24x78         
2   OUTPUT kFLOAT output_cov/Sigmoid 2x24x78         

ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: conv2d_bbox
0:00:03.427924011 17765      0xd805b80 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 1]: Could not find output layer 'conv2d_bbox' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: conv2d_cov/Sigmoid
0:00:03.427972462 17765      0xd805b80 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 1]: Could not find output layer 'conv2d_cov/Sigmoid' in engine
0:00:03.428003119 17765      0xd805b80 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10_detector.trt
0:00:03.528977845 17765      0xd805b80 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

** INFO: <bus_callback:181>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:167>: Pipeline running

reference in DPB was never decoded

**PERF:  FPS 0 (Avg)	
**PERF:  14.45 (14.17)	
**PERF:  12.02 (12.38)	
**PERF:  12.01 (12.21)	
**PERF:  11.97 (12.14)	
**PERF:  11.98 (12.11)	

The PERF counters keep printing like this indefinitely, BUT THE VIDEO DOESN’T SHOW ON SCREEN!

I have tried several things:

  • Disabling [tiled-display] and [tracker] and changing the sink to an EglSink, as this post suggests, with the same result (FPS values, but no display)

  • Running the gst-launch command that this post suggests:

    gst-launch-1.0 rtspsrc location=rtsp://xxxx@192.168.0.7/MPEG-4/ch1/main/av_stream ! rtph264depay ! queue ! h264parse ! nvv4l2decoder ! nvvideoconvert ! "video/x-raw(memory:NVMM),format=RGBA" ! nvegltransform ! nveglglessink sync=False 
    

with the following error:

  Setting pipeline to PAUSED ...
  Using winsys: x11 
  Opening in BLOCKING MODE 
  Pipeline is live and does not need PREROLL ...
  Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
  Progress: (open) Opening Stream
  Progress: (connect) Connecting to rtsp://admin:xxxx@192.168.0.7/MPEG-4/ch1/main/av_stream
  Progress: (open) Retrieving server options
  Progress: (open) Retrieving media info
  Progress: (request) SETUP stream 0
  Progress: (request) SETUP stream 1
  Progress: (open) Opened Stream
  Setting pipeline to PLAYING ...
  New clock: GstSystemClock
  Progress: (request) Sending PLAY request
  Progress: (request) Sending PLAY request
  Progress: (request) Sent PLAY request
  NvMMLiteOpen : Block : BlockType = 261 
  NVMEDIA: Reading vendor.tegra.display-size : status: 6 
  NvMMLiteBlockCreate : Block : BlockType = 261

  (gst-launch-1.0:19063): GStreamer-CRITICAL **: 17:01:20.895: gst_mini_object_unref: assertion 'mini_object != NULL' failed

  • Running a more familiar, simpler gst-launch command:

    gst-launch-1.0 rtspsrc location=rtsp://xxxx@192.168.0.7/MPEG-4/ch1/main/av_stream ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! nveglglessink sync=False
    

and this time the video displays correctly, so the camera itself IS working.
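One further check I could try (a sketch, assuming the same camera URL) is to keep nvv4l2decoder but throw the frames away with fakesink. If this pipeline runs without errors, the hardware decoder itself is fine and the failure is somewhere in the display path (nvvideoconvert / nvegltransform / nveglglessink):

```shell
# Sketch to separate hardware decode from display (assumes the same camera URL).
# fakesink discards every frame, so success here would mean nvv4l2decoder is OK
# and the problem lies in the EGL display elements.
PIPELINE='rtspsrc location=rtsp://xxxx@192.168.0.7/MPEG-4/ch1/main/av_stream ! rtph264depay ! h264parse ! nvv4l2decoder ! fakesink sync=false'
# Run on the Jetson; -v prints the caps negotiated at each link:
# gst-launch-1.0 -v $PIPELINE
echo "$PIPELINE"
```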
Any idea what I’m doing wrong?

Oh, also, this is my source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_tx1.txt config file:

  [tiled-display]
  enable=0
  rows=1
  columns=1
  width=1920
  height=1080
  gpu-id=0
  #(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
  #(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
  #(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
  #(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
  #(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
  nvbuf-memory-type=0
  
  [source0]
  enable=0
  #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
  type=2
  uri=file://../../streams/sample_1080p_h264.mp4
  num-sources=1
  #drop-frame-interval=2
  gpu-id=0
  # (0): memtype_device   - Memory type Device
  # (1): memtype_pinned   - Memory type Host Pinned
  # (2): memtype_unified  - Memory type Unified
  cudadec-memtype=0
  
  [source1]
  enable=1
  #Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
  type=4
  uri=rtsp://xxxx@192.168.0.7/MPEG-4/ch1/main/av_stream
  num-sources=1
  #drop-frame-interval=2
  gpu-id=0
  # (0): memtype_device   - Memory type Device
  # (1): memtype_pinned   - Memory type Host Pinned
  # (2): memtype_unified  - Memory type Unified
  cudadec-memtype=0
  
  [sink0]
  enable=1
  #Type - 1=FakeSink 2=EglSink 3=File
  type=2
  sync=0
  source-id=1
  gpu-id=0
  nvbuf-memory-type=0
  
  [sink1]
  enable=0
  type=3
  #1=mp4 2=mkv
  container=1
  #1=h264 2=h265
  codec=1
  sync=0
  #iframeinterval=10
  bitrate=2000000
  output-file=out.mp4
  source-id=0
  
  [sink2]
  enable=0
  #Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
  type=4
  #1=h264 2=h265
  codec=1
  sync=0
  bitrate=4000000
  # set below properties in case of RTSPStreaming
  rtsp-port=8554
  udp-port=5400
  
  [osd]
  enable=1
  gpu-id=0
  border-width=1
  text-size=15
  font=Serif
  show-clock=0
  nvbuf-memory-type=0
  
  [streammux]
  gpu-id=0
  ##Boolean property to inform muxer that sources are live
  live-source=1
  batch-size=1
  ##time out in usec, to wait after the first buffer is available
  ##to push the batch even if the complete batch is not formed
  batched-push-timeout=40000
  ## Set muxer output width and height
  width=1920
  height=1080
  ##Enable to maintain aspect ratio wrt source, and allow black borders, works
  ##along with width, height properties
  enable-padding=0
  nvbuf-memory-type=0
  
  # config-file property is mandatory for any gie section.
  # Other properties are optional and if set will override the properties set in
  # the infer config file.
  [primary-gie]
  enable=1
  gpu-id=0
  model-engine-file=../../models/Primary_Detector/resnet10_detector.trt
  batch-size=1
  #Required by the app for OSD, not a plugin property
  interval=0
  gie-unique-id=1
  nvbuf-memory-type=0
  config-file=config_infer_primary.txt
  
  [tracker]
  enable=0
  tracker-width=640
  tracker-height=368
  #ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
  #ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
  ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
  #ll-config-file required for DCF/IOU only
  #ll-config-file=tracker_config.yml
  #ll-config-file=iou_config.txt
  gpu-id=0
  #enable-batch-process applicable to DCF only
  enable-batch-process=1

According to your description, the TLT model works well with a recorded file, but nothing is displayed when you run it with an RTSP source, so this is not a TLT issue.
Please search for related information about “RTSP” on the DeepStream forum, or create a new topic there.
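One quick thing that may be worth checking before opening a new topic (an assumption on my side, not a confirmed diagnosis): deepstream-app indexes streams by their order among the enabled [sourceN] groups. Since your [source0] is disabled, the single enabled RTSP source may end up as stream 0, while your [sink0] has source-id=1. If that is the case, a sink group like the following would be worth trying:

```
# Assumption: with [source0] disabled, the enabled RTSP source is stream 0,
# so the render sink may need source-id=0 instead of 1.
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
```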