RTSP stream: no object detection

Hi everybody,

Environment:
Jetson Nano
JetPack: 4.3
DeepStream: 4.0.2
Camera: LCAM0336OD

Issue:
DeepStream fails to detect objects on an RTSP stream: I get the video, but no detections. It works well with video files, but not with RTSP.

Run: $ deepstream-app -c rtsp_infer-resnet_tracker_fp16_nano.txt
Using winsys: x11
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:189>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:175>: Pipeline running

Creating LL OSD context new
KLT Tracker Init
**PERF: 37.60 (37.60)	
**PERF: 33.78 (34.53)	
**PERF: 30.00 (32.51)	
**PERF: 29.99 (31.73)	
**PERF: 30.00 (31.32)	
**PERF: 30.00 (31.07)	
**PERF: 30.00 (30.90)

But there are no detections. Here is my config:
$ cat rtsp_infer-resnet_tracker_fp16_nano.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1024
height=768
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://192.168.100.224/channel1
num-sources=8
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=8
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt

[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

I tried installing the new libs suggested in the post "RTSP camera access frame issue", which did not work, and also followed the suggestions in the post "RTSP stream not loading with deepstream-sdk 4.0 on jetson nano".

I also tested with YOLOv3-tiny; it does not work either.

$ ./prebuild.sh
$ export CUDA_VER=10.1
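The custom bounding-box parser referenced in the infer config below also has to be compiled; a minimal sketch of that step, assuming the stock objectDetector_Yolo sample layout:

$ make -C nvdsinfer_custom_impl_Yolo    # builds libnvdsinfer_custom_impl_Yolo.so using the exported CUDA_VER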

$ cat deepstream_app_config_yoloV3_tiny.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1024
height=768
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
#uri=file://../../samples/streams/sample_1080p_h264.mp4
uri=rtsp://192.168.100.224/channel1
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_fp32.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3_tiny.txt

[tests]
file-loop=0

$ cat config_infer_primary_yoloV3_tiny.txt

[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR, 2=GRAY
model-color-format=0
custom-network-config=yolov3-tiny.cfg
model-file=yolov3-tiny.weights
model-engine-file=model_b1_fp16.engine
#model-engine-file=model_b1_fp32.engine
labelfile-path=labels.txt
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=1
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

$ cat yolov3-tiny.cfg
[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=2
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=1

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

###########

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear



[yolo]
mask = 3,4,5
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=80
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 8

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear

[yolo]
mask = 0,1,2
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=4
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

Run: $ deepstream-app -c deepstream_app_config_yoloV3_tiny.txt

Using winsys: x11
Creating LL OSD context new
0:00:01.039496564  9413     0x3cbab8a0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
Loading pre-trained weights...
Loading complete!
Total Number of weights read : 8858734
      layer               inp_size            out_size       weightPtr
(1)   conv-bn-leaky     3 x 416 x 416      16 x 416 x 416    496
(2)   maxpool          16 x 416 x 416      16 x 208 x 208    496
(3)   conv-bn-leaky    16 x 208 x 208      32 x 208 x 208    5232
(4)   maxpool          32 x 208 x 208      32 x 104 x 104    5232
(5)   conv-bn-leaky    32 x 104 x 104      64 x 104 x 104    23920
(6)   maxpool          64 x 104 x 104      64 x  52 x  52    23920
(7)   conv-bn-leaky    64 x  52 x  52     128 x  52 x  52    98160
(8)   maxpool         128 x  52 x  52     128 x  26 x  26    98160
(9)   conv-bn-leaky   128 x  26 x  26     256 x  26 x  26    394096
(10)  maxpool         256 x  26 x  26     256 x  13 x  13    394096
(11)  conv-bn-leaky   256 x  13 x  13     512 x  13 x  13    1575792
(12)  maxpool         512 x  13 x  13     512 x  13 x  13    1575792
(13)  conv-bn-leaky   512 x  13 x  13    1024 x  13 x  13    6298480
(14)  conv-bn-leaky  1024 x  13 x  13     256 x  13 x  13    6561648
(15)  conv-bn-leaky   256 x  13 x  13     512 x  13 x  13    7743344
(16)  conv-linear     512 x  13 x  13     255 x  13 x  13    7874159
(17)  yolo            255 x  13 x  13     255 x  13 x  13    7874159
(18)  route                  -            256 x  13 x  13    7874159
(19)  conv-bn-leaky   256 x  13 x  13     128 x  13 x  13    7907439
(20)  upsample        128 x  13 x  13     128 x  26 x  26        -
(21)  route                  -            384 x  26 x  26    7907439
(22)  conv-bn-leaky   384 x  26 x  26     256 x  26 x  26    8793199
(23)  conv-linear     256 x  26 x  26     255 x  26 x  26    8858734
(24)  yolo            255 x  26 x  26     255 x  26 x  26    8858734
Output blob names :
yolo_17
yolo_24
Total number of layers: 50
Total number of layers on DLA: 0
Building the TensorRT Engine...
Building complete!
0:03:18.967494762  9609     0x20e568a0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /home/alex/dev/Deep-Stream-ONNX/sources/objectDetector_Yolo/model_b8_fp16.engine
Deserialize yoloLayerV3 plugin: yolo_17
Deserialize yoloLayerV3 plugin: yolo_24

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:189>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:175>: Pipeline running

Creating LL OSD context new
**PERF: 48.49 (48.49)	
**PERF: 53.51 (51.89)	
**PERF: 53.17 (52.41)	

It works with a file URI, but not with the RTSP stream.
I can see the video, but no inference is performed on it.

Thank you so much for your answer.
Best regards,
JAba

Hi,
Several settings in rtsp_infer-resnet_tracker_fp16_nano.txt look incorrect. They should be:


[source0]
num-sources=1

[primary-gie]
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b1_fp16.engine
batch-size=1

Please correct these and try again.
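One thing to note: the engine filename encodes the batch size (b1 vs. b8), so with batch-size=1 the app looks for a _b1_ engine, and if it cannot be deserialized nvinfer falls back to building a new engine from the model files on the first run. A quick check of which engines already exist (a sketch, using the model path from the config above):

$ ls -l ../../models/Primary_Detector_Nano/*.engine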

Hi Dane,

Thank you so much for your answer.

I corrected my settings, but I get the same result; it still does not work.
$ deepstream-app -c rtsp_infer-resnet_tracker_fp16_nano.txt

Using winsys: x11 
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF: FPS 0 (Avg)	
**PERF: 0.00 (0.00)	
** INFO: <bus_callback:189>: Pipeline ready

Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
** INFO: <bus_callback:175>: Pipeline running

Creating LL OSD context new
KLT Tracker Init
**PERF: 32.26 (32.26)	
**PERF: 30.01 (31.03)	

I should mention that the camera has a fisheye lens. I also tried to dewarp the stream with a gst-launch command:

gst-launch-1.0 rtspsrc location=rtsp://192.168.100.224/channel1 latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! nvdewarper configfile=config_dewarper.txt source-id=6 ! m.sink_0 nvstreammux name=m width=960 height=752 batch-size=4 num-surfaces-per-frame=4 ! nvmultistreamtiler ! nvegltransform ! nveglglessink
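As a sanity check, it may also help to verify the bare decode path outside deepstream-app; a minimal sketch using the same elements as above, with the dewarper removed:

gst-launch-1.0 rtspsrc location=rtsp://192.168.100.224/channel1 latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink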

I also tried changing several of the parameters described in the DeepStream documentation.

I also suspect the model-color-format parameter is not set to the right value, because the camera's RTSP stream is black and white.
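For reference, that parameter lives in the [property] section of the infer config. A sketch of the grayscale setting, assuming the network itself takes 1-channel input; the stock resnet10 and yolov3-tiny models expect 3-channel input, for which model-color-format=0 is correct and nvinfer converts the decoded frames itself:

[property]
# 0=RGB, 1=BGR, 2=GRAY
model-color-format=2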

Jaba

It's still not working and I don't know why.

Hi,
The result after dewarping is probably not good. Please try saving the dewarped stream to a video file:
gst-launch-1.0 rtspsrc location=rtsp://192.168.100.224/channel1 latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvideoconvert ! nvdewarper configfile=config_dewarper.txt source-id=6 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mpegtsmux ! filesink location=dewarp.ts
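Once captured, it may be worth confirming the file itself plays back correctly before running inference on it; a sketch, using the matching demux and decode elements:

gst-launch-1.0 filesrc location=dewarp.ts ! tsdemux ! h264parse ! nvv4l2decoder ! nvegltransform ! nveglglessink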

Then update the source URI in deepstream_app_config_yoloV3_tiny.txt to point at the file:

uri=file:///home/nvidia/dewarp.ts

And run:
$ deepstream-app -c deepstream_app_config_yoloV3_tiny.txt

See whether objects are detected in the dewarped video.