Custom YOLOv3 model in DeepStream 5.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Xavier
• DeepStream Version - 5.0
• JetPack Version (valid for Jetson only) - 4.4
• TensorRT Version - 7.0
• NVIDIA GPU Driver Version (valid for GPU only)

I have a custom YOLOv3 model that I was able to load successfully in DeepStream 4.0 using the sample application objectDetector_Yolo. However, when I try to replicate the same setup after upgrading to JetPack 4.4 with DeepStream 5.0, things don't work: objects are not detected, and random bounding boxes show up occasionally. I modified NUM_CLASSES_YOLO in nvdsparsebbox_Yolo.cpp to reflect the classes in the custom model, and updated the config files (infer_primary and the deepstream-app config) to point to the custom weights, yolov3-custom.cfg, and custom labels. I was able to build the nvdsinfer_custom_impl_Yolo project successfully, and deepstream-app loaded the custom model and built the TRT engine. I find this strange and am trying to figure out what could be causing it.
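For reference, the edit I made amounts to changing one constant in the sample bbox parser; a minimal sketch, assuming a hypothetical 3-class custom model (the sample ships with the COCO value of 80):

// nvdsparsebbox_Yolo.cpp (objectDetector_Yolo sample)
// Default in the sample: static const int NUM_CLASSES_YOLO = 80;
// This must match classes= in yolov3-custom.cfg and num-detected-classes
// in the infer config, e.g. for a 3-class custom model:
static const int NUM_CLASSES_YOLO = 3;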

Any ideas/thoughts will be helpful. Also, is there an updated version of this document:
https://docs.nvidia.com/metropolis/deepstream/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf ? It is dated Aug 2019 and some things have changed with DeepStream 5.0. I wonder if I'm missing anything in my setup that could explain the difference in behavior between DeepStream versions 4.0 and 5.0.

Thanks,
Dilip.

Hi @dilip.s,
Is it possible that this is similar to Random Bounding Box in FasterRCNN etlt model in Xavier 30W Mode - #4 by cpchiu ?

Hello @mchi, thanks for the link. I will take a look and see if it applies to my issue.

-Dilip.

Hi @dilip.s
Could you help check whether the issue still reproduces when you change the sink to an mp4 file?

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

Same error here.
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - Xavier NX
• DeepStream Version - 5.0
• JetPack Version (valid for Jetson only) - 4.4
• TensorRT Version - 7.0
• NVIDIA GPU Driver Version (valid for GPU only)

I use the same yolov3.cfg on a Jetson Nano with DeepStream 4.0 and everything works fine there.
My config:
[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3.cfg
model-file=yolov3.weights
#model-engine-file=yolov3_b1_gpu0_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt7.0
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=80
gie-unique-id=1
network-type=0
is-classifier=0
# 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (No clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
nms-iou-threshold=0.5
threshold=0.7
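
For reference, num-detected-classes above has to agree with the darknet cfg: each [yolo] section carries classes=, and the [convolutional] layer just before it must have filters = 3 * (classes + 5). A sketch of the relevant lines for the official 80-class model:

[convolutional]
# filters = 3 * (classes + 5) = 3 * 85 = 255
filters=255

[yolo]
classes=80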

deepstream-app config:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file:///home/gjtjxnoone/projects/hat_data/20200618T061000Z_20200618T061500Z_20200628_140641.mp4
num-sources=1
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
container=1
#1=h264 2=h265
codec=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt

[tracker]
enable=1
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so

[tests]
file-loop=0

The yolov3.cfg and weights are the official ones, and netW/netH is 416, but on the Jetson Xavier NX the bounding boxes are wrong.
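
For reference, the input resolution comes from the [net] section of the darknet cfg; a sketch of the relevant lines for a 416x416 model:

[net]
# input resolution the TRT engine is built for
width=416
height=416
channels=3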

Any help? Thanks.
@mchi

Hi @gjtjx
Could you give this sink a try? Thanks!

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

I have tried that sink, and the bounding-box error is the same. I also tried yolov3-tiny, and its results are correct, so maybe there is a bug. Could you try the official yolov3 cfg and weights?

Sure. Could you share the detailed repro steps?

I downloaded yolov3.cfg and yolov3.weights and set up config_infer_primary_yoloV3.txt and deepstream_app_config_yoloV3.txt. yolov3-tiny works fine with the same setup.

by below steps:

  1. download yolov3.cfg and yolov3.weights
  2. run
    $ deepstream-app -c deepstream_app_config_yoloV3.txt

So, this issue is reproduced? Are you sure about these repro steps?

yes

Hello @mchi,

I still have the issue when I tried setting the sink as you suggested. The bounding boxes are random even in the out.mp4 file.

I looked at the other link you suggested (Random Bounding Box in FasterRCNN etlt model in Xavier 30W Mode) and I don't think my problem is the same. I am not using an etlt model; I just have yolov3-custom.weights & cfg trained with Darknet. Also, my issue is not related to batch size so far, as I'm performing inference on one stream with batch-size=1.

Also, my issue is different from what @gjtjx reported. When I downloaded the yolov3 weights and config and ran the sample video, I saw correct bounding boxes. Once I changed NUM_CLASSES_YOLO and rebuilt nvdsinfer_custom_impl_Yolo, I started seeing these random/bad bounding boxes.

I'm still trying to figure out the issue; any thoughts and ideas will be helpful.

Thanks,
Dilip.

Tangentially, the NVIDIA Python yolov3 implementation throws an exception if ALL_CATEGORIES is not equal to 80.

Maybe changing the category count affects the output layer in the YOLO model. It definitely affects the memory allocated for the bounding-box processing.

From the comments (non-dp version):
"Reshape a TensorRT output from NCHW to NHWC format and then return it in (height, width, 3, 85) dimensionality after further reshaping. output_reshaped – reshaped YOLO output as NumPy arrays with shape (height, width, 3, 85)."

Just wanted to post an update on the issue. For some reason, even the standard yolov3 trained on COCO stopped working for me. I verified that both the standard and custom yolo models were working correctly on a desktop machine with DeepStream 5.0 & TRT 7.0. So I just copied the objectDetector_Yolo folder from the DeepStream samples onto my Xavier into a standalone folder and built the TRT engines again. Now I'm seeing correct bounding boxes with both the standard and custom yolov3 models. I compared the files in nvdsinfer_custom_impl_Yolo but couldn't find any meaningful differences to explain why one version works as expected and the other doesn't. I'm moving on and hope I don't encounter the same issue in the future.
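
In case it helps someone else, my guess (not confirmed) is a stale serialized engine left over from the earlier build. Deleting the old engine file and letting deepstream-app regenerate it would look something like this (engine filename as in the sample config; yours may differ):

$ rm model_b1_gpu0_int8.engine
$ deepstream-app -c deepstream_app_config_yoloV3.txt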

-Dilip.

Hi @mchi,

I am tackling this issue on a Jetson Nano: running a custom TinyYoloV3 model that had worked in DS 4.0, I am getting random bounding boxes showing up occasionally.

Is there an answer to this issue?

Hi zvikas,

Please try with DeepStream GA 5.0 and JetPack 4.4 GA to see if the issue is still present.
If it is, please open a new topic. Thanks!