DeepStream trained YOLOv3-tiny accuracy

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.0

Hello everyone,

I’m trying to run my trained YOLOv3-tiny model on DeepStream, and I’m facing some problems with the model’s accuracy. The original model is a Keras model trained to detect one class, and I converted it to .weights using my .cfg file so it can run on DeepStream. I tested the conversion step in a plain Python script (outside DeepStream) and it performs the same as the original Keras model. However, when I run it on DeepStream the accuracy changes: the bounding boxes are incorrect, and it groups more than one object into the same box!

The change I made to nvdsparsebbox_Yolo.cpp:

static const int NUM_CLASSES_YOLO = 1;

config_infer_primary_yoloV3_tiny.txt:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=yolov3-tiny_RL.cfg
model-file=yolov3-tiny_RL.weights
#model-engine-file=yolov3-tiny_b1_gpu0_fp32.engine
labelfile-path=labels.txt
#0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
gie-unique-id=1
network-type=0
is-classifier=0
#0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0
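(For reference, the net-scale-factor above is simply 1/255: DeepStream multiplies every input pixel by this factor, so 0–255 intensities are rescaled to the 0–1 range a Keras-trained network typically expects. A quick arithmetic check:)

```python
# net-scale-factor multiplies each input pixel, so 1/255 maps the
# 0-255 intensity range onto 0-1 before the tensor reaches the network.
scale = 1.0 / 255.0
assert abs(scale - 0.0039215697906911373) < 1e-8  # the value in the config
```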

My application configuration file:

[property]
gpu-id=0
interval=2
gie-unique-id=1
batch-size=1
nvbuf-memory-type=0
net-scale-factor=0.0039215697906911373
#0=RGB, 1=BGR
model-color-format=0
config-file=config_infer_primary_yoloV3_tiny.txt
custom-network-config=yolov3-tiny_RL.cfg
model-file=yolov3-tiny_RL.weights
#model-engine-file=model_b1_gpu0_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt7.0
network-mode=1
num-detected-classes=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
engine-create-func-name=NvDsInferYoloCudaEngineGet
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
cluster-mode=2
process-mode=1

Model config file:

[net]
#Testing
batch=1
subdivisions=1
#Training
#batch=64
#subdivisions=2
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 4000
policy=steps
steps=3200,3600
scales=.1,.1

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

Thank you,

Please check whether the num-detected-classes setting is right for your case.

As you can see in the configuration, num-detected-classes is already set to 1, which is the number of classes in my case. However, I managed to solve the problem by adding

[class-attrs-all]
nms-iou-threshold=0.45
threshold=0.5

to both the model and app config files, which matches my original model’s settings.
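(For anyone hitting the same symptom: with cluster-mode=2 the parsed boxes go through NMS, so these two values control which detections survive and how overlapping boxes are merged. A minimal Python sketch of greedy NMS, just to illustrate what the two thresholds do, not the DeepStream implementation:)

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.45, score_thresh=0.5):
    # Drop boxes below score_thresh (threshold=0.5), then greedily keep the
    # highest-scoring box and suppress any remaining box that overlaps a kept
    # box above iou_thresh (nms-iou-threshold=0.45). Returns kept indices.
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

With too loose a score threshold or too high an IoU threshold, several overlapping detections of the same object can all survive, which looks like "more than one object gathered in the same box".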

Glad to know that, thanks for the update!

I compared the output bounding boxes from the .h5 model and the converted model (DeepStream output), and found that some boxes are lost from the converted model. What might be the problem?
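(One way to narrow this down is to match the two sets of boxes by IoU and inspect the reference boxes with no counterpart in the DeepStream output; if the lost boxes are mostly low-confidence or tightly overlapping ones, the threshold and nms-iou-threshold values are the first suspects. A small matching sketch, where `missing_boxes` is a hypothetical helper and boxes are (x1, y1, x2, y2):)

```python
def _iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def missing_boxes(ref_boxes, ds_boxes, iou_thresh=0.5):
    # Reference boxes (e.g. from the .h5 model) that no DeepStream box
    # overlaps above iou_thresh -- i.e. the "lost" detections.
    return [r for r in ref_boxes
            if all(_iou(r, d) < iou_thresh for d in ds_boxes)]
```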