DeepStream 5.0 Inference Accuracy Lower than Native Darknet

Please provide complete information as applicable to your setup.

• Hardware Platform = GPU
• DeepStream Version = 5.0
• JetPack Version = N/A
• TensorRT Version = 7
• NVIDIA GPU Driver Version = 450.66

I have a custom yolov3-tiny model that works very well under native Darknet (greater than 99.9% accuracy on test videos). However, when I run the same model and the same video through DeepStream, I get mediocre results.

Here’s the config file for that model:

[property]
model-file=/mnt/yolo_Files/yolov3-tiny-custom.weights
gpu-id=0
process-mode=2
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/mnt/yolo_Files/yolov3-tiny-custom.cfg
labelfile-path=/mnt/yolo_Files/yolov3-tiny-custom.names
network-mode=0
num-detected-classes=36
gie-unique-id=4
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3TinyCustom
custom-lib-path=/opt/lib/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
cluster-mode=3
nms-iou-threshold=0.5
threshold=0.7

I’ve tested various settings in this config file (network-mode, maintain-aspect-ratio, cluster-mode, threshold, process-mode, etc.), and nothing I do fixes the problem.
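One thing I’m not sure about: from what I can tell in the 5.0 documentation, the detection thresholds belong in a per-class group rather than under [property], with pre-cluster-threshold replacing the older threshold key. A sketch of what that group would look like, reusing the same values as above:

[class-attrs-all]
pre-cluster-threshold=0.7
nms-iou-threshold=0.5

I don’t know whether that placement alone explains the discrepancy.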

The NvDsInferParseCustomYoloV3TinyCustom function is the same as the stock NvDsInferParseCustomYoloV3Tiny function; only the number of classes is adjusted for my custom model.
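In case it helps to see it, this is roughly what that function looks like. It’s only a sketch based on the stock nvdsparsebbox_Yolo.cpp from the objectDetector_Yolo sample (I’m assuming that file layout), and the anchors and masks shown are the standard yolov3-tiny defaults, not necessarily the values from my .cfg.

// Sketch of the custom parser, following the layout of the stock
// nvdsparsebbox_Yolo.cpp in the objectDetector_Yolo sample.
#include <vector>
#include "nvdsinfer_custom_impl.h"

// In the stock sample this constant drives the decode; it is the main thing
// that changes for a custom model (36 classes here).
static const int NUM_CLASSES_YOLO = 36;

// Decode helper that the stock sample defines earlier in the same
// nvdsparsebbox_Yolo.cpp file: parses the two yolov3-tiny output layers
// using the given anchors and masks.
bool NvDsInferParseYoloV3(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList,
    const std::vector<float>& anchors,
    const std::vector<std::vector<int>>& masks);

extern "C" bool NvDsInferParseCustomYoloV3TinyCustom(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    // Standard yolov3-tiny anchors/masks; these have to match the [yolo]
    // blocks in yolov3-tiny-custom.cfg or the boxes come out wrong.
    static const std::vector<float> kANCHORS = {
        10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319};
    static const std::vector<std::vector<int>> kMASKS = {{3, 4, 5}, {0, 1, 2}};

    return NvDsInferParseYoloV3(
        outputLayersInfo, networkInfo, detectionParams, objectList,
        kANCHORS, kMASKS);
}

// Lets nvinfer verify the prototype matches NvDsInferParseCustomFunc.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV3TinyCustom);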

Did you check https://docs.nvidia.com/metropolis/deepstream/4.0/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf to see if it helps?

Thanks!

Yes, I’ve read that document before. We’ve been using DeepStream 4 in production with some of our other models for a while now. This problem is with a newer model that we built, and with DeepStream 5.

Hi @cbstryker, did you have any luck fixing your problem?

I’m running into a similar issue: I have a Darknet-trained YOLOv3 model that was giving acceptable results under DS 4.0. Now I’m testing the same model with the DS 5.0 Triton docker container, and the accuracy has fallen significantly.