Please provide complete information as applicable to your setup.
• Hardware Platform = GPU
• DeepStream Version = 5.0
• JetPack Version = N/A
• TensorRT Version = 7
• NVIDIA GPU Driver Version = 450.66
I have a custom yolov3-tiny model that works very well under native darknet (greater than 99.9% accuracy on my test videos). However, when I run the same model on the same videos through DeepStream, I get mediocre results.
Here’s the config file for that model:
[property]
model-file=/mnt/yolo_Files/yolov3-tiny-custom.weights
gpu-id=0
process-mode=2
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/mnt/yolo_Files/yolov3-tiny-custom.cfg
labelfile-path=/mnt/yolo_Files/yolov3-tiny-custom.names
network-mode=0
num-detected-classes=36
gie-unique-id=4
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3TinyCustom
custom-lib-path=/opt/lib/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
cluster-mode=3
nms-iou-threshold=0.5
threshold=0.7
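(For what it's worth, as I understand it the net-scale-factor above is just 1/255 ≈ 0.00392157, i.e. the same divide-by-255 pixel normalization darknet applies, so I believe the input scaling itself matches training.)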
I've tested various settings in this config file (such as network-mode, maintain-aspect-ratio, cluster-mode, threshold, process-mode, etc.), and nothing I've tried fixes the problem.
The NvDsInferParseCustomYoloV3TinyCustom function is the same as the stock NvDsInferParseCustomYoloV3Tiny function; the only difference is that the number of classes is adjusted for my custom model.
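For reference, here is roughly what that change looks like. This is a from-memory sketch based on nvdsparsebbox_Yolo.cpp in the objectDetector_Yolo sample, not my exact file; the anchor/mask values shown are the sample's yolov3-tiny defaults (my build uses the values from my custom .cfg), and the class count is the only deliberate change:

#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Only deliberate change from the stock sample: 80 -> 36 classes. */
static const int NUM_CLASSES_YOLO = 36;

extern "C" bool NvDsInferParseCustomYoloV3TinyCustom(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    /* Anchors/masks as in the sample's yolov3-tiny parser; my build takes
     * these from my custom .cfg. */
    static const std::vector<float> kANCHORS = {
        10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319};
    static const std::vector<std::vector<int>> kMASKS = {{3, 4, 5}, {1, 2, 3}};

    /* Body is identical to NvDsInferParseCustomYoloV3Tiny: it forwards to the
     * shared YoloV3 decode helper defined earlier in the same file. */
    return NvDsInferParseYoloV3(outputLayersInfo, networkInfo, detectionParams,
                                objectList, kANCHORS, kMASKS);
}

/* Lets nvinfer resolve the symbol given in parse-bbox-func-name. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV3TinyCustom);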