Custom YoloV3 inference on DeepStream 5.0 doesn't work

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier AGX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.5
• TensorRT Version 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Bug?
• How to reproduce the issue? See the description below.

Hello community

I have a problem running my custom YoloV3 model in DeepStream 5.0: no detections are displayed.

My custom model has 9 classes; it was trained with darknet, and when tested with darknet it works properly.

To build libnvdsinfer_custom_impl_Yolo, I first modified the nvdsparsebbox_Yolo.cpp file, changing line 33 to the following:
static const int NUM_CLASSES_YOLO = 9;
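// must equal the number of classes the custom darknet model was trained with (9 in my case)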

Once everything was built, I modified the config_infer_primary_yoloV3.txt file, setting the key num-detected-classes=9 in the [property] group.
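
For reference, the relevant part of my [property] group now looks roughly like this (trimmed; the cfg/weights/labels file names are placeholders for my actual custom files):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
# placeholders for my custom darknet files
custom-network-config=yolov3-custom.cfg
model-file=yolov3-custom.weights
labelfile-path=labels.txt
batch-size=1
num-detected-classes=9
gie-unique-id=1
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet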

After making this change, I modified the deepstream_app_config_yoloV3.txt file to change the path of the video I want to test.
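
The [source0] group now looks roughly like this (the uri is a placeholder for my actual test video):

[source0]
enable=1
# unchanged from the sample config
type=3
# placeholder path to my test video
uri=file:///path/to/my_test_video.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0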

To launch the application, I use the command: deepstream-app -c deepstream_app_config_yoloV3.txt

The problem is that the inference engine does not seem to work: no bounding boxes are generated. However, with the default model everything works fine, so it seems to be something related to my custom model.

Any ideas?

Many thanks

Best regards