Custom YOLOv3 inference on DeepStream 5.0 doesn't work

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier AGX
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.5
• TensorRT Version 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) bug?
• How to reproduce the issue ?

Hello community

I have a problem getting detections from my custom YOLOv3 model in DeepStream 5.0.

My custom model has 9 classes; it was trained with Darknet and, when tested in Darknet, it works properly.

To build libnvdsinfer_custom_impl_Yolo.so, I first modified the nvdsparsebbox_Yolo.cpp file, leaving line number 33 as follows:
static const int NUM_CLASSES_YOLO = 9;
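
For reference, I rebuilt the library with roughly the following commands (the path assumes the default DeepStream 5.0 install location; CUDA_VER=10.2 is the CUDA version that comes with JetPack 4.5):

cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo
export CUDA_VER=10.2
make -C nvdsinfer_custom_impl_Yolo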

Once everything was built, I modified the config_infer_primary_yoloV3.txt file, changing the key num-detected-classes=9 in the [property] group.
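
The relevant part of my [property] group now looks roughly like this (the cfg/weights file names are my custom ones; everything else is left at the sample defaults):

[property]
custom-network-config=yolov3.cfg
model-file=yolov3.weights
num-detected-classes=9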

With that done, I modified the deepstream_app_config_yoloV3.txt file to change the path of the video that I want to test.
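
That is, something like the following in the [source0] group (the video path here is just a placeholder):

[source0]
enable=1
type=3
uri=file:///path/to/my_test_video.mp4
num-sources=1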

To launch the application I use the command: deepstream-app -c deepstream_app_config_yoloV3.txt

The problem I find is that the inference engine does not seem to work, since no bboxes are generated. However, with the default model it works fine, so it seems to be something related to my custom model.

Any idea?

Many thanks

Best regards

Hi.
Firstly, you can refer to https://docs.nvidia.com/metropolis/deepstream/4.0/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf to check whether there is anything else you need to change.
Secondly, config_infer_primary_yoloV3.txt is set up for INT8 inference by default. I am not sure whether you generated an INT8 calibration table for your model; if you didn't, you can try switching to FP16 or FP32 inference, i.e.:
network-mode=2 # FP16
network-mode=0 # FP32
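
For INT8 you would also need a calibration table that matches your own model, referenced via the int8-calib-file key; the file name below is the table that ships with the sample, which is only valid for the default model:

network-mode=1
int8-calib-file=yolov3-calibration.table.trt7.0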

Hi
Thank you very much for the answer.

I have followed the manual you pointed me to; I trained my model using the same configuration, only changing the number of classes.

I have not generated the INT8 calibration file; I used the one that was in the plugin folder. Should I generate one for my custom model? How should I do it?

I have tried changing the precision with network-mode set to 0, 1 and 2, but I keep getting the same error.

Many thanks.

Best regards!

If you are OK with using only FP16 or FP32, it's not needed.
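
To my understanding, the calibration key is only consulted in INT8 mode, so as a sketch (other keys unchanged) this should be enough:

network-mode=2
# int8-calib-file is only read when network-mode=1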

Is it possible to share the Darknet cfg file you trained with?

I have tried with INT8 and FP16 and it does not work, and when configuring it as FP32 it gives an error at startup.

I attach the cfg file that I used to train the model with Darknet.

Many thanks

yolov3.cfg (8.2 KB)

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

I don't see any obvious clues. Could you also share the weights file so that I can reproduce the issue?