Therefore, I followed the steps in your YOLOv5 GPU optimization repo to convert the model to ONNX format. With the pretrained YOLOv5 model it works great, and I can also convert my custom-trained model to ONNX with the repo. Building the engine file also succeeds on the first run of the TAO DeepStream app, but I get no detections. In the TAO config file I only changed the number of classes to match my model, and updated the class file accordingly. Do I have to change anything else, or is this a bug? I run the models on a Jetson Xavier NX and a Nano.
Can you provide a tutorial for custom models?
Do you also plan to add other models from the YOLO series, like YOLOR, YOLOX, and YOLOv7?
The model is pretrained by Ultralytics on the full COCO dataset, so 80 classes. My model is trained on only 1 class (persons). I set num-detected-classes=1 in the TAO config file and deleted all classes except persons from the text file referenced by the TAO config file (labelfile-path=yolov5_labels.txt). When I try to run the engine file exported from the TAO implementation in Marcos Luciano's DeepStream-Yolo, I also get the message: "Num classes mismatch. Configured: 1, detected by network: 0". So I guess something goes wrong in the pt → onnx → engine conversion process with custom YOLOv5 models in the DeepStream TAO Apps.
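For reference, these are the only nvinfer settings I changed for the 1-class model; everything else is left as shipped with the repo (section and key names as they appear in my config file, and the label file now contains only the single person class):

```
[property]
# custom label file, reduced to one line: person
labelfile-path=yolov5_labels.txt
# was 80 for the pretrained COCO model
num-detected-classes=1
```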