Therefore, I followed the steps in your yolov5 GPU optimization repo to convert the model into the ONNX format. With the pretrained YOLOv5 model it works great. I can also convert my custom-trained model into ONNX with the repo. The engine file is also built successfully on the first run of the DeepStream TAO app, but I get no detections. In the TAO config file I only changed the number of classes to match my model and updated the class file. Do I have to change anything else, or is this a bug? I am running the models on a Jetson Xavier NX and a Jetson Nano.
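For what it's worth, as a quick sanity check I run the exported ONNX with onnxruntime on a dummy input before building the engine. This is only a rough sketch; the file name and the 640x640 input size are assumptions from my setup:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical file name for the exported custom model
session = ort.InferenceSession("yolov5s_custom.onnx",
                               providers=["CPUExecutionProvider"])

# Dummy input matching the assumed export resolution (1x3x640x640)
inp = session.get_inputs()[0]
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Run once and print the output names and shapes
outputs = session.run(None, {inp.name: dummy})
for node, out in zip(session.get_outputs(), outputs):
    print(node.name, out.shape)
```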
Can you provide a tutorial for custom models?
Do you also plan to add other models of the YOLO series, such as YOLOR, YOLOX, and YOLOv7?
I don’t know what you mean. I trained it as described in Train Custom Data · ultralytics/yolov5 Wiki (github.com) on our dataset. I also trained another model on the COCO dataset where I removed all classes except persons from the label files. Both models also work with Marcos Luciano’s DeepStream-Yolo, but we experienced a bug in that repo on the Jetson Xavier NX.
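For context, the person-only COCO variant was produced by filtering the YOLO-format label files with a small script along these lines (the path is a placeholder; persons are class 0 in the COCO labels used by YOLOv5):

```python
from pathlib import Path

# Assumed layout: one YOLO-format .txt label file per image,
# each line being "class x_center y_center width height"
LABEL_DIR = Path("labels/train2017")  # placeholder path
PERSON_CLASS = 0  # person class index in the COCO labels

for label_file in LABEL_DIR.glob("*.txt"):
    lines = label_file.read_text().splitlines()
    persons = [l for l in lines
               if l.strip() and l.split()[0] == str(PERSON_CLASS)]
    # Keep only the person annotations; an empty file means "no objects"
    label_file.write_text("\n".join(persons) + ("\n" if persons else ""))
```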
The model is pretrained by Ultralytics on the full COCO dataset, so 80 classes. My model is trained on only 1 class (persons). I set num-detected-classes=1 in the TAO config file and deleted all classes except persons in the text file that is referenced in the TAO config file (labelfile-path=yolov5_labels.txt). When I try to run the engine file exported by the TAO implementation in Marcos Luciano’s DeepStream-Yolo, I also get the message: “Num classes mismatch. Configured: 1, detected by network: 0”. So I guess something goes wrong in the .pt → ONNX → engine conversion process with custom YOLOv5 models in the DeepStream TAO Apps.
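As a rough way to narrow down where the mismatch comes from, the output layer of the exported ONNX can be inspected. Assuming a standard YOLOv5 detection head, the last output dimension should be 5 + num_classes, so 6 for my single-class model (the file name below is a placeholder):

```python
import onnx

# Placeholder name for the ONNX file produced by the conversion repo
model = onnx.load("yolov5_custom.onnx")

for output in model.graph.output:
    dims = [d.dim_value for d in output.type.tensor_type.shape.dim]
    # For a standard YOLOv5 head the last dimension is 5 + num_classes
    # (4 box values + objectness + class scores), so a 1-class model
    # should show 6 here; a different layout would explain why the
    # parser reports 0 detected classes.
    print(output.name, dims)
```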