TAO Deepstream YOLOv5 Custom Models

Hi,

I tried your TAO DeepStream implementation for YOLOv5.

To do so, I followed the steps in your yolov5 GPU optimization repo to convert the model to ONNX format. With the pretrained YOLOv5 model it works great, and I can also convert my custom-trained model to ONNX with the repo. The engine-file conversion also succeeds on the first run of the TAO DeepStream app, but I get no detections. In the TAO config file I only changed the number of classes to match my model and updated the class file. Do I have to do anything else, or is there a bug? I run the models on a Jetson Xavier NX and a Nano.
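For reference, the two keys being changed live in the nvinfer config file. An illustrative excerpt (key names are standard DeepStream nvinfer properties; the value 1 is for a hypothetical single-class model, and other keys in the file are omitted):

```ini
[property]
# Text file with one class name per line
labelfile-path=yolov5_labels.txt
# Must match the number of classes the network actually outputs
num-detected-classes=1
```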

Can you provide a tutorial for custom models?

Do you also plan to add other new models of the YOLO series, like YOLOR, YOLOX, and YOLOv7?

The repo (GitHub - NVIDIA-AI-IOT/yolov5_gpu_optimization: This repository provides YOLOV5 GPU optimization sample) downloads https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt and then converts it to an ONNX model.
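That conversion path can be sketched roughly as follows. Treat this as an outline under assumptions, not a verified recipe: the exact patch-apply step and export flags come from the yolov5_gpu_optimization README, which should be followed as written.

```shell
# Outline only; consult the yolov5_gpu_optimization README for the exact steps.
git clone https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization.git
git clone https://github.com/ultralytics/yolov5.git
cd yolov5 && git checkout v6.1
# Apply the repo's ONNX-export patch (adds the batched-NMS plugin), then export
# using the standard v6.1 export script:
python export.py --weights yolov5s.pt --include onnx
```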

May I know how you trained your “custom trained model”?

In the current TAO release, YOLOv5 is not available yet.

I also trained the model with YOLOv5 v6.1, though not at the same commit; could that be the problem? Afterwards I export it in exactly the same manner.

But it is available in the DeepStream TAO Apps.

May I know how you trained it?

I don’t know what you mean. I trained it as described in Train Custom Data · ultralytics/yolov5 Wiki (github.com) on our dataset. I also trained another model on the COCO dataset, where I removed all classes except persons from the label files. Both models also work with Marcos Luciano’s DeepStream-Yolo, but we experienced a bug in that repo on the Jetson Xavier NX.

OK, I got it. You trained it according to Train Custom Data · ultralytics/yolov5 Wiki (github.com).

Thus, it is not a topic for TAO.

In summary,

repo (GitHub - NVIDIA-AI-IOT/yolov5_gpu_optimization: This repository provides YOLOV5 GPU optimization sample) + the public https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt: exports to ONNX well and also gets detections.

repo (GitHub - NVIDIA-AI-IOT/yolov5_gpu_optimization: This repository provides YOLOV5 GPU optimization sample) + your trained model: exports to ONNX well but gets no detections.

Could you compare how https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt is trained with the steps you used to train your model? That would let us find the difference and then check the config in the TAO DeepStream implementation.

Yes, exactly.

The model is pretrained by Ultralytics on the full COCO dataset, so 80 classes. My model is trained on only 1 class (persons). I changed num-detected-classes=1 in the TAO config file and deleted all classes except persons from the text file referenced in the TAO config file (labelfile-path=yolov5_labels.txt). When I try to run the engine file from the exported TAO implementation in Marcos Luciano’s DeepStream-Yolo, I also get the message: “Num classes mismatch. Configured: 1, detected by network: 0”. So I guess something goes wrong in the pt → onnx → engine conversion process for custom YOLOv5 models in DeepStream TAO Apps.
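A mismatch like that is plausible because the last dimension of the raw YOLOv5 output encodes the class count, and parsers typically derive the “detected by network” number from the engine’s output dimensions. The sketch below is my own illustration (not code from any of the repos) of how YOLOv5’s output shape relates to the class count:

```python
# Illustrative sketch: relate YOLOv5's output shape to the class count.
# For a 640x640 input, YOLOv5 predicts on 80x80, 40x40 and 20x20 grids
# (strides 8/16/32) with 3 anchors per cell; each prediction vector is
# (x, y, w, h, objectness) plus one score per class.

def yolov5_output_shape(num_classes, img_size=640):
    strides = (8, 16, 32)
    anchors_per_cell = 3
    n_preds = sum(anchors_per_cell * (img_size // s) ** 2 for s in strides)
    return (n_preds, 5 + num_classes)

print(yolov5_output_shape(80))  # (25200, 85) for the COCO-pretrained model
print(yolov5_output_shape(1))   # (25200, 6) for a single-class model
```

If the export step bakes 80 classes into the graph while the config declares 1, a mismatch of this kind is exactly what the error message reports.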

To narrow this down, can you try another public YOLOv5 PyTorch model?

Sadly, I didn’t find any models other than the pretrained ones by Ultralytics. Can you help me out?

You can find the “Assets” at the bottom of Release v6.1 - TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference · ultralytics/yolov5 · GitHub.

After checking, please refer to the topic below.

https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization/issues/1

The solution is to set the correct number of classes. In your case, change 80 to 1.

https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization/blob/main/0001-Enable-onnx-export-with-batchNMS-plugin.patch#L155
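In other words, the class count is hard-coded in the export patch, so it must be edited before exporting. A hypothetical sketch of the change (the actual identifier and surrounding code at that line of the patch may differ):

```diff
-num_classes = 80   # COCO default baked into the export patch (hypothetical name)
+num_classes = 1    # match the number of classes your model was trained on
```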
