Export PyTorch's YOLOv5 model to ONNX

Hello!

  • Do you have an official script or guide for converting a PyTorch model trained with the YOLOv5 network into a TensorRT-usable ONNX format?

  • Does the PyTorch version matter for conversion? I infer with TensorRT 8.0.1 on a Jetson Nano (please see below).

Environment

TensorRT Version: 8.0.1
GPU Type: Jetson Nano (Maxwell)
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 18.04 (Jetpack 4.6)

Hi,
Please share the ONNX model and the export script, if you haven't already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Load the model and run ONNX's structural validity checker on it.
filename = "your_model.onnx"  # replace with the path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

If you are still facing issues, please share the trtexec "--verbose" log for further debugging.
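For reference, the trtexec step above can look like this on a Jetson. The binary path is the usual JetPack install location, and the model filename is a placeholder for your own export; adjust both for your setup.

```shell
# trtexec ships with TensorRT; on JetPack it is typically installed at
# /usr/src/tensorrt/bin. --verbose prints the detailed parsing and engine
# build log, which is what we need for debugging conversion failures.
/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --verbose
```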
Thanks!

OK, I don't have an ONNX model yet.
But my second question was: does the PyTorch version matter for conversion? I infer with TensorRT 8.0.1 on a Jetson Nano.

Hi,

We hope the following doc helps:
https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html

While converting, please make sure you're using a supported opset version.
For other prerequisites, please refer to the support matrix doc.

We also recommend using the latest TensorRT version to get better performance.

Thank you.