Do you have an official script or guide for converting a PyTorch model trained with the YOLOv5 network into an ONNX format usable by TensorRT?
Also, does the PyTorch version matter for the conversion? I run inference with TensorRT 8.0.1 on a Jetson Nano (please see below).
TensorRT Version: 8.0.1
GPU Type: Jetson Nano (Maxwell)
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 18.04 (Jetpack 4.6)
Please share the ONNX model and the conversion script, if you haven't already, so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the snippet below:
import onnx
filename = "yourONNXmodel.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
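For reference, a typical trtexec invocation for an ONNX model might look like the following sketch; the filenames are placeholders, not taken from this thread. It requires a device with TensorRT installed, so it is shown for illustration only.

```shell
# Build a TensorRT engine from an ONNX model with verbose logging.
# "model.onnx" and "model.engine" are placeholder names.
# On Jetson, trtexec lives at /usr/src/tensorrt/bin/trtexec
# if it is not already on your PATH.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --verbose
```

The --verbose log is what the moderator asks for above: it records each layer as it is parsed and built, which is usually enough to pinpoint an unsupported operator or opset mismatch.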
OK, I don't have an ONNX model yet.
But my second question was: does the PyTorch version matter for the conversion? I run inference with TensorRT 8.0.1 on a Jetson Nano.
We hope the following doc helps you.
While converting, please make sure you're using a supported opset version.
For other prerequisites, please refer to the following support-matrix doc.
We also recommend you to use the latest TensorRT version to get better performance.