I have created a virtual environment using Python 3.8. But when I run YOLOv8 inference on my Jetson with a custom model, the inference is very slow, even though the reported time is only 6-8 ms. When I observed GPU usage with jtop, only part of the GPU was being used. I also checked torch CUDA availability and it shows True…
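For reference, the check I ran was along these lines:

```python
# Basic CUDA visibility check; True only means PyTorch can see the GPU,
# not that every op in the pipeline actually runs on it.
import torch

print(torch.cuda.is_available())       # True on a working Jetson install
print(torch.cuda.get_device_name(0))   # name of the Jetson's integrated GPU
```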
Dear @mjt7913,
Is this a PyTorch-based implementation? If so, you can convert it to a TensorRT (TRT) model for better performance.
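A minimal sketch of the conversion, assuming the custom model was trained with the ultralytics package ("best.pt" is a placeholder for your weights file):

```python
from ultralytics import YOLO

model = YOLO("best.pt")

# Build a TensorRT engine directly on the device; FP16 usually helps on Jetson.
model.export(format="engine", half=True, device=0)

# Alternatively, export to ONNX and build the engine with trtexec instead.
model.export(format="onnx")
```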
TensorRT is already present on the Jetson Nano, but only for Python 3.6, and I'm using Python 3.8… So do I have to build it for Python 3.8? Also, I don't know how to use TRT. If you do, could you briefly explain? Thank you…
Dear @mjt7913,
If you have an ONNX model, you can use the trtexec tool in TensorRT to measure the inference time of the model.
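A typical invocation might look like the following (file names are placeholders; on JetPack, trtexec usually ships under /usr/src/tensorrt/bin):

```
/usr/src/tensorrt/bin/trtexec --onnx=best.onnx --fp16 --saveEngine=best.engine
```

trtexec builds the engine, runs a benchmark loop, and prints a latency/throughput summary at the end, so you can compare it directly against the PyTorch numbers.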