I’m trying to run YOLOv5 inference with TensorRT on a Jetson Nano 4GB. However, the result is quite strange: with the original ‘yolov5s.pt’, inference is faster (~120 ms) than with the ‘yolov5s.engine’ generated by the provided export.py (~140 ms).
Environment
TensorRT Version: 8.0.1
GPU Type: Jetson Nano
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
PyTorch Version (if applicable): 1.10.0
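For reference, this is roughly how I’m timing both backends (a minimal sketch; `run` is a hypothetical wrapper around whichever model is loaded, and the warm-up plus synchronization are there to keep the GPU timing fair):

```python
import time
import torch

def benchmark(run, img, n_warmup=10, n_iters=100):
    """Return mean latency in ms for a callable run(img)."""
    for _ in range(n_warmup):      # warm-up: first calls pay one-off init costs
        run(img)
    torch.cuda.synchronize()       # flush queued GPU work before starting the clock
    t0 = time.time()
    for _ in range(n_iters):
        run(img)
    torch.cuda.synchronize()       # wait for the last inference to actually finish
    return (time.time() - t0) / n_iters * 1000
```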
Hi @muhammedsezer12, I haven’t run into this error myself, but I really recommend reinstalling a torchvision build that matches your torch version (YOLOv5 requires Python >= 3.6). I am using torch 1.10.0 and torchvision 0.11.0. Here is the link to install PyTorch on the Jetson Nano if you need it: PyTorch for Jetson
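A quick sanity check (just a sketch) to confirm the pair is consistent and CUDA is visible:

```python
import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expect a matching pair, e.g. 1.10.0 / 0.11.0
print(torch.cuda.is_available())                   # should print True on the Nano
```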
Good luck.
Hi, thanks, it worked.
But I think I’m doing something wrong.
Could you share your stats with me?
For example, SSD-MobileNet with the jetson-inference library from dusty_nv uses 1.4 GB of RAM and gives 40 FPS,
but this yolov5.engine uses ~3 GB of RAM and takes ~0.086 s (~86 ms) per inference.
I know it is a more complex model, but that seems wrong to me.
Also, the engine models from yolov5s and yolov5m use the same amount of RAM, ~3 GB,
even though I set the workspace to 1 GB for both of them (see the sketch below for what I mean by workspace).
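By workspace I mean the TensorRT builder’s scratch-memory limit, which in the Python API (TensorRT 8.0) looks roughly like this (sketch):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB scratch limit for tactic selection
```

As far as I understand, this only caps the builder’s scratch memory during engine creation; it does not bound the runtime footprint, so the CUDA context and loaded libraries can still dominate RAM on the Nano, which may be why both engines show ~3 GB.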
So could you share the RAM consumption and FPS you get with the yolov5s.engine model?
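In case it helps to compare like for like, here is roughly how I’m reading both numbers (a sketch; psutil is assumed to be installed, and tegrastats is the usual alternative on Jetson):

```python
import os
import time
import psutil

proc = psutil.Process(os.getpid())

def report(run, img, n=200):
    """Print throughput and resident memory for a callable run(img)."""
    t0 = time.time()
    for _ in range(n):
        run(img)  # assumed to return results, so timing is end-to-end
    fps = n / (time.time() - t0)
    rss_gb = proc.memory_info().rss / 1024**3  # resident set size of this process
    print(f"{fps:.1f} FPS, {rss_gb:.2f} GB RSS")
```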
I’m not so sure about the SSD-MobileNet performance because I’ve never used it before, but the result for your engine file is acceptable to me, at least. Your question is beyond my knowledge; I’ve only just started in this field, so maybe you should raise these questions and issues on the YOLOv5 GitHub.