YOLOv5 + TensorRT results seem weird on Jetson Nano 4GB

Description

I’m trying to run YOLOv5 inference with TensorRT on a Jetson Nano 4GB. However, the result is quite strange: with the original yolov5s.pt, inference is faster (~120 ms per frame) than with the yolov5s.engine generated by the provided export.py (~140 ms per frame).

Environment

TensorRT Version : TensorRT 8.0.1
GPU Type : Jetson Nano GPU
CUDA Version : CUDA 10.2
CUDNN Version : CUDNN 8.2.1
Operating System + Version : Ubuntu 18.04
Python Version (if applicable) : Python 3.6.9
PyTorch Version (if applicable) : PyTorch v1.10.0

Relevant Files

Here is the link to my video (1920x1080): 1080p.mp4 - Google Drive

Steps To Reproduce

First, from the original yolov5s.pt, I use this command to produce the TensorRT yolov5s.engine file:

python3 export.py --weights yolov5s.pt --include engine --imgsz 640 640 --device 0

Then, I ran inference with this .engine file:

python3 detect.py --weights yolov5s.engine --imgsz 640 640 --device 0 --source ./data/images/1080p.mp4

The result was ~140 ms of inference per frame. [link_image]

But when running the original .pt file with:

python3 detect.py --weights yolov5s.pt --imgsz 640 640 --device 0 --source ./data/images/1080p.mp4

The result was ~120 ms of inference per frame. [link_images]

Please help me!

Never mind, I found a way to fix this problem: it was just a matter of the input resolution. Thank you btw.
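For anyone hitting the same issue: the exported engine expects a fixed 640x640 input, so high-resolution frames (here 1920x1080) have to be scaled down and letterboxed before inference. A minimal sketch of the scale/padding arithmetic, with function and variable names of my own choosing (not from the YOLOv5 codebase):

```python
def letterbox_params(src_w, src_h, dst=640):
    """Compute the resize scale and padding needed to fit a frame
    into a square dst x dst network input, YOLOv5-style."""
    scale = min(dst / src_w, dst / src_h)       # shrink to fit the longer side
    new_w = round(src_w * scale)
    new_h = round(src_h * scale)
    pad_w = dst - new_w                          # leftover space becomes padding
    pad_h = dst - new_h
    # split padding evenly between the two sides
    return scale, (new_w, new_h), (pad_w // 2, pad_h // 2)

# a 1920x1080 frame scales to 640x360 with 140 px of padding top and bottom
scale, size, pad = letterbox_params(1920, 1080)
print(scale, size, pad)
```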

Thanks for the quick report.
Good to know it works now.

I am also trying this, but it’s not working; it gives the error `torchvision has no attribute 'ops'`.
Also, TensorRT can’t detect objects.

Have you ever faced this problem?

Hi @muhammedsezer12, I haven’t hit this error yet, but I really recommend reinstalling a torchvision that matches your torch version (YOLOv5 requires Python >= 3.6). I am using torch 1.10.0 and torchvision 0.11.0. Here is the link to install PyTorch on Jetson Nano if you need it: PyTorch for Jetson - version 1.10 now available
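As a quick sanity check for the version mismatch mentioned above, here is a small sketch that maps a torch version to the matching torchvision minor series. The table is partial and written from memory of the official release matrix, so treat it as illustrative and verify against the PyTorch docs:

```python
# Partial torch -> torchvision compatibility table (illustrative; check the
# official PyTorch release compatibility matrix before relying on it).
TORCH_TO_TORCHVISION = {
    "1.10": "0.11",
    "1.9": "0.10",
    "1.8": "0.9",
    "1.7": "0.8",
}

def expected_torchvision(torch_version):
    """Return the torchvision minor series expected for a torch version string."""
    major_minor = ".".join(torch_version.split(".")[:2])
    return TORCH_TO_TORCHVISION.get(major_minor)

print(expected_torchvision("1.10.0"))  # -> 0.11
```

If your installed torchvision does not fall in that series, `torchvision.ops` may be missing or broken, which matches the error above.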
Good luck.

Hi nguoi_dung98

Can you please tell me where export.py and detect.py are located? I am not able to find the directory path.

Thanks


Hi, thanks, it worked.
But I think I’m doing something wrong.

Could you share your stats with me?
For example, SSD-MobileNet with dusty_nv’s jetson-inference library uses 1.4 GB of RAM and gives 40 FPS,
but this yolov5s.engine uses ~3 GB of RAM and gives 0.086 s inference time.
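For an apples-to-apples comparison with the 40 FPS figure, that per-frame inference time can be converted to frames per second:

```python
# Convert the reported per-frame inference time to FPS for comparison.
inference_time_s = 0.086          # seconds per frame, from detect.py output
fps = 1.0 / inference_time_s
print(f"{fps:.1f} FPS")           # roughly 11.6 FPS, versus 40 FPS for SSD-MobileNet
```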

I know it is a more complex model, but it seems wrong to me.

Also, the engine models built from yolov5s and yolov5m use the same amount of RAM (~3 GB),
even though I set workspace to 1 for both of them.

So could you share the RAM consumption and FPS you get for the yolov5s.engine model?

@ART97 This is the yolov5 repo. GitHub - ultralytics/yolov5: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite

I’m not so sure about SSD-MobileNet performance because I’ve never used it before, but the result for your engine file looks acceptable to me, at least. Your question is beyond my knowledge; I only started in this field a while ago, so maybe you should raise these questions and issues on the YOLOv5 GitHub.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.