Using a TensorRT model in Python

I trained two YOLOv5s models myself and converted them to .engine models. When I run detection, I get very good results. But when I load them with torch in my own Python script, it does not work as expected. Here is my code:

import numpy as np
import cv2
import time
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='enson.engine', force_reload=True)
model1 = torch.hub.load('ultralytics/yolov5', 'custom', path='char.engine', force_reload=True)

cap = cv2.VideoCapture(0)
prev_frame_time = 0
new_frame_time = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    gray = frame
    gray = cv2.resize(gray, (500, 300))

    results = model(gray)
    for _, det in enumerate(results.xyxy[0]):
        # convert from tensor to numpy
        box = det.detach().cpu().numpy()[:5]
        # convert from float to integer
        box = [int(x) for x in box]
        x1, y1, x2, y2, name = box
        # crop the license plate image part
        cropped = gray[y1:y2, x1:x2].copy()
        label = "plate"
        color = (0, 255, 255)
        # draw a box on the original image
        cv2.rectangle(gray, (x1, y1), (x2, y2), color, 2)
        t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_PLAIN, 2, 2)[0]

        img2 = cropped.copy()
        # ... (rest of the post-processing omitted)

Hi,

Could you share the setup of your Nano?
Which JetPack and PyTorch package do you use?

Also, would you mind trying other models to see if the same issue occurs?
Thanks.

Hi,
PyTorch version: 1.8.0
JetPack version: 4.6-b197
Actually, I was working with the yolov5s model before and was not getting such an error, but it was too slow, so I converted it.

Can you please help?

Do you actually get improved results with the .engine model compared to the .pt model?
Have you tried ONNX?

I'm definitely getting better results. ONNX has the same speed as .pt.

About how many ms does inference take on the 4 GB Nano? With yolov5n.pt I get about 50 ms, and about 100 ms with .engine… Do you get those results with detect.py from the repo?

Yes, I do. You can run it with this command: python3 detect.py --data data/yourfile.yaml --weights yourfile.engine --source 0. The export takes 15-20 minutes on the Nano.

That long to export on the Nano?! For me it only took a few minutes to generate a .engine file from .pt with this call: python3 export.py --weight yolov5s.pt --optimize --include engine --device 0. The annoying part is that inference is slower with .engine than with .pt.

Hi,

Based on the following code:

model = torch.hub.load('ultralytics/yolov5', 'custom', path='enson.engine', force_reload=True)

Are you trying to load a TensorRT engine with the PyTorch API?
If yes, have you tried it before on other platforms?

In general, we load an engine with the TensorRT API directly.
You can find an example at the link below:
https://elinux.org/Jetson/L4T/TRT_Customized_Example#OpenCV_with_PLAN_model
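For reference, deserializing an .engine file with the TensorRT Python API looks roughly like the sketch below. This is only a minimal example, assuming TensorRT 8.x as shipped with JetPack 4.6 and using the enson.engine filename from the original post; the linked elinux page shows the full pipeline including preprocessing and buffer handling.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# deserialize the .engine file produced by export.py / trtexec
with open("enson.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# an execution context holds the per-inference state
context = engine.create_execution_context()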

Thanks.

Hi,

In my case I'm trying with the TensorRT API: tensorrtx/yolov5_trt.py at master · wang-xinyu/tensorrtx (github.com)
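Roughly, the pattern that script follows is: deserialize the engine, allocate a page-locked host buffer and a device buffer per binding, copy the input in, run execute_async_v2, and copy the outputs back. A minimal sketch of that flow is below, assuming pycuda is installed, a static-shape engine, and that binding 0 is the single input; the char.engine filename is just taken from earlier in the thread and the random input is only a placeholder.

import numpy as np
import pycuda.autoinit          # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with open("char.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# one host/device buffer pair per binding (inputs and outputs)
stream = cuda.Stream()
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

# binding 0 is assumed to be the input; in practice this should hold the
# preprocessed image, matching the shape/normalization the engine was exported with
host_bufs[0][:] = np.random.rand(*host_bufs[0].shape).astype(host_bufs[0].dtype)

cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()
# host_bufs[1:] now hold the raw network outputs; box decoding and NMS are model-specific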

@nebiyebln
Hi,
Could you share the whole of your detect.py file?

I am using the default detect.py file.
