I trained two YOLOv5s models myself and converted them to .engine models. Detection with them gives very good results, but when I load them with torch in my own Python script, I get the following output.
import numpy as np
import cv2
import time
import torch

# load both TensorRT engines through the YOLOv5 hub wrapper
model = torch.hub.load('ultralytics/yolov5', 'custom', path='enson.engine', force_reload=True)
model1 = torch.hub.load('ultralytics/yolov5', 'custom', path='char.engine', force_reload=True)

cap = cv2.VideoCapture(0)
prev_frame_time = 0
new_frame_time = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = frame
    gray = cv2.resize(gray, (500, 300))
    results = model(gray)
    for det in results.xyxy[0]:
        # convert from tensor to numpy: [x1, y1, x2, y2, confidence, class]
        box = det.detach().cpu().numpy()
        # convert the box coordinates from float to integer
        x1, y1, x2, y2 = [int(v) for v in box[:4]]
        # crop the license-plate part of the image
        cropped = gray[y1:y2, x1:x2].copy()
        label = "plate"
        color = (0, 255, 255)
        # draw a box on the original image
        cv2.rectangle(gray, (x1, y1), (x2, y2), color, 2)
        t_size = cv2.getTextSize(label, cv2.FONT_HERSHEY_PLAIN, 2, 2)[0]
        img2 = cropped.copy()
        ...
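As an aside, prev_frame_time and new_frame_time are set up but never used. If the intent was an FPS readout, a minimal sketch of what the end of the loop body could look like (the window and key handling here are my assumption, not part of the original script):

    new_frame_time = time.time()
    # guard against division by zero on the very first frame
    fps = 1.0 / (new_frame_time - prev_frame_time) if prev_frame_time else 0.0
    prev_frame_time = new_frame_time
    cv2.putText(gray, f"FPS: {fps:.1f}", (10, 25),
                cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 255), 2)
    cv2.imshow("detections", gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break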
Hi, torch version = 1.8.0, JetPack version = 4.6-b197.
Actually, I was working with the yolov5s .pt model and was not getting this error, but it was too slow, so I converted it to .engine.
About how many ms does inference take on the 4 GB Nano? With yolov5n.pt I get about 50 ms, and about 100 ms with .engine… Do you get those results with detect.py from the repo?
Yes, I do. You can reproduce it with: python3 detect.py --data data/yourfile.yaml --weights yourfile.engine --source 0. The export takes 15-20 minutes on the Nano.
That long to export on the Nano?! For me it only took a few minutes to generate a .engine file from .pt with this call: python3 export.py --weights yolov5s.pt --optimize --include engine --device 0. The annoying part is that inference is slower with .engine than with .pt.
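For anyone who wants to compare the two outside of detect.py, here is a rough standalone timing sketch (the file name is a placeholder; the warm-up loop matters because the first CUDA/TensorRT calls are much slower than steady state):

import time
import numpy as np
import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.engine')  # or a .pt file
img = np.zeros((300, 500, 3), dtype=np.uint8)  # dummy frame, same size as in the script above

# warm-up runs so initialization cost is not counted
for _ in range(5):
    model(img)

n = 50
t0 = time.time()
for _ in range(n):
    model(img)
print(f"average inference: {(time.time() - t0) / n * 1000:.1f} ms")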