I cannot use the .pt file trained in YOLOv5 with detect.py

I ran the YOLOv5 detect.py script on a Jetson Orin NX. I gave it a .pt file trained with YOLOv5 as input, but I get the following error. What is the cause, and how can I solve it?

lagari@lagari:~/YOLOV5_2$ python detect.py
[sudo] password for lagari:
Command is written on terminal.
YOLOv5 🚀 v7.0-215-ga6659d0 Python-3.8.10 torch-2.0.0+nv23.05 CUDA:0 (Orin, 7337MiB)

Fusing layers…
Model summary: 476 layers, 87185236 parameters, 0 gradients
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 1
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[ WARN:0] global /home/ubuntu/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1100) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
1/1: 0… success (inf frames 1600x900 at 30.00 FPS)

Traceback (most recent call last):
File "detect.py", line 443, in <module>
detect_model.obj_detection()
File "detect.py", line 209, in obj_detection
model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
File "/home/lagari/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lagari/YOLOV5_2/models/yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/home/lagari/YOLOV5_2/models/yolo.py", line 121, in _forward_once
x = m(x) # run
File "/home/lagari/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lagari/YOLOV5_2/models/yolo.py", line 65, in forward
self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
RuntimeError: The expanded size of the tensor (1) must match the existing size (80) at non-singleton dimension 3. Target sizes: [1, 2, 1, 1, 2]. Tensor sizes: [2, 80, 80, 2]
FATAL: exception not rethrown
Aborted (core dumped)

Hi,

Have you customized the app?
It looks like the camera image size (1600x900) and the warm-up tensor size (1 x 3 x imgsz x imgsz) don't match.
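As a minimal sketch of the size bookkeeping involved (assuming YOLOv5's usual defaults of imgsz=640 and stride=32, which are not stated in this thread), YOLOv5-style preprocessing scales the frame so its long side fits imgsz and then pads both dimensions up to a multiple of the model stride. The warm-up tensor must be compatible with that padded shape. The helper name `letterbox_shape` below is hypothetical, for illustration only:

```python
def letterbox_shape(h, w, new_size=640, stride=32):
    """Compute the resized-and-padded shape that YOLOv5-style
    letterbox preprocessing would produce for an h x w frame."""
    r = new_size / max(h, w)                       # scale so the long side fits new_size
    nh, nw = int(round(h * r)), int(round(w * r))  # resized dimensions
    pad_h = (stride - nh % stride) % stride        # pad height up to a stride multiple
    pad_w = (stride - nw % stride) % stride        # pad width up to a stride multiple
    return nh + pad_h, nw + pad_w

# A 1600x900 camera frame with the assumed defaults:
print(letterbox_shape(900, 1600))  # (384, 640)
```

If the warm-up tensor is built with a bare `imgsz` while the real frames go through letterboxing like this (or skip it entirely), the grid shapes cached during warm-up won't match the shapes seen at inference time.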

Thanks.

Thanks for the reply. The problem is solved.
