YOLOv5 to TensorRT conversion fails when using an image size other than 640x640

Hello, I successfully deployed a custom-trained YOLOv5 model (.pt file) on a Jetson Nano, optimized with TensorRT at the default imgsz of 640 (.engine file). However, when I set imgsz to anything else (e.g. 320, 416, 256), export fails with "failed to export onnx file" and no further error indication. I also get an assertion error (torch.Size([1, 3, 320, 320]) vs. (1, 3, 640, 640)) when I run detect.py with an imgsz other than 640 using the engine file I have (exported at imgsz 640). Neither problem occurs on Google Colab. As a workaround I trained a model at size 320, got the .pt file, and converted it to an engine file successfully, but that is not my preferred solution: I would rather keep my original model and run inference at different sizes. Do you happen to know if there are any incompatibility issues with the Jetson Nano?
Ubuntu 18.04
python 3.6.9
OpenCV 4.1.1
Torch 1.8.0
Cuda 10.2
Pandas 1.1.5
Numpy 1.19.5
TensorRT 8.2.1.8
No onnx module installed (but this is also the case on Colab)
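As a side note on the image sizes being tried: YOLOv5 requires imgsz to be a multiple of the model's maximum stride (32), and rounds it up otherwise. The sketch below approximates that check (modelled loosely on check_img_size in YOLOv5's utils/general.py; the exact name and behaviour here are an assumption, not the verbatim source):

```python
import math

def check_img_size(imgsz: int, stride: int = 32) -> int:
    """Round imgsz up to the nearest multiple of stride (sketch of YOLOv5's check)."""
    new_size = max(math.ceil(imgsz / stride) * stride, stride)
    if new_size != imgsz:
        print(f"WARNING: imgsz {imgsz} must be a multiple of {stride}, using {new_size}")
    return new_size

print(check_img_size(320))  # 320 (already a multiple of 32)
print(check_img_size(300))  # rounded up to 320
```

All the sizes mentioned above (320, 416, 256, and 640) are already multiples of 32, so the stride check alone does not explain the failed export, but it is worth keeping in mind when picking a custom imgsz.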

Hi. You're probably getting "failed to export onnx file" because the onnx package is missing; try running pip3 install onnx. If that doesn't solve the problem, please share how you are converting your .pt model. As for the assertion error, I had the same problem on a Jetson Nano, but I was doing inference through torch hub:

model = torch.hub.load('yolov5', 'custom', path='path to your .engine model', source='local')
# passing img_size as an argument solved the problem for me
result = model(img, img_size)

You can find more about torch hub in the PyTorch Hub documentation.
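For context on why the assertion fires: a TensorRT engine is serialized with a fixed input shape, so an engine exported at imgsz 640 only accepts (1, 3, 640, 640) tensors. A minimal illustration (a simplified sketch with assumed names, not the actual YOLOv5 backend code):

```python
# Simplified sketch: the inference wrapper asserts that incoming images
# match the input shape baked into the engine at export time.
def check_engine_input(im_shape, engine_shape=(1, 3, 640, 640)):
    # Mirrors the kind of check behind the reported assertion error.
    assert im_shape == engine_shape, (
        f"input shape {im_shape} does not match engine shape {engine_shape}"
    )

check_engine_input((1, 3, 640, 640))      # ok: matches the 640 engine
try:
    check_engine_input((1, 3, 320, 320))  # reproduces the reported mismatch
except AssertionError as err:
    print(err)
```

So to run at 320 you either rebuild the engine at that size (as you eventually did), or, in recent YOLOv5 versions, export with dynamic shapes where supported; version support for that varies.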

Thank you for the insight. I haven't tried PyTorch Hub; maybe I will if nothing else works. I run:

python3 export.py --data <path to data.yaml> --weights <path to pt file> --imgsz 288 --include engine --device 0

and the same for detect.py, using --imgsz and --source accordingly. I will try pip install onnx. Thanks again.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.