Description
I have my own pretrained PyTorch model that I want to convert to a TensorRT engine (.engine). I run this Python script:
import torch
from torch2trt import torch2trt

pytorch_model = torch.load("crowdhuman1600x_yolov5l.pt")
x = torch.ones((1, 3, 224, 224)).cuda()
trt_model = torch2trt(pytorch_model, [x])

with open("model.engine", "wb") as f:
    f.write(trt_model.engine.serialize())
but every time I run it, the following error occurs:
Traceback (most recent call last):
  File "/home/nvidia/torch2trt/export.py", line 9, in <module>
    pytorch_model = torch.load("crowdhuman1600x_yolov5l.pt")
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/serialization.py", line 592, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/home/nvidia/.local/lib/python3.6/site-packages/torch/serialization.py", line 851, in _load
    result = unpickler.load()
ModuleNotFoundError: No module named 'models'
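For context, the error is raised by unpickling, not by TensorRT: torch.load restores the checkpoint with pickle, and pickle stores classes by reference (module name plus class name), so loading re-imports every module the saved classes were defined in. A YOLOv5 checkpoint references classes from the repo's local models package. A stdlib-only sketch (the module and class names here are stand-ins, not the real YOLOv5 code) reproduces the failure mode:

```python
import pickle
import sys
import types

# Simulate a project-local module named "models" with a class defined in it,
# like models/yolo.py in the YOLOv5 repo.
models = types.ModuleType("models")

class Detect:
    pass

Detect.__module__ = "models"   # pretend the class lives in the "models" module
Detect.__qualname__ = "Detect"
models.Detect = Detect
sys.modules["models"] = models

# Pickling stores only the reference "models.Detect", not the class body.
data = pickle.dumps(Detect())

# If "models" is not importable at load time, unpickling fails exactly like
# torch.load does in the traceback above.
del sys.modules["models"]
try:
    pickle.loads(data)
except ModuleNotFoundError as e:
    print(e)  # → No module named 'models'
```

This suggests the script can only deserialize the checkpoint if the directory containing the YOLOv5 models package is importable when torch.load runs.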
How can I solve it?
Environment
TensorRT Version: 8.2.1.9
GPU Type: Xavier
CUDA Version: 10.2
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
PyTorch Version (if applicable): 1.8.0