Error while converting YOLOv3 to TRT engine

Description

Getting this error while converting yolov3.onnx to yolov3.trt:

Reading engine from file yolov3.trt
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "onnx_to_tensorrt.py", line 179, in <module>
    main(args.width, args.height, args.batch_size, args.dataset, args.int8, args.calib_file, args.onnx_file, args.engine_file,
  File "onnx_to_tensorrt.py", line 130, in main
    with get_engine(onnx_file_path, width, height, batch_size, engine_file_path, int8mode, calib_file) as engine,
AttributeError: __enter__
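
For context, the two errors are likely connected: an empty yolov3.trt on disk makes deserialization fail, get_engine() then apparently returns None, and entering a "with" block on None raises AttributeError: __enter__. A minimal sketch of that chain (the get_engine below is a hypothetical simplification for illustration, not the script's actual code):

import os

def get_engine(engine_file_path):
    # Hypothetical simplification: an existing-but-empty engine file
    # makes deserialization fail, so the function returns None.
    if os.path.isfile(engine_file_path):
        print(f"Reading engine from file {engine_file_path}")
        with open(engine_file_path, "rb") as f:
            buf = f.read()
        if not buf:
            return None  # deserializing an empty buffer fails

open("yolov3.trt", "wb").close()          # simulate a stale, zero-byte engine file
with get_engine("yolov3.trt") as engine:  # raises AttributeError: __enter__
    pass

If that is what happened here, deleting the stale/empty yolov3.trt before rerunning onnx_to_tensorrt.py may avoid the deserialization path and force a fresh engine build from the ONNX file.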

Environment

TensorRT Version: 7.2.3-1
GPU Type:
Nvidia Driver Version: 470.57.02
CUDA Version: 11.4 on the host, 11.1 in the Docker container (tensorrt:21.04-py3)
CUDNN Version:
Operating System + Version: Debian GNU/Linux 10
Python Version (if applicable): 3.7.12
TensorFlow Version (if applicable): >=2.4.1
PyTorch Version (if applicable): >=1.7.0
Baremetal or Container (if container which image + tag): tensorrt:21.04-py3

Relevant Files

  1. https://github.com/AlexeyAB/darknet (YOLOv4 / Scaled-YOLOv4 / YOLO neural networks for object detection, Windows and Linux version of Darknet)
  2. https://github.com/linghu8812/YOLOv3-TensorRT

Steps To Reproduce

  1. git clone https://github.com/linghu8812/YOLOv3-TensorRT
  2. cd YOLOv3-TensorRT
  3. wget https://pjreddie.com/media/files/yolov3.weights
  4. docker pull nvcr.io/nvidia/tensorrt:21.04-py3
  5. docker run --gpus all -it --rm -v /YOLOv3-TensorRT/:/home nvcr.io/nvidia/tensorrt:21.04-py3
  6. cd /home/YOLOv3-TensorRT/
  7. pip3 install -r requirements.txt
  8. python3 yolov3_to_onnx.py --cfg_file yolov3.cfg --weights_file yolov3.weights --output_file yolov3.onnx
  9. python3 onnx_to_tensorrt.py --onnx_file yolov3.onnx --engine_file yolov3.trt

Hi,
Please share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to the ONNX model to validate
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises onnx.checker.ValidationError if the model is invalid
print("ONNX model check passed.")
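
For the model in this thread, that would be (assuming the snippet is saved as check_model.py):

python3 check_model.py yolov3.onnx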
  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
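
For example, a minimal invocation for this model might look like the following (the file paths assume the working directory from the steps above):

trtexec --onnx=yolov3.onnx --saveEngine=yolov3.trt --verbose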
Thanks!

Hi,

You can also try the official TensorRT YOLOv3 sample (yolov3_onnx), which runs this conversion successfully.
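
For reference, a rough sketch of running that sample inside the 21.04 container (the sample location /workspace/tensorrt/samples/python/yolov3_onnx is an assumption and may differ between releases):

cd /workspace/tensorrt/samples/python/yolov3_onnx  # assumed sample path in the NGC image
python3 -m pip install -r requirements.txt
python3 yolov3_to_onnx.py
python3 onnx_to_tensorrt.py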

Thank you.