Error while converting YOLOv3 to TensorRT engine


I am getting this error while converting yolov3.onnx to yolov3.trt:

Reading engine from file yolov3.trt
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "", line 179, in <module>
    main(args.width, args.height, args.batch_size, args.dataset, args.int8, args.calib_file, args.onnx_file, args.engine_file,
  File "", line 130, in main
    with get_engine(onnx_file_path, width, height, batch_size, engine_file_path, int8mode, calib_file) as engine,
AttributeError: __enter__
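The `AttributeError: __enter__` means `get_engine()` returned `None`: `runtime.deserialize_cuda_engine()` returns `None` when handed an empty or corrupt buffer, and `with None as engine:` then fails because `None` is not a context manager. A minimal sketch of guarding against a stale, empty cached yolov3.trt before attempting to deserialize (the helper name `read_serialized_engine` is hypothetical, not from the script):

```python
import os

def read_serialized_engine(engine_file_path):
    """Return the serialized engine bytes, or None if the cached
    engine file is missing or empty (e.g. left by a failed build)."""
    if not os.path.isfile(engine_file_path) or os.path.getsize(engine_file_path) == 0:
        return None  # caller should rebuild the engine from the ONNX model instead
    with open(engine_file_path, "rb") as f:
        return f.read()
```

With a check like this, an empty yolov3.trt left behind by an earlier failed build is caught up front; deleting the stale file so the script rebuilds from yolov3.onnx typically resolves this error.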


TensorRT Version: 7.2.3-1
GPU Type:
Nvidia Driver Version: 470.57.02
CUDA Version: 11.4 on the host, 11.1 in the Docker container (tensorrt:21.04-py3)
CUDNN Version:
Operating System + Version: Debian GNU/Linux 10
Python Version (if applicable): 3.7.12
TensorFlow Version (if applicable): >=2.4.1
PyTorch Version (if applicable): >=1.7.0
Baremetal or Container (if container which image + tag): tensorrt:21.04-py3

Relevant Files

  1. GitHub - AlexeyAB/darknet: YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet )
  2. GitHub - linghu8812/YOLOv3-TensorRT

Steps To Reproduce

  1. git clone https://github.com/linghu8812/YOLOv3-TensorRT
  2. cd YOLOv3-TensorRT
  3. wget
  4. docker pull
  5. docker run --gpus all -it --rm -v /YOLOv3-TensorRT/:/home
  6. cd /home/YOLOv3-TensorRT/
  7. pip3 install -r requirements.txt
  8. python3 --cfg_file yolov3.cfg --weights_file yolov3.weights --output_file yolov3.onnx
  9. python3 --onnx_file yolov3.onnx --engine_file yolov3.trt

Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

import onnx

filename = "yolov3.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
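The trtexec step above can be scripted as follows; this is a sketch that assumes trtexec is on the PATH (in the NGC TensorRT container it lives under /usr/src/tensorrt/bin). The --onnx, --saveEngine, and --verbose flags are standard trtexec options:

```python
import subprocess

def trtexec_cmd(onnx_path, engine_path):
    """Build the trtexec command line for converting an ONNX model
    to a TensorRT engine with verbose build logging."""
    return [
        "trtexec",
        f"--onnx={onnx_path}",
        f"--saveEngine={engine_path}",
        "--verbose",
    ]

# To actually run it and capture the verbose log (requires trtexec on PATH):
# result = subprocess.run(trtexec_cmd("yolov3.onnx", "yolov3.trt"),
#                         capture_output=True, text=True)
# print(result.stdout)
```

If trtexec fails here too, its verbose log will point at the first layer or configuration that TensorRT rejects.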


You can also refer to the official TensorRT YOLOv3 sample, which is known to run successfully.

Thank you.