Description
I am getting the following error while converting yolov3.onnx to yolov3.trt:
Reading engine from file yolov3.trt
[TensorRT] ERROR: INVALID_ARGUMENT: Cannot deserialize with an empty memory buffer.
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "onnx_to_tensorrt.py", line 179, in <module>
main(args.width, args.height, args.batch_size, args.dataset, args.int8, args.calib_file, args.onnx_file, args.engine_file,
File "onnx_to_tensorrt.py", line 130, in main
with get_engine(onnx_file_path, width, height, batch_size, engine_file_path, int8mode, calib_file) as engine,
AttributeError: __enter__
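For what it's worth, the trailing `AttributeError: __enter__` is what Python raises when a `with` statement is handed `None`: here `get_engine()` presumably returned `None` after the deserialize errors above. A minimal sketch of that failure mode (plain Python, no TensorRT required; `get_engine` below is a stand-in, not the script's actual function):

```python
def get_engine():
    # Stand-in for the script's get_engine(): when deserialization fails,
    # it falls through and returns None instead of an engine object.
    return None

try:
    with get_engine() as engine:  # 'with' needs __enter__/__exit__ on the object
        pass
except (AttributeError, TypeError) as exc:  # exception type/message varies by Python version
    print(type(exc).__name__, exc)
```

So the `AttributeError` is a symptom; the real problem is the two deserialize errors before it.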
Environment
TensorRT Version: 7.2.3-1
GPU Type:
Nvidia Driver Version: 470.57.02
CUDA Version: 11.4 on the host, 11.1 in the Docker container (tensorrt:21.04-py3)
CUDNN Version:
Operating System + Version: Debian GNU/Linux 10
Python Version (if applicable): 3.7.12
TensorFlow Version (if applicable): >=2.4.1
PyTorch Version (if applicable): >=1.7.0
Baremetal or Container (if container which image + tag): tensorrt:21.04-py3
Relevant Files
- https://github.com/AlexeyAB/darknet (YOLOv4 / Scaled-YOLOv4 / YOLO neural networks for object detection, Windows and Linux versions of Darknet)
- https://github.com/linghu8812/YOLOv3-TensorRT
Steps To Reproduce
- git clone https://github.com/linghu8812/YOLOv3-TensorRT
- cd YOLOv3-TensorRT
- wget https://pjreddie.com/media/files/yolov3.weights
- docker pull nvcr.io/nvidia/tensorrt:21.04-py3
- docker run --gpus all -it --rm -v /YOLOv3-TensorRT/:/home nvcr.io/nvidia/tensorrt:21.04-py3
- cd /home/YOLOv3-TensorRT/
- pip3 install -r requirements.txt
- python3 yolov3_to_onnx.py --cfg_file yolov3.cfg --weights_file yolov3.weights --output_file yolov3.onnx
- python3 onnx_to_tensorrt.py --onnx_file yolov3.onnx --engine_file yolov3.trt
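A quick sanity check between the last two steps may help narrow this down: "Cannot deserialize with an empty memory buffer" suggests yolov3.trt exists but is zero bytes (a failed engine build can still leave an empty file behind, which the script then tries to deserialize on the next run). A TensorRT-free sketch of the check, using the engine path from the steps above:

```python
import os

def engine_file_ok(path):
    """Return True if the serialized engine file exists and is non-empty."""
    return os.path.isfile(path) and os.path.getsize(path) > 0

# If this prints, delete yolov3.trt and rerun the conversion so the script
# rebuilds the engine from the ONNX model instead of deserializing an
# empty buffer.
if not engine_file_ok("yolov3.trt"):
    print("yolov3.trt is missing or empty -- rebuild it before deserializing")
```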