Description
I trained a YOLOv4 model with the TAO Toolkit and exported it as an .etlt model.
I want to run inference on it with the TensorRT Python API.
- I’m using the TensorRT container version 22.01.
- Installed TensorRT OSS using the script /opt/tensorrt/install_opensource.sh inside the container.
- Downloaded tao-converter version 3.22.05_trt8.2_x86 and converted the YOLOv4 .etlt model to an engine with the command below:
./tao-converter ../models/ppe_v21_yolov4_mobilenetv2_epoch_070.etlt -k nvidia_tlt -d 1,3,160,320 -t fp16 -e ../engines/ppe_yolov4_mobilenetv2_epoch70_fp16.engine -p Input,1x3x320x160,8x3x320x160,16x3x320x160 -o BatchedNMS
The engine is generated successfully with the above command, but deserialization fails when I try to run inference with the Python API. I have attached the code I use to run inference on the generated engine.
trt_loader.py (9.3 KB)
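For context, the deserialization step in trt_loader.py follows the standard TensorRT Python API pattern. A minimal sketch of that step (the engine path matches the converter output above; everything else is illustrative):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

def load_engine(engine_path):
    """Deserialize a serialized TensorRT engine from disk."""
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

# This call is where deserialization fails (see the traceback below).
engine = load_engine("../engines/ppe_yolov4_mobilenetv2_epoch70_fp16.engine")
```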
Environment
TensorRT Version: 8.2.2
GPU Type: RTX 3070
Nvidia Driver Version: 530.30.02
CUDA Version: 11.6
CUDNN Version: 8.3.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag): Container version 22.01
Steps To Reproduce
Full traceback of the error encountered:
```
[04/13/2023-13:01:08] [TRT] [I] [MemUsageChange] Init CUDA: CPU +750, GPU +0, now: CPU 785, GPU 842 (MiB)
[04/13/2023-13:01:08] [TRT] [I] Loaded engine size: 4 MiB
[04/13/2023-13:01:08] [TRT] [E] 1: [pluginV2Runner.cpp::load::290] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[04/13/2023-13:01:08] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Traceback (most recent call last):
```
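From the error, it looks like the creator for the BatchedNMS plugin is not in the plugin registry at deserialization time. Is explicitly loading the plugin library and initializing the plugin creators before deserializing, as sketched below, the expected fix, or is something else missing? (A minimal sketch; the library name/path for the OSS build is my assumption.)

```python
import ctypes

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Load the TensorRT plugin library so its creators can register themselves.
# The library name/path is an assumption; the OSS build inside the container
# may install libnvinfer_plugin.so somewhere else.
ctypes.CDLL("libnvinfer_plugin.so", mode=ctypes.RTLD_GLOBAL)

# Register all bundled plugin creators (including BatchedNMS) with the
# plugin registry before deserializing the engine.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")
```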
Kindly help me fix this issue.