TensorRT C++ engine deserialization failed. Windows 10


Hello, I’m trying to deserialize a .engine file in C++ using the following code:

bool didInitPlugins = initLibNvInferPlugins(nullptr, "");
nvinfer1::ICudaEngine* engine = infer->deserializeCudaEngine(model_data, model_size, nullptr);
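For context, here is a minimal sketch of how the engine bytes might be loaded from disk before the call above. `readEngineFile` is a hypothetical helper (not part of TensorRT); it assumes the buffer it returns is what gets passed as `model_data`/`model_size` to `IRuntime::deserializeCudaEngine`:

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: read the serialized engine file into a byte buffer.
// The returned buffer's data() and size() would then be passed to
// IRuntime::deserializeCudaEngine as model_data and model_size.
std::vector<char> readEngineFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file)
        throw std::runtime_error("failed to open engine file: " + path);
    std::streamsize size = file.tellg();  // file size in bytes
    file.seekg(0, std::ios::beg);
    std::vector<char> buffer(static_cast<size_t>(size));
    if (!file.read(buffer.data(), size))
        throw std::runtime_error("failed to read engine file: " + path);
    return buffer;
}
```

A truncated or partially written engine file read this way is one common cause of the `Assertion failed: d == a + length` error, so checking that the buffer size matches the on-disk file size is a cheap sanity check.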

But then I get the following error in Windows console output:

Assertion failed: d == a + length

The engine file was created using the same CUDA version, TensorRT version, and GPU as on the machine where I’m trying to deserialize it.


TensorRT Version:
GPU Type: Nvidia GeForce RTX 2070 Super
Nvidia Driver Version: 516.01
CUDA Version: 11.7
CUDNN Version: 8.4.0
Operating System + Version: Windows 10 version 21H2
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

7-Zip archive with the C++ Visual Studio 2019 project
(cuda11.7_ETmodel.zip - Google Drive)

Steps To Reproduce

Launch “cuda11.7_ETmodel.exe” in “cuda11.7_ETmodel\x64\Release” folder

Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.



Can you please provide an example of how to use the IPluginV2DynamicExt or IPluginV2IOExt interfaces to deserialize a .engine file in C++ code?


Hope the following samples are helpful to you.

Thank you.