[TRT]: Deserialize the cuda engine failed

Description

When running our DeepStream 5.0.1 application on a TX2-based device, using an engine file with the usual name
model_b2_gpu0_fp16.engine

everything works well on some TX2 devices, while on other, identical TX2 devices we get this error while loading the engine:

ERROR: [TRT]: /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 3 (initialization error)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/crs/inferNetwork__Tiny_Yolo__HFP/model_b2_gpu0_fp16.engine

The engine file was generated on a TX2 dev board or on the same TX2 device; the results are the same in either case, … on some devices it loads, on others it does not.

Question: which DeepStream or SDK commands can we use to understand why the engine is not loaded on some devices?
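One way to narrow this down, while waiting for an official answer, is to compare the runtime libraries between a device that loads the engine and one that fails. Below is a minimal sketch (not an official DeepStream/SDK command) that fingerprints a few shared libraries so the output can be diffed across devices; the library paths are my assumptions for an aarch64 JetPack install and should be adjusted to match your image.

```python
# Hypothetical diagnostic sketch: hash the shared libraries the TensorRT
# runtime depends on, so the output can be diffed between a device that
# loads the engine and one that does not. Paths below are assumptions for
# a Jetson/JetPack install -- verify them on your own device.
import hashlib
import os


def fingerprint(path):
    """Return (sha256 hex digest, size in bytes) of a file, or None if absent."""
    if not os.path.isfile(path):
        return None
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest(), os.path.getsize(path)


# Assumed locations on an aarch64 JetPack image -- adjust as needed.
LIBS = [
    "/usr/lib/aarch64-linux-gnu/libnvinfer.so.7",
    "/usr/lib/aarch64-linux-gnu/libcudnn.so.8",
    "/usr/local/cuda/lib64/libcudart.so",
]

if __name__ == "__main__":
    for lib in LIBS:
        print(lib, fingerprint(lib))
```

Running this on both the working and the failing device and diffing the output should reveal whether the two "identical" devices really have identical TensorRT/CUDA/cuDNN libraries.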

thanks,
M.

Environment

TensorRT Version: 7.1.3
GPU Type: Jetson TX2 (256-core)
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Linux aarch64 - JetPack-2.5.0
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi @mgalimberti,

We request you to post your query on the DeepStream forum. You may get better help there.

Thank you.

Hi,
It seems we found a tricky low-level library mismatch on some devices; after resolving the mismatch, TRT engine loading appears to work now.
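For anyone hitting the same symptom: a mismatch like the one described above becomes visible once you compare per-device library reports. A minimal sketch follows; the report format (library path mapped to a checksum or version string) and all values are my own assumptions for illustration.

```python
# Hypothetical helper: given two {library_path: checksum} reports captured
# on a working and a failing device, list the libraries that differ or are
# missing -- candidates for the kind of low-level mismatch described above.
def diff_reports(good, bad):
    mismatches = []
    for path in sorted(set(good) | set(bad)):
        a, b = good.get(path), bad.get(path)
        if a != b:
            mismatches.append((path, a, b))
    return mismatches


# Example with made-up checksums:
good = {"libnvinfer.so.7": "aaa111", "libcudart.so": "ccc333"}
bad = {"libnvinfer.so.7": "bbb222", "libcudart.so": "ccc333"}
for path, a, b in diff_reports(good, bad):
    print(f"{path}: working={a} failing={b}")
# -> libnvinfer.so.7: working=aaa111 failing=bbb222
```

Any line printed by this diff points at a library that differs between the two devices, which is where we found our mismatch.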

So I'll stop here. If we run into more issues, we'll post on the DeepStream forum you suggested.

Thank you for the support,
M.