CUDNN_STATUS_EXECUTION_FAILED error on Orin

When I convert an ONNX model to a TensorRT engine on Orin inside Docker, it fails with the following error:

E0208 17:12:14.167093 1964448 tensorrt_logger.h:28] 1: [convolutionRunner.cpp::executeConv::508] Error Code 1: Cudnn (CUDNN_STATUS_EXECUTION_FAILED)
E0208 17:12:14.177323 1964448 tensorrt_logger.h:28] 2: [builder.cpp::buildSerializedNetwork::620] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

It looks like the error occurs in the buildSerializedNetwork function.

But when I run the same conversion outside Docker, it finishes successfully.
I checked the CUDA and TensorRT libraries being linked, and they are the same.
My base image is nvcr.io/nvidia/l4t-base:r34.1.1
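One way to compare the linked libraries inside and outside the container is to diff the `ldd` output of the conversion binary in both environments. A minimal sketch (using `trtexec` as the probed binary is an assumption; the `ldd` output below is stubbed with sample strings so the sketch is self-contained):

```shell
# On a real system, capture the actual output in each environment with:
#   ldd "$(which trtexec)" | grep -E 'libnvinfer|libcudnn|libcudart' | sort
# Here we stub both captures with sample strings for illustration.
host='libcudnn.so.8 => /usr/lib/aarch64-linux-gnu/libcudnn.so.8
libnvinfer.so.8 => /usr/lib/aarch64-linux-gnu/libnvinfer.so.8'
container='libcudnn.so.8 => /usr/lib/aarch64-linux-gnu/libcudnn.so.8
libnvinfer.so.8 => /usr/lib/aarch64-linux-gnu/libnvinfer.so.8'

# Diff the two captures; any difference points at a library mismatch.
if diff <(echo "$host") <(echo "$container") > /dev/null; then
  status="match"
else
  status="differ"
fi
echo "linked libraries $status"
```

Note that matching `ldd` paths alone doesn't rule out a version mismatch: the same `libcudnn.so.8` soname can resolve to different patch releases inside and outside the container.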

My environment is:
tensorrt: 8.4.0.11
cuda: 11.4
cudnn: 8.3.2.49
l4t: 5.0.1

Hi,

Which platform do you use?
JetPack 5.x doesn’t support Nano.

Thanks.

Orin. I have solved this problem.
The cause was that the cuDNN version inside Docker didn't match the CUDA version.
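For anyone hitting the same mismatch: the installed cuDNN version can be read from its version header and compared against what the CUDA toolkit expects. A minimal sketch (on a real Jetson the header typically lives at /usr/include/cudnn_version.h; that path is an assumption, and the header content is inlined here so the sketch is self-contained):

```shell
# Sample content of cudnn_version.h; on a real system read the file instead:
#   header=$(cat /usr/include/cudnn_version.h)
header='#define CUDNN_MAJOR 8
#define CUDNN_MINOR 3
#define CUDNN_PATCHLEVEL 2'

# Extract the version macros with sed.
major=$(echo "$header" | sed -n 's/#define CUDNN_MAJOR \([0-9]*\)/\1/p')
minor=$(echo "$header" | sed -n 's/#define CUDNN_MINOR \([0-9]*\)/\1/p')
patch=$(echo "$header" | sed -n 's/#define CUDNN_PATCHLEVEL \([0-9]*\)/\1/p')
echo "cuDNN version: ${major}.${minor}.${patch}"
```

Running the same check on the host and inside the container makes a version mismatch immediately visible; the CUDA toolkit version can be read with `nvcc --version` for comparison against NVIDIA's cuDNN support matrix.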

Good to know it works now.
Thanks for the feedback.

