[hardwareContext.cpp::configure::92] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR)

Description

I am running inference using the API: virtual bool execute(int batchSize, void** bindings) noexcept = 0;
On a few of our systems we are getting an exception: [hardwareContext.cpp::configure::92] Error Code 1: Cudnn (CUDNN_STATUS_MAPPING_ERROR).
The TRT engine was generated on other, identical HW.

I would like to understand what area this exception relates to, so I know where to look for the cause.
Can we assume that it is a mismatch between this HW and the other HW where the engine was generated?

Environment

JetPack 4.6
Xavier JCB
Ubuntu 18.04.5 LTS (GNU/Linux 4.9.253-tegra aarch64)
libcudnn_cnn_infer.so.8.2.1
libnvinfer.so.8.0.1
tegra/libcuda.so.1.1
GPU@59C

Hi @anya.katz ,
Since TRT engines are hardware-dependent, you should generate the engine on the Xavier directly.
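On JetPack, the trtexec tool ships with TensorRT and can build the engine on the device itself, so the plan matches the exact GPU, driver, and cuDNN stack. A minimal sketch, assuming the network is available as an ONNX file (model.onnx and model.engine are placeholder names):

```shell
# Build the engine on the Xavier itself; /usr/src/tensorrt/bin is the
# standard trtexec location on JetPack installs.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --verbose
```

The --verbose flag also produces the detailed build log that is useful to attach if the failure reappears.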

Thanks

Thank you for the answer. My main question is whether the exception I received can be explained by a minor difference in hardware?

Could you please generate the TRT engine on the Xavier directly, try inference again, and let us know if you still face this issue.

The issue does not reproduce easily; it occurs only once every few weeks.
So far, the TRT engine generated directly on the Xavier has not failed.

Thank you for the confirmation.

If you face this issue again, please share a model that reproduces it along with the verbose logs.
