trtexec model conversion fails with “Unable to determine GPU memory usage”

All of my models fail to convert now. I followed the steps in this link:

I downloaded the TensorRT 8.5.1 .deb package from your website, ran “dpkg -i xxx”, then apt update, then apt install.

Then I noticed the problem with trtexec:
[12/06/2022-23:46:22] [I] TensorRT version: 8.5.1
[12/06/2022-23:46:23] [W] [TRT] Unable to determine GPU memory usage
[12/06/2022-23:46:23] [W] [TRT] Unable to determine GPU memory usage
[12/06/2022-23:46:23] [I] [TRT] [MemUsageChange] Init CUDA: CPU +8, GPU +0, now: CPU 20, GPU 0 (MiB)
[12/06/2022-23:46:23] [W] [TRT] CUDA initialization failure with error: 222. Please check your CUDA installation: Installation Guide Linux :: CUDA

The message suggested checking the CUDA installation, so I reinstalled the CUDA toolkit, but it was still broken. Today I also installed a newer cuDNN. All in vain.
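In case it helps with debugging, here is a small ctypes sketch I put together (my own, not from NVIDIA's docs; the function names come from the CUDA driver API). It calls cuInit() directly, so the error 222 above can be reproduced without trtexec in the way:

```python
# Diagnostic sketch: call the CUDA driver's cuInit() via ctypes to see the
# raw CUresult code that trtexec reported (e.g. 222), plus its error string.
import ctypes
import ctypes.util

def cuda_init_status():
    """Return (code, message) from cuInit(0); code is None if libcuda is absent."""
    name = ctypes.util.find_library("cuda")  # looks for libcuda.so (the driver)
    if name is None:
        return None, "libcuda not found - GPU driver missing on this machine"
    lib = ctypes.CDLL(name)
    code = lib.cuInit(0)  # CUresult cuInit(unsigned int Flags)
    if code == 0:
        return 0, "CUDA driver initialized OK"
    msg = ctypes.c_char_p()
    # cuGetErrorString maps a CUresult (such as 222) to a readable name
    lib.cuGetErrorString(code, ctypes.byref(msg))
    return code, (msg.value or b"unknown error").decode()

print(cuda_init_status())
```

If this fails with the same code outside of TensorRT, the problem is in the driver/toolkit layer rather than in the TensorRT packages themselves.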

What do you mean by “the packages in JetPack”? I didn’t change JetPack this time. Are you suggesting I shouldn’t install TensorRT myself, and that installing JetPack as a whole is safer?
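For context, this is the sketch I use to see which versions dpkg currently has installed, so mismatches between JetPack and a hand-installed TensorRT are visible. The package names below are my assumptions; adjust as needed:

```python
# Hedged helper: query dpkg for the installed versions of the relevant
# packages. Prints None for anything that is absent.
import shutil
import subprocess

def pkg_version(name):
    """Installed version of a Debian package, or None if absent/not Debian."""
    if shutil.which("dpkg-query") is None:
        return None  # not a Debian/Ubuntu system
    r = subprocess.run(["dpkg-query", "-W", "-f=${Version}", name],
                       capture_output=True, text=True)
    if r.returncode != 0:
        return None  # package not installed
    return r.stdout.strip() or None

# Package names are assumptions (nvidia-jetpack is the JetPack meta-package)
for pkg in ("nvidia-jetpack", "tensorrt", "libnvinfer8", "libcudnn8"):
    print(f"{pkg}: {pkg_version(pkg)}")
```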

By the way, we installed JetPack R35.1 in August, which ships TensorRT 8.4.1. But nsys-ui didn’t show any DLA activity, even with “nsys profile --accelerator-mode=nvmedia”. Your documentation (Developer Guide :: NVIDIA Deep Learning TensorRT Documentation) does show DLA activity, and since it uses TensorRT 8.5.1, I hoped upgrading TensorRT would help. Regretfully, everything went backward instead.

What’s your latest JetPack version? Maybe I should install JetPack as a whole instead of TensorRT/CUDA/cuDNN separately? Do I also need to upgrade the graphics driver?

By the way, all my attempts are logged here:

Many thanks!