Orin32: after upgrading TensorRT from 8.4.1 to 8.5.1, trtexec no longer works

Hi there,

I upgraded TensorRT on my Orin32 box from 8.4.1 to 8.5.1, and now trtexec no longer works. I followed the steps on your website for the TensorRT install:

After the install, trtexec can't determine GPU memory usage. So I also followed the CUDA install instructions in the link above, but the problem is still there.

I rebooted the box and the issue persists.
The same failure occurs for many models that worked under the earlier TensorRT 8.4.1.

Thanks!

Here is the log file from running the trtexec command on a MobileNet model:
mobilenet.log (8.1 KB)

Key error below:
[12/06/2022-23:46:22] [I] TensorRT version: 8.5.1
[12/06/2022-23:46:23] [W] [TRT] Unable to determine GPU memory usage
[12/06/2022-23:46:23] [W] [TRT] Unable to determine GPU memory usage
[12/06/2022-23:46:23] [I] [TRT] [MemUsageChange] Init CUDA: CPU +8, GPU +0, now: CPU 20, GPU 0 (MiB)
[12/06/2022-23:46:23] [W] [TRT] CUDA initialization failure with error: 222. Please check your CUDA installation: Installation Guide Linux :: CUDA Toolkit Documentation
[12/06/2022-23:46:23] [E] Builder creation failed
[12/06/2022-23:46:23] [E] Failed to create engine from model or file.
[12/06/2022-23:46:23] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8501] # /usr/src/tensorrt/bin/trtexec --onnx=mobilenetv2_224x224.onnx --saveEngine=m.engine --allowGPUFallback
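One way to narrow this down might be to call cuInit() directly, outside of trtexec, and see whether plain CUDA initialization fails with the same code. Below is a minimal sketch using Python's ctypes; the error-name table is my assumption based on the CUDA driver API headers, where code 222 corresponds to CUDA_ERROR_UNSUPPORTED_PTX_VERSION.

```python
# Minimal probe that calls cuInit() via the CUDA driver API, bypassing
# TensorRT, to check whether plain CUDA initialization fails the same way.
import ctypes

# Partial CUresult table (assumption from cuda.h); only codes relevant
# to this failure are listed.
CU_ERROR_NAMES = {
    0: "CUDA_SUCCESS",
    100: "CUDA_ERROR_NO_DEVICE",
    222: "CUDA_ERROR_UNSUPPORTED_PTX_VERSION",
}

def cu_error_name(code: int) -> str:
    """Map a CUresult code to its symbolic name; unknown codes fall through."""
    return CU_ERROR_NAMES.get(code, f"unknown CUresult {code}")

def check_cuda_init() -> str:
    """Load the driver library and report what cuInit(0) returns."""
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return "libcuda.so.1 not found -- driver libraries are not installed"
    result = libcuda.cuInit(0)  # returns a CUresult integer
    return cu_error_name(result)

if __name__ == "__main__":
    print(check_cuda_init())
```

If this reports the same error code that trtexec prints, the problem is in the CUDA/driver installation itself rather than in TensorRT.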

=== my machine =========
dpkg -l |grep tensor
ii nv-tensorrt-local-repo-ubuntu2004-8.5.1-cuda-11.8 1.0-1 arm64 nv-tensorrt-local repository configuration files
ii tensorrt 8.5.1.7-1+cuda11.8 arm64 Meta package for TensorRT
hi tensorrt-dev 8.5.1.7-1+cuda11.8 arm64 Meta package for TensorRT development libraries
ii tensorrt-libs 8.5.1.7-1+cuda11.8 arm64 Meta package for TensorRT runtime libraries
inceptio@orin32:~/helen/models/cnn/mobilenet_v2$ dpkg -l |grep -i tensor
ii graphsurgeon-tf 8.5.1-1+cuda11.8 arm64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 8.5.1-1+cuda11.8 arm64 TensorRT binaries
ii libnvinfer-dev 8.5.1-1+cuda11.8 arm64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 8.5.1-1+cuda11.8 arm64 TensorRT plugin libraries
ii libnvinfer-plugin8 8.5.1-1+cuda11.8 arm64 TensorRT plugin libraries
ii libnvinfer-samples 8.5.1-1+cuda11.8 all TensorRT samples
ii libnvinfer8 8.5.1-1+cuda11.8 arm64 TensorRT runtime libraries
ii libnvonnxparsers-dev 8.5.1-1+cuda11.8 arm64 TensorRT ONNX libraries
ii libnvonnxparsers8 8.5.1-1+cuda11.8 arm64 TensorRT ONNX libraries
ii libnvparsers-dev 8.5.1-1+cuda11.8 arm64 TensorRT parsers libraries
ii libnvparsers8 8.5.1-1+cuda11.8 arm64 TensorRT parsers libraries
ii nv-tensorrt-local-repo-ubuntu2004-8.5.1-cuda-11.8 1.0-1 arm64 nv-tensorrt-local repository configuration files
ii onnx-graphsurgeon 8.5.1-1+cuda11.8 arm64 ONNX GraphSurgeon for TensorRT package
ii python3-libnvinfer 8.5.1-1+cuda11.8 arm64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 8.5.1-1+cuda11.8 arm64 Python 3 development package for TensorRT
ii tensorrt 8.5.1.7-1+cuda11.8 arm64 Meta package for TensorRT
hi tensorrt-dev 8.5.1.7-1+cuda11.8 arm64 Meta package for TensorRT development libraries
ii tensorrt-libs 8.5.1.7-1+cuda11.8 arm64 Meta package for TensorRT runtime libraries
ii uff-converter-tf 8.5.1-1+cuda11.8 arm64 UFF converter for TensorRT package

==== $ tegrastats
12-07-2022 00:03:27 RAM 1164/30536MB (lfb 6994x4MB) SWAP 0/15268MB (cached 0MB) CPU [1%@729,0%@729,0%@729,0%@1190,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729,0%@729] EMC_FREQ 0% GR3D_FREQ 0% CV0@-256C CPU@44.812C Tboard@33C SOC2@40.937C Tdiode@35.25C SOC0@41.812C CV1@-256C GPU@-256C tj@45.156C SOC1@40.687C CV2@-256C

=== I did "dpkg -l | grep cuda", and here is the log file:
cuda.log (12.2 KB)

=== my environment variables:
export.log (3.0 KB)

==== I also updated libcudnn to 8.7.0.48, but the same problem is still there.

=== It seems I may need to update the graphics driver as well. Which is the right choice for Orin32 on this page? Official Drivers | NVIDIA
Thanks. Is it really required?

=== I also set the environment variables below, but it made no difference.
export LD_LIBRARY_PATH="/usr/lib/llvm-10/lib::/usr/local/cuda-11.8/lib64"
export CMAKE_CUDA_COMPILER="/usr/local/cuda-11.8/bin/gcc"
export CUDACXX="/usr/local/cuda-11.8/bin/nvcc"
export CUDA_BIN_PATH="/usr/local/cuda-11.8/bin"
export CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda-11.8"
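For reference, a more conventional CUDA environment would point the compiler variables at nvcc rather than gcc and put the CUDA bin directory on PATH. A sketch, assuming the default /usr/local/cuda-11.8 install prefix:

```shell
# Typical CUDA 11.8 environment; paths assume the default install prefix.
export PATH="/usr/local/cuda-11.8/bin:${PATH}"
export LD_LIBRARY_PATH="/usr/local/cuda-11.8/lib64:${LD_LIBRARY_PATH}"
# CMake's CUDA compiler should be nvcc, not gcc:
export CMAKE_CUDA_COMPILER="/usr/local/cuda-11.8/bin/nvcc"
export CUDACXX="/usr/local/cuda-11.8/bin/nvcc"
export CUDA_TOOLKIT_ROOT_DIR="/usr/local/cuda-11.8"
```

This alone probably does not explain the cuInit failure, but CMAKE_CUDA_COMPILER set to gcc would break any CMake-based CUDA build.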

Never mind. I decided to remove all the new packages and go back to JetPack 5.0.2.