nv@orin-1:/usr/src/TensorRT-8.5.1.7/bin$ ll
total 10908
drwxr-xr-x 4 nv nv 4096 Feb 21 11:54 ./
drwxr-xr-x 4 nv nv 4096 Oct 28 2022 ../
drwxrwxr-x 3 nv nv 4096 Feb 21 11:54 chobj/
drwxrwxr-x 3 nv nv 4096 Feb 21 11:54 dchobj/
-rwxrwxr-x 1 nv nv 2458312 Feb 21 11:54 sample_onnx_mnist*
-rwxrwxr-x 1 nv nv 8166256 Feb 21 11:54 sample_onnx_mnist_debug*
-rwxr-xr-x 1 nv nv 520864 Oct 28 2022 trtexec*
nv@orin-1:/usr/src/TensorRT-8.5.1.7/bin$ ./sample_onnx_mnist
&&&& RUNNING TensorRT.sample_onnx_mnist [TensorRT v8501] # ./sample_onnx_mnist
[02/21/2024-15:03:04] [I] Building and running a GPU inference engine for Onnx MNIST
[02/21/2024-15:03:04] [W] [TRT] Unable to determine GPU memory usage
[02/21/2024-15:03:04] [W] [TRT] Unable to determine GPU memory usage
[02/21/2024-15:03:04] [I] [TRT] [MemUsageChange] Init CUDA: CPU +9, GPU +0, now: CPU 20, GPU 0 (MiB)
[02/21/2024-15:03:04] [W] [TRT] CUDA initialization failure with error: 222. Please check your CUDA installation: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
&&&& FAILED TensorRT.sample_onnx_mnist [TensorRT v8501] # ./sample_onnx_mnist
nv@orin-1:/usr/src/TensorRT-8.5.1.7/bin$
nv@orin-1:/usr/src/TensorRT-8.5.1.7/bin$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:43:33_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
nv@orin-1:/usr/src/TensorRT-8.5.1.7/bin$ cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 7
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
/* cannot use constexpr here since this is a C-only file */
nv@orin-1:/usr/src/TensorRT-8.5.1.7/bin$
I have installed the following software:
CUDA 11.8
cuDNN 8.9.7
TensorRT 8.5.1.7
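For what it's worth, error 222 corresponds to cudaErrorUnsupportedPtxVersion in the CUDA runtime API, which usually indicates that the binary was built with a newer CUDA toolkit than the installed driver supports. A minimal sketch to compare the two versions on the device (the file name and build line are my own; compile with nvcc and run on the Orin):

```cuda
// version_check.cu -- a sketch to diagnose driver/runtime mismatch.
// Assumed build line: nvcc version_check.cu -o version_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVer = 0, runtimeVer = 0;

    // Highest CUDA version the installed driver supports.
    cudaError_t err = cudaDriverGetVersion(&driverVer);
    if (err != cudaSuccess) {
        std::printf("cudaDriverGetVersion failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // CUDA runtime version this binary was linked against.
    err = cudaRuntimeGetVersion(&runtimeVer);
    if (err != cudaSuccess) {
        std::printf("cudaRuntimeGetVersion failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    // Versions are encoded as major*1000 + minor*10, e.g. 11080 for CUDA 11.8.
    std::printf("driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
                driverVer / 1000, (driverVer % 100) / 10,
                runtimeVer / 1000, (runtimeVer % 100) / 10);

    // If the runtime (i.e. the toolkit the sample was rebuilt with) is newer
    // than what the driver supports, error 222 from the sample is expected.
    return 0;
}
```

If the driver reports a lower CUDA version than 11.8, that mismatch would explain the failure; on Jetson the driver ships with JetPack, so the fix is typically aligning the JetPack/CUDA versions rather than upgrading the driver separately.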