Description: I am using a Jetson Xavier NX with JetPack 5.1.1, which comes with CUDA 11.4.19, TensorRT 8.5.2, and cuDNN 8.6.0 pre-installed (reference: JetPack 5.1.1). However, I ran into an issue when trying to use the Python API to work with .trt models: I am unable to import the tensorrt module.
To address this, I downloaded the TensorRT wheel file from the official NVIDIA website (TensorRT 8.x download page), but I noticed that it only provides ARM SBSA builds, with no version listed specifically for CUDA 11.4. I downloaded the build closest to CUDA 11.4, but I am unsure whether the ARM SBSA series of TensorRT is compatible with the Jetson NX.
After downloading TensorRT-22.214.171.124.Ubuntu-20.04.aarch64-gnu.cuda-11.8.cudnn8.6.tar.gz, I was able to import tensorrt using the wheel file it contains. However, I hit another issue: the build intended for CUDA 11.8 does not work properly with the CUDA 11.4 installed on my Jetson NX.
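To see which CUDA runtime is actually loadable on the device, I use a small ctypes probe. This is just a sketch: the bare libcudart.so name is an assumption and may need to be the full soname (e.g. libcudart.so.11.0) depending on the install.

```python
import ctypes


def cuda_runtime_version(libname="libcudart.so"):
    """Return the CUDA runtime version code (e.g. 11040 for CUDA 11.4),
    or None if the runtime library cannot be loaded at all."""
    try:
        cudart = ctypes.CDLL(libname)
    except OSError:
        # Library not found or not loadable in this environment.
        return None
    ver = ctypes.c_int()
    # cudaError_t cudaRuntimeGetVersion(int *runtimeVersion)
    cudart.cudaRuntimeGetVersion(ctypes.byref(ver))
    return ver.value


print(cuda_runtime_version())
```

A mismatch between this value and what the TensorRT build expects would be consistent with the behavior I am seeing.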
Currently, in my environment, Torch is able to detect CUDA: torch.cuda.is_available() returns True. However, whenever I try to run any CUDA-related task through TensorRT or ONNX Runtime, it fails with the error message "CUDA initialization failure with error 222. Please check your CUDA installation."
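For reference, this is a quick way to see which of the relevant packages are importable at all and what versions they report; the three package names are just the ones involved in my setup.

```python
import importlib
import importlib.util


def probe(name):
    """Report whether a package is importable and, if so, its version."""
    if importlib.util.find_spec(name) is None:
        return f"{name}: not installed"
    mod = importlib.import_module(name)
    return f"{name}: {getattr(mod, '__version__', 'unknown version')}"


for pkg in ("torch", "tensorrt", "onnxruntime"):
    print(probe(pkg))
```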
I am unsure how to proceed from here. My objective is to build TensorRT engines using the Python API on the Jetson NX. Should I reflash JetPack, or are there other potential solutions?
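For context, this is roughly the kind of engine-building code I am trying to run. It is a sketch following the TensorRT 8.x Python API; the ONNX model path is hypothetical, and the 1 GiB workspace limit is just an illustrative value.

```python
def build_engine(onnx_path, engine_path):
    """Parse an ONNX model and serialize a TensorRT engine to disk."""
    import tensorrt as trt  # provided by JetPack / the TensorRT wheel

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch network, as required for ONNX models in TensorRT 8.x.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    # Illustrative 1 GiB workspace limit.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

Usage on the Jetson would be something like build_engine("model.onnx", "model.trt"); it is this step that currently fails with error 222.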
Thank you for your assistance!