TensorRT Inference: Cuda initialization failure

Hi,

I am currently working on YOLOv5 TensorRT inference code. I have created a sample YOLOv5 custom model using the TensorRT (7.1.3) C++ API. The custom model works fine on NVIDIA RTX 2060, RTX 5000, and GTX 1060 GPUs. But when I try to port the same code to the Jetson Xavier NX platform (JetPack 4.5), I get an error during TensorRT engine creation. Please find the console log below for reference:

libnvrm_gpu.so: NvRmGpuLibOpen failed
Creating TensorRT engine..
[01/31/2021-20:23:22] [E] [TRT] CUDA initialization failure with error 999. Please check your CUDA installation:  http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Segmentation fault (core dumped)

The error suggests my CUDA installation is broken, but from my understanding CUDA 10.2 is part of JetPack 4.5, right? What does error code 999 mean? Are any additional steps required for TensorRT inference on the Jetson platform? How do I resolve this runtime issue?

Any help would be greatly appreciated, thanks in advance.

Hi,

CUDA error 999 indicates an unknown error; see the error codes section of the CUDA Runtime API documentation in the CUDA Toolkit.
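Before digging into TensorRT, it can help to confirm whether the CUDA runtime itself initializes on the device. Below is a minimal standalone sketch of such a check (the file name and structure are just an example, assuming only the CUDA runtime that ships with JetPack):

// cuda_check.cpp - minimal CUDA runtime sanity check.
// If this fails with error 999, the problem is in the CUDA/driver
// setup itself, not in TensorRT.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("CUDA initialization failed: %s\n",
                    cudaGetErrorString(err));
        return 1;
    }
    std::printf("Found %d CUDA device(s)\n", count);
    return 0;
}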

Here are two common causes for your reference:

1. Please note that TensorRT engines are not portable.
You cannot use an engine file serialized on another platform or with a different TensorRT version; rebuild and serialize the engine on the Jetson itself (see the first sketch after this list).

2. Please check that you have added the Xavier NX GPU architecture to your build first.
Xavier's GPU architecture is sm_72 (see the second sketch after this list for a runtime check).
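For cause 1, here is a hedged sketch of what rebuilding on the target device can look like with the TensorRT 7 C++ API, assuming the YOLOv5 model has been exported to ONNX. The file names "yolov5.onnx" and "yolov5.engine" and the workspace size are placeholders, not values from your setup:

// build_engine.cpp - build and serialize a TensorRT engine on the
// same device (and TensorRT version) that will run inference.
#include <cstdio>
#include <cstdint>
#include <fstream>
#include <NvInfer.h>
#include <NvOnnxParser.h>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

int main() {
    Logger logger;
    auto builder = nvinfer1::createInferBuilder(logger);
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser = nvonnxparser::createParser(*network, logger);
    if (!parser->parseFromFile("yolov5.onnx",  // placeholder path
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::printf("Failed to parse ONNX model\n");
        return 1;
    }

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MiB, adjust as needed
    auto engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine) {
        std::printf("Engine build failed\n");
        return 1;
    }

    // Serialize for later runs on this same device/TensorRT version only.
    auto serialized = engine->serialize();
    std::ofstream out("yolov5.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              serialized->size());
    return 0;
}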
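For cause 2, a quick runtime check of the device's compute capability (again just a sketch; Xavier NX should report 7.2). If you also compile custom CUDA kernels or plugins, make sure sm_72 is included in your nvcc flags, e.g. -gencode arch=compute_72,code=sm_72:

// arch_check.cpp - query the GPU's compute capability.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop{};
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("Failed to query device 0\n");
        return 1;
    }
    std::printf("Device: %s, compute capability %d.%d\n",
                prop.name, prop.major, prop.minor);
    return 0;
}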

Thanks.
