Hi,
I am currently working on YOLOv5 TensorRT inference code. I have built a sample YOLOv5 custom model using the TensorRT (7.1.3) C++ API. The model works fine on NVIDIA RTX 2060, RTX 5000, and GTX 1060 GPUs. But when I port the same code to the Jetson Xavier NX platform (JetPack 4.5), I get an error during TensorRT engine creation. Please see the console log below for reference:
libnvrm_gpu.so: NvRmGpuLibOpen failed
Creating TensorRT engine..
[01/31/2021-20:23:22] [E] [TRT] CUDA initialization failure with error 999. Please check your CUDA installation: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
Segmentation fault (core dumped)
The error suggests my CUDA installation is broken, but as far as I understand, CUDA 10.2 ships as part of JetPack 4.5, right? What does error code 999 mean? Are any additional steps required for TensorRT inference on the Jetson platform? How do I resolve this runtime issue?
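To narrow down whether the failure is in TensorRT or in CUDA itself, a minimal check like the following might help. It forces CUDA runtime initialization the same way TensorRT does at engine-creation time and prints the error string for whatever code comes back (999 corresponds to cudaErrorUnknown in the CUDA 10.x runtime). This is only a diagnostic sketch I put together, not part of the original code, and it obviously needs a CUDA-capable device to run:

```cpp
#include <cstdio>
#include <cuda_runtime_api.h>

int main() {
    // cudaGetDeviceCount forces CUDA runtime initialization, so it
    // surfaces the same failure TensorRT hits during engine creation.
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        // Error 999 is cudaErrorUnknown. On Jetson, the preceding
        // "NvRmGpuLibOpen failed" message often points at a driver or
        // permission problem (e.g. the user not being in the "video"
        // group) rather than a broken CUDA install.
        std::printf("CUDA init failed: error %d (%s)\n",
                    static_cast<int>(err), cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA OK: %d device(s) visible\n", deviceCount);
    return 0;
}
```

Something like `g++ check_cuda.cpp -o check_cuda -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart` should build it on a JetPack install (the filename is mine). If this small program also fails with 999, the issue sits below TensorRT, in the CUDA driver/runtime setup on the board, and that is where I would look first.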
Any help would be greatly appreciated, thanks in advance.