Randomly getting `createInferRuntime: Error Code 6: API Usage Error (CUDA Initialization failure with error 3)`

Description

I’m using TensorRT with an engine I built using the C++ API. When I try to load the model with nvinfer1::createInferRuntime(), I randomly get the error: createInferRuntime: Error Code 6: API Usage Error (CUDA initialization failure with error: 3). Since this is essentially the first TensorRT call I make, I suspect the problem is something in my environment rather than in the engine itself.

The only potentially relevant detail is that I’ve been porting my CMake files and dependencies from Linux to Windows, which required me to change how I enable CUDA. Initially I was using find_package(CUDAToolkit) and then linking against CUDA::cudart. Now I only use enable_language(CUDA).
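For context, here is a sketch of the two setups described above (the target name my_app is illustrative). One difference worth noting: with enable_language(CUDA), nvcc links the static CUDA runtime (cudart_static) by default, whereas CUDA::cudart is the shared runtime. Since TensorRT itself loads the shared runtime, mixing the two is a plausible source of intermittent initialization failures; CMake's CMAKE_CUDA_RUNTIME_LIBRARY variable lets you force the shared one.

```cmake
# Before (Linux): explicit toolkit discovery, linking the shared CUDA runtime.
find_package(CUDAToolkit REQUIRED)
target_link_libraries(my_app PRIVATE CUDA::cudart)

# After (Windows): CUDA as a first-class language. Note that nvcc links the
# *static* runtime (cudart_static) by default unless told otherwise:
enable_language(CUDA)
set(CMAKE_CUDA_RUNTIME_LIBRARY Shared)  # prefer cudart over cudart_static
```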

Does anyone have an idea of what might cause this error, or if there’s a way to get more information about what’s triggering it?
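One way to get more information: the "error 3" in the message corresponds to the CUDA runtime error cudaErrorInitializationError, so you can probe the CUDA runtime directly before any TensorRT call and print its own error string. A minimal sketch, assuming only the CUDA Runtime API:

```cpp
#include <cstdio>
#include <cuda_runtime_api.h>

// Probe CUDA before touching TensorRT: cudaGetDeviceCount() forces runtime
// initialization, so it surfaces the same underlying failure with a
// human-readable name and description.
int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "CUDA init failed: %d (%s): %s\n",
                     static_cast<int>(err), cudaGetErrorName(err),
                     cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA OK, %d device(s) visible\n", count);
    return 0;
}
```

If this small program fails intermittently too, the problem is in the driver/runtime environment rather than in TensorRT.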

Environment

TensorRT Version: 10.11, 10.12
GPU Type: NVIDIA GeForce RTX 2080
Nvidia Driver Version: 575.64
CUDA Version: 12.9
cuDNN Version: 9.10.2.21-1
Operating System + Version: Linux arch2080x 6.15.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Tue, 10 Jun 2025 21:32:33 +0000 x86_64 GNU/Linux

Steps To Reproduce

Call nvinfer1::createInferRuntime().
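For completeness, a minimal self-contained repro, assuming TensorRT 10 headers (createInferRuntime() requires an nvinfer1::ILogger implementation):

```cpp
#include <cstdio>
#include <NvInfer.h>

// Minimal logger required by createInferRuntime().
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::fprintf(stderr, "[TRT] %s\n", msg);
    }
};

int main() {
    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    if (!runtime) {
        std::fprintf(stderr, "createInferRuntime failed\n");
        return 1;
    }
    delete runtime;  // TensorRT 10: destroy() was removed; use delete
    return 0;
}
```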