Hardware Platform: NVIDIA GPU (GeForce RTX 5080, running via WSL2 on PC)
DeepStream Version: 8.0.0
JetPack Version: N/A (running on PC, not Jetson)
TensorRT Version:
- Initially detected: 10.9
- After running DeepStream apps continuously or building models: changes to 10.3 (verified via deepstream-app --version-all)
NVIDIA GPU Driver Version: 577.00
CUDA Driver Version: 12.9
CUDA Runtime Version: 12.8
cuDNN Version: 9.13
libNVWarp360 Version: 2.0.1d3
Issue Type: Bug
How to Reproduce the Issue:
- Install the DeepStream 8.0 Docker container on WSL2.
- Run the following command:
deepstream-app -c samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display.txt
- Initially, deepstream-app --version-all shows TensorRT 10.9.
- After running sample apps continuously or after building models, deepstream-app --version-all unexpectedly shows TensorRT 10.3.
- Eventually, the following error occurs:
ERROR: [TRT]: createInferRuntime: Error Code 6: API Usage Error (CUDA initialization failure with error: 35 In checkCudaInstallation at runtime/dispatch/runtime.cpp:689)
ERROR: [TRT]: [checkMacros.cpp::catchCudaError::226] Error Code 1: Cuda Runtime (In catchCudaError at common/dispatch/checkMacros.cpp:226)
Segmentation fault (core dumped)
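To make the version flip easier to capture in a report, the reported TensorRT line can be snapshotted before and after a run and compared. This is a minimal sketch: the `grep -i tensorrt` filter is an assumption about the `--version-all` output format, and the helper name is hypothetical.

```shell
# Hypothetical helper: compare two reported version strings and flag a change.
report_version_change() {
  [ "$1" = "$2" ] && echo "stable: $1" || echo "changed: $1 -> $2"
}

# Intended usage inside the container (deepstream-app assumed available):
#   before=$(deepstream-app --version-all | grep -i tensorrt)
#   ...run the sample pipeline...
#   after=$(deepstream-app --version-all | grep -i tensorrt)
#   report_version_change "$before" "$after"

# Demonstration with the versions observed in this report:
report_version_change "TensorRT 10.9" "TensorRT 10.3"
# prints: changed: TensorRT 10.9 -> TensorRT 10.3
```

Logging this pair alongside each failure would show whether the 10.3 reading always precedes the CUDA initialization error.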
Additional Notes:
- The TensorRT version reported inside the container downgrades on its own, without any manual changes.
- The error occurs intermittently: sometimes apps run fine, sometimes they fail immediately.
- Running on WSL2 with GPU passthrough (nvidia-smi confirms the RTX 5080 with driver 577.00).
- Possibly a version mismatch between the container libraries and the host driver/runtime.
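The mismatch suspicion can be checked directly: CUDA error 35 is cudaErrorInsufficientDriver, which indicates the runtime found a driver library older than it requires, and on WSL2 the host driver libraries are mounted into containers under /usr/lib/wsl/lib. A diagnostic sketch (paths assume the standard WSL2 + DeepStream container layout):

```shell
# Host driver libraries that WSL2 mounts into every container:
ls /usr/lib/wsl/lib/libcuda.so* 2>/dev/null || true

# Every TensorRT/CUDA runtime the dynamic linker can see; two different
# libnvinfer versions here would explain the 10.9 -> 10.3 flip:
ldconfig -p | grep -E 'libnvinfer|libcudart' || true

# Directories searched before the defaults, one per line:
echo "${LD_LIBRARY_PATH:-<unset>}" | tr ':' '\n'
```

If `ldconfig -p` lists both a 10.9 and a 10.3 libnvinfer, whichever apps load depends on search order, which would also account for the intermittent failures.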
Expected Behavior:
- The TensorRT version should remain stable (10.9, as initially reported).
- DeepStream apps should run consistently without CUDA/TensorRT initialization failures.