DeepStream 8.0 on Docker WSL – Segmentation Fault, CUDA Init Failure, and TensorRT Version Downgrade (10.9 → 10.3)

Hardware Platform: NVIDIA GPU (GeForce RTX 5080, running via WSL2 on PC)

DeepStream Version: 8.0.0

JetPack Version: N/A (running on PC, not Jetson)

TensorRT Version:

  • Initially detected: 10.9

  • After running DeepStream apps continuously or after building models: drops to 10.3 (verified via deepstream-app --version-all)

NVIDIA GPU Driver Version: 577.00

CUDA Driver Version: 12.9

CUDA Runtime Version: 12.8

cuDNN Version: 9.13

libNVWarp360 Version: 2.0.1d3

Issue Type: Bug

How to Reproduce the Issue:

  1. Install DeepStream 8.0 Docker on WSL2.

  2. Run the following command:

deepstream-app -c samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display.txt

  3. Initially, deepstream-app --version-all shows TensorRT 10.9.

  4. After running sample apps continuously or after building models, deepstream-app --version-all unexpectedly shows TensorRT 10.3.

  5. Eventually, the following error occurs:

ERROR: [TRT]: createInferRuntime: Error Code 6: API Usage Error (CUDA initialization failure with error: 35 In checkCudaInstallation at runtime/dispatch/runtime.cpp:689)
ERROR: [TRT]: [checkMacros.cpp::catchCudaError::226] Error Code 1: Cuda Runtime (In catchCudaError at common/dispatch/checkMacros.cpp:226)
Segmentation fault (core dumped)
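To timestamp exactly when the reported versions flip, a small watchdog can poll deepstream-app --version-all and diff the output. The sketch below is hypothetical (polling interval and the assumption that deepstream-app is on PATH inside the container are mine); the parser itself is pure and can be exercised on captured output:

```python
"""Hypothetical watchdog to pin down when the TensorRT/CUDA versions flip."""
import re
import subprocess
import time


def parse_versions(output: str) -> dict:
    """Extract 'Component Version: x.y' pairs from deepstream-app --version-all output."""
    versions = {}
    for line in output.splitlines():
        m = re.match(r"\s*(.+?) Version:\s*(\S+)", line)
        if m:
            versions[m.group(1).strip()] = m.group(2)
    return versions


def poll(interval_s: int = 300) -> None:
    """Log the versions every interval_s seconds; any change (e.g. TensorRT
    10.9 -> 10.3, or CUDA Driver -> 0.0) marks when the container lost the GPU."""
    last = None
    while True:
        out = subprocess.run(
            ["deepstream-app", "--version-all"],
            capture_output=True, text=True,
        ).stdout
        current = parse_versions(out)
        if last is not None and current != last:
            print(f"{time.ctime()}: version change detected: {last} -> {current}")
        last = current
        time.sleep(interval_s)
```

Correlating the logged timestamp with container uptime or model-build activity would help narrow down the trigger.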

Additional Notes:

  • The TensorRT version downgrades automatically inside the container without any manual changes.

  • The error occurs intermittently — sometimes apps run fine, sometimes fail immediately.

  • Running on WSL2 with GPU passthrough (nvidia-smi confirms RTX 5080 with driver 577.00).

  • Possible issue with version mismatch between container libs and host driver/runtime.
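One way a 10.9 → 10.3 flip could happen is if two TensorRT builds coexist on the loader path and the dynamic linker picks up a different one over time. This sketch scans for versioned libnvinfer files and warns on duplicates; the search directories are an assumption and should be adjusted to the actual container image:

```python
import glob
import os
import re

# Directories where TensorRT libraries typically land inside the DeepStream
# container; these paths are an assumption -- adjust to your image.
SEARCH_DIRS = ["/usr/lib/x86_64-linux-gnu", "/usr/local/tensorrt/lib"]


def lib_versions(filenames):
    """Map each fully-versioned libnvinfer filename to its version string,
    e.g. 'libnvinfer.so.10.9.0' -> '10.9.0' (unversioned symlinks are skipped)."""
    found = {}
    for name in filenames:
        m = re.match(r"libnvinfer\.so\.(\d+\.\d+(?:\.\d+)?)$", os.path.basename(name))
        if m:
            found[name] = m.group(1)
    return found


def scan():
    """List every libnvinfer build visible in SEARCH_DIRS and flag duplicates."""
    names = []
    for d in SEARCH_DIRS:
        names += glob.glob(os.path.join(d, "libnvinfer.so.*"))
    versions = lib_versions(names)
    for path, ver in sorted(versions.items()):
        print(path, "->", ver)
    if len(set(versions.values())) > 1:
        print("WARNING: multiple TensorRT versions on the loader path")
```

If the scan reports a single version, the mismatch is more likely on the driver/passthrough side than in the container's library layout.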

Expected Behavior:

  • TensorRT version should remain stable (10.9, as initially reported).

  • DeepStream apps should run consistently without CUDA/TensorRT initialization failures.

Additional Observations / Continuation:

  • After creating a new DeepStream 8.0 Docker container and running apps for several hours, or after converting a .pt model to ONNX, the custom DeepStream app fails with:
Unable to set device in gst_nvstreammux_change_state
** ERROR: <main:802>: Failed to set pipeline to PAUSED
Quitting
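For reference, the .pt → ONNX step mentioned above is typically done along these lines; this is a minimal sketch, assuming a standard PyTorch checkpoint, and the file paths, input shape, and opset version are illustrative placeholders, not values from the report:

```python
def export_to_onnx(pt_path="model.pt", onnx_path="model.onnx",
                   input_shape=(1, 3, 544, 960), opset=17):
    """Export a PyTorch checkpoint to ONNX for consumption by nvinfer.
    Paths, input shape, and opset are placeholders -- adapt to your model."""
    import torch  # imported lazily so the sketch is readable without torch installed

    model = torch.load(pt_path, map_location="cpu")
    model.eval()
    dummy = torch.zeros(*input_shape)  # dummy input used to trace the graph
    torch.onnx.export(
        model, dummy, onnx_path,
        input_names=["input"], output_names=["output"],
        opset_version=opset,
    )
    return onnx_path
```

nvinfer then builds the TensorRT engine from the ONNX file on first run, which is exactly the point where a failure would surface if the CUDA context in the container is already broken.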

Running deepstream-app --version-all at this stage shows:

deepstream-app version 8.0.0
DeepStreamSDK 8.0.0
CUDA Driver Version: 0.0
CUDA Runtime Version: 12.8
TensorRT Version: 10.9
cuDNN Version: 9.8
libNVWarp360 Version: 2.0.1d3

  • This indicates that the CUDA driver inside the container becomes unavailable after long runtime or model operations.

  • The loss of CUDA driver may be related to WSL2 GPU passthrough instability, DeepStream/TensorRT initialization, or container runtime handling of /dev/nvidia* devices.

  • This explains the intermittent TensorRT version mismatch (10.9 → 10.3) observed earlier.

Impact:

  • DeepStream apps fail to initialize pipelines.

  • TensorRT runtime becomes unstable.

  • Segmentation faults and CUDA initialization errors occur.

After starting the container, do you just run deepstream-app? Are there any other operations?

Is your installation environment exactly the same as that in the documentation?

The installation is correct and follows the documentation. Initially everything runs fine, but after keeping the container running for a few hours, or when I include a custom model (converted from .pt → ONNX → engine), the reported versions change unexpectedly (for example, TensorRT dropping to 10.3, or the CUDA Driver Version showing 0.0), and then the errors appear.

Thanks for your feedback, I'll try to reproduce it.

I set file-loop to true in the configuration file and ran it in Docker for several hours without reproducing the problem.
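For anyone reproducing this: the looping setup corresponds to enabling file-loop in the source group of the sample config. The values below mirror the stock source30 sample and may differ slightly in your DeepStream version; only file-loop is changed from the default:

```
[source0]
enable=1
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=15
file-loop=1
```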

Thanks for the reply. Now I have no issues in Docker.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.