Hello NVIDIA Devs,
I am trying to run DeepStream 8.0 inside a Docker container on WSL2 Ubuntu 24.04 with my RTX 4060 GPU. I can successfully load the TensorRT model, and the GPU is visible (nvidia-smi works) inside WSL2 and inside the container, but hardware decoding (NVDEC/NVMM) fails with the following error:
*** Inside cb_newpad name=video/x-raw
Opening in BLOCKING MODE
Error while setting IOCTL
Invalid control
S_EXT_CTRLS for CUDA_GPU_ID failed
Resetting source -1, attempts: 1
Here are my setup details:
- Windows 11 Build: 10.0.26100.7171
- WSL2 Version: 2.6.2.0
- WSL2 Kernel Version: 6.6.87.2-1
- WSLg Version: 1.0.71
- Direct3D Version: 1.611.1-81528511
- GPU: NVIDIA GeForce RTX 4060
- NVIDIA Driver on Windows: 576.52 (CUDA 12.9), clean installation, Game Ready
- Docker Version: 29.1.2, build 890dcca
- DeepStream 8.0 container: nvcr.io/nvidia/deepstream:8.0-gc-triton-devel
- TensorRT: working fine (the model engine loads successfully)
- Issue: NVDEC hardware decoding fails; frames are not decoded, and the pipeline cannot start.
- GPU visibility: nvidia-smi works correctly both in WSL2 and inside the container
- GStreamer plugin: nvv4l2decoder exists (gst-inspect-1.0 | grep nvv4l2decoder confirms it is installed)
Additional Info:
- NVDEC/NVMM plugins (nvv4l2decoder) appear not to be fully supported in WSL2, even though the GPU is visible and TensorRT works.
- On a native Ubuntu 24.04 machine, the exact same container works correctly with hardware decoding.
- Linux NVIDIA drivers are not installed in WSL2; only the Windows driver is installed, as recommended.
Question:
Is there a known workaround or solution to enable hardware decoding with DeepStream 8.0 on WSL2, or is this a limitation of WSL2’s V4L2/NVMM support?
Would using Ubuntu natively be the only way to get NVDEC hardware decoding working, or is there a container-side or WSL2 workaround?
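For reference, here is a minimal sketch of the two decode paths I am comparing (the filename `sample.h264` is a placeholder for my test stream). The hardware path using nvv4l2decoder is the one that fails with the IOCTL error above; the software path swapping in avdec_h264 and uploading to NVMM memory afterwards is the fallback I am considering if WSL2 cannot do NVDEC:

```shell
# Hardware decode path (fails in WSL2 with "Error while setting IOCTL"):
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! \
    nvv4l2decoder ! fakesink

# Software decode fallback (CPU decode, then convert into NVMM memory
# so downstream DeepStream elements such as nvinfer can still run on GPU):
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! avdec_h264 ! \
    videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! fakesink
```

If the software fallback is the only option on WSL2, I would also appreciate guidance on the expected CPU overhead for multiple streams.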
Thank you very much for your help!