DeepStream 6.3 Docker container on Windows Ubuntu WSL: deepstream-app fails with "Failed to create 'primary_gie'"

Please provide complete information as applicable to your setup.

• Hardware Platform (Windows 10 - Ubuntu 20.04.5 LTS WSL)
• DeepStream Version (6.3)
• TensorRT Version (8.5.3)
• NVIDIA GPU Driver Version (not listed; GPU is an NVIDIA GeForce RTX 3090)

• Issue Type (question) / How to reproduce the issue:
I am currently following this guide to run RT-DETR with DeepStream: DeepStream-Yolo/docs/RTDETR_Ultralytics.md at master · marcoslucianops/DeepStream-Yolo · GitHub.

I first started a container with docker run --gpus all -it nvcr.io/nvidia/deepstream:6.3-gc-triton-devel and followed all the steps in the guide. I have also verified that all the required files are present, such as the exported ONNX model (rtdetr-l.onnx) and labels.txt.
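
For reference, this is roughly what I did inside the container, as I understand the guide (the CUDA_VER value is what I believe the 6.3 container ships with and may differ on other setups):

docker run --gpus all -it nvcr.io/nvidia/deepstream:6.3-gc-triton-devel

# inside the container
cd /opt/nvidia/deepstream/deepstream-6.3
git clone https://github.com/marcoslucianops/DeepStream-Yolo.git
cd DeepStream-Yolo
CUDA_VER=12.1 make -C nvdsinfer_custom_impl_Yolo    # build the custom parser library
ls rtdetr-l.onnx labels.txt                         # exported model and label file from the guide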

I saw a similar issue on the forum, “Failed to create 'primary_gie'”, and tried the solution suggested there, but I think my problem is different.

The error output is shown below:

> root@b170f707475a:/opt/nvidia/deepstream/deepstream-6.3/DeepStream-Yolo# deepstream-app -c deepstream_app_config.txt
> ** ERROR: <create_primary_gie_bin:129>: Failed to create 'primary_gie'
> ** ERROR: <create_primary_gie_bin:193>: create_primary_gie_bin failed
> ** ERROR: <create_pipeline:1576>: create_pipeline failed
> ** ERROR: <main:697>: Failed to create pipeline
> Quitting
> nvstreammux: Successfully handled EOS for source_id=0
> App run failed
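
From what I understand, create_primary_gie_bin fails at the point where the nvinfer element is created, so a generic check (not something from the guide) would be to confirm the plugin is visible inside the container:

gst-inspect-1.0 nvinfer            # should print plugin details, not "No such element or plugin"
rm -rf ~/.cache/gstreamer-1.0/     # clear the GStreamer registry cache in case the plugin was blacklisted
gst-inspect-1.0 nvinfer            # re-check after the registry is rebuilt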

• Other details:

In deepstream_app_config.txt ([primary-gie] section):

enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_rtdetr.txt
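
A quick sanity check, assuming deepstream-app is launched from the DeepStream-Yolo directory as in the log above, is that the section header and the referenced nvinfer config are both in place:

grep -n "\[primary-gie\]" deepstream_app_config.txt   # the keys above should sit under this section
ls -l config_infer_primary_rtdetr.txt                 # config-file should resolve next to deepstream_app_config.txt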

In config_infer_primary_rtdetr.txt ([property] section shown first):

gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=rtdetr-l.onnx
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
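
Since custom-lib-path, parse-bbox-func-name and engine-create-func-name all point at the custom parser library, one more thing I can double-check (my guess, not something from the guide) is that the .so was built and exports the symbols named above:

ls -l nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so    # must exist at the path in custom-lib-path
nm -D nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so | grep -E "NvDsInferParseYolo|NvDsInferYoloCudaEngineGet"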

WSL is not officially supported for DeepStream. Can you retry it on native Linux?

OK, I will. You may close this question, as it might take me a while to find a native Ubuntu machine. Thanks!