Segmentation mask errors with YOLOv11 in DeepStream

I’m running a Python application built on a DeepStream pipeline that uses the YOLOv11 segmentation model in its Nano variant.

However, I’m encountering problems with the model’s output. The generated mask is quite inaccurate compared to a mask produced by the same model outside of DeepStream, as in the example below.

Mask generated by DeepStream:

Mask generated outside DeepStream:

The configuration files followed the recommendations of the repository.
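To make "quite inaccurate" measurable rather than a visual impression, the two masks can be compared with a simple IoU score. This helper is a hypothetical addition for illustration, not part of the original application:

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks of the same shape.
    Returns 1.0 when both masks are empty."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0
    return np.logical_and(a, b).sum() / union
```

Running this on the DeepStream mask against the reference mask would give a single number to track while tweaking the config or parser.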

The mask generated by DeepStream is a 160x160 float array obtained with `obj_meta.mask_params.get_mask_array()`. It was resized to the bounding box of the detected object and then converted into a mask the same size as the original image. The original mask is this:
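For reference, the post-processing described above can be sketched roughly as follows. This is a minimal NumPy-only illustration, not the original code: in the actual app the 160x160 array comes from `obj_meta.mask_params.get_mask_array()` and the box from `obj_meta.rect_params`, and nearest-neighbour resizing here stands in for whatever interpolation the original code uses:

```python
import numpy as np

def paste_instance_mask(seg, bbox, frame_w, frame_h, threshold=0.5):
    """Resize a low-resolution float mask (e.g. the 160x160 array from
    DeepStream's mask_params) to the detected object's bounding box and
    paste it into a full-frame binary mask.

    bbox is (left, top, width, height) in frame pixels.
    """
    x, y, w, h = (int(v) for v in bbox)
    # Nearest-neighbour resize of the small mask to the bbox size
    rows = np.arange(h) * seg.shape[0] // h
    cols = np.arange(w) * seg.shape[1] // w
    resized = seg[np.ix_(rows, cols)]
    # Threshold the float scores and place them at the bbox location
    frame_mask = np.zeros((frame_h, frame_w), dtype=np.uint8)
    frame_mask[y:y + h, x:x + w] = (resized > threshold).astype(np.uint8) * 255
    return frame_mask
```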

Yolo Nano ONNX model (9.5 MB)

**• Hardware Platform (Jetson / GPU):** NVIDIA Jetson Orin NX
**• DeepStream Version:** deepstream:7.1-triton
**• JetPack Version (valid for Jetson only):** 6.1
**• TensorRT Version:** 10.3.0
**• NVIDIA GPU Driver Version (valid for GPU only):** 12.6

Although this sample is not officially provided, I still tried to test it.

How do you test this ONNX model? I’m using the model you provided and this sample for testing, but I can’t get the correct bounding box and instance mask.

```diff
diff --git a/config_infer_primary_yolo11_seg.txt b/config_infer_primary_yolo11_seg.txt
index b0d28ca..24767bd 100644
--- a/config_infer_primary_yolo11_seg.txt
+++ b/config_infer_primary_yolo11_seg.txt
@@ -2,8 +2,8 @@
 gpu-id=0
 net-scale-factor=0.0039215697906911373
 model-color-format=0
-onnx-file=yolo11s-seg.onnx
-model-engine-file=yolo11s-seg.onnx_b1_gpu0_fp32.engine
+onnx-file=balao_v2.onnx
+model-engine-file=balao_v2.onnx_b1_gpu0_fp32.engine
 #int8-calib-file=calib.table
 labelfile-path=labels.txt
 batch-size=1
@@ -20,6 +20,7 @@ scaling-filter=1
 scaling-compute-hw=0
 force-implicit-batch-dim=0
 #workspace-size=2000
+infer-dims=3;640;640
 parse-bbox-instance-mask-func-name=NvDsInferParseYoloSeg
 custom-lib-path=nvdsinfer_custom_impl_Yolo_seg/libnvdsinfer_custom_impl_Yolo_seg.so
 output-instance-mask=1
diff --git a/deepstream_app_config.txt b/deepstream_app_config.txt
index fd9300d..c8fbbce 100644
--- a/deepstream_app_config.txt
+++ b/deepstream_app_config.txt
@@ -57,7 +57,7 @@ enable=1
 gpu-id=0
 gie-unique-id=1
 nvbuf-memory-type=0
-config-file=config_infer_primary_yoloV8_seg.txt
+config-file=config_infer_primary_yolo11_seg.txt

 [tests]
 file-loop=0
```

```
deepstream-app -c deepstream_app_config.txt
```

The output looks like this:

I trained the provided model using the Ultralytics package and generated the .pt model.
This .pt model was tested outside DeepStream, and the segmentation predictions were working correctly. However, I did not test the ONNX model separately after the conversion.
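Since the ONNX model was never run outside DeepStream after export, a quick offline sanity check could rule out an export problem before blaming the pipeline. The sketch below only verifies output tensor shapes against the layout Ultralytics segmentation exports commonly use (a `(1, 4 + num_classes + 32, N)` detection tensor plus a `(1, 32, 160, 160)` mask-prototype tensor); that layout is an assumption here, and the output arrays would come from, e.g., an onnxruntime `InferenceSession` run on a test image:

```python
import numpy as np

def check_seg_outputs(outputs, num_classes):
    """Pick out the detection and mask-prototype tensors from a YOLO-style
    segmentation model's outputs and verify their shapes.

    Assumes the common Ultralytics export layout:
      detections: (1, 4 + num_classes + 32, N)
      prototypes: (1, 32, 160, 160)
    """
    det = next(o for o in outputs if o.ndim == 3)
    proto = next(o for o in outputs if o.ndim == 4)
    expected_channels = 4 + num_classes + 32
    if det.shape[1] != expected_channels:
        raise ValueError(
            f"expected {expected_channels} detection channels, got {det.shape[1]}")
    if proto.shape[1:] != (32, 160, 160):
        raise ValueError(f"unexpected prototype shape {proto.shape}")
    return det, proto
```

If the shapes don’t match what the custom parser expects (for a two-class model, 38 detection channels), the parser will read garbage and the masks will look wrong in exactly this way.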

I have one question regarding your test:

  • Did you run the ONNX model that I attached in the post, or did you test a different model exported on your side using the repository referenced in the post?

The ONNX model shared in the thread is a segmentation model trained for two classes. I am also attaching a short video segment that can be used to test the segmentation behavior.

Although the repository used for exporting the model is not official, I noticed that it is commonly recommended in related forum posts.
Could you please confirm:

  • Is there an official NVIDIA-recommended repository or workflow to export a .pt segmentation model to ONNX, generate the required custom parser .so, and prepare the DeepStream inference config (.txt)?
  • Is there any recommended model architecture or training library for training a segmentation model from scratch that is known to work reliably with DeepStream?

Any guidance on the officially supported or best-practice pipeline would be very helpful.

video for model testing (5.4 MB)

Thank you for your support.

I’m using the ONNX model you provided. If the .pt format model is fine, then the problem might be with the export process. Please first ensure that the ONNX model can run correctly in the sample code.
I believe that if the ONNX model is exported correctly, YOLO instance segmentation can work with DeepStream; some forum users have attempted this.

You can refer to this project; however, we do not provide a sample for YOLO model instance segmentation.

We have provided instance segmentation reference sample code in another project.

If you’d like to try this model, we provide the TAO training tool.