Deepstream Segmentation App produces same mask for all images

I am using the deepstream_python_apps/apps/deepstream-segmentation app (master branch of NVIDIA-AI-IOT/deepstream_python_apps on GitHub). I trained a UNet model with a ResNet18 backbone and exported it using tao export. When I run the app on an image, it produces a mask, but the generated mask is identical for every input image. This happens when segmentation-threshold is set to 0. When I raise the segmentation threshold even slightly, to 0.1 or more, I get a blank image.
The following is the output:


In the terminal it says:

INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1:0 1x128x128
1 OUTPUT kINT32 argmax_1 128x128x1

But my model definitely has more than 2 layers.
I only had the .etlt file; I let the DeepStream app generate the engine file.
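For reference, the "layers num: 2" in that log refers to the engine's input/output bindings, not the network's internal layers. Below is a minimal sketch (assuming the TensorRT 8.x Python bindings shipped with JetPack 5.x, and the engine path from the config that follows) that lists those bindings for the generated engine:

import tensorrt as trt  # TensorRT 8.x (JetPack 5.x)

ENGINE_PATH = "test.etlt_b1_gpu0_fp32.engine"  # engine generated by nvinfer

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Each entry reported under "Implicit Engine Info" is a binding
# (a network input or output), not an internal layer of the UNet.
for i in range(engine.num_bindings):
    kind = "INPUT " if engine.binding_is_input(i) else "OUTPUT"
    print(kind, engine.get_binding_name(i),
          engine.get_binding_shape(i), engine.get_binding_dtype(i))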
My modified config file:

[property]
gpu-id=0
offsets=127.5
net-scale-factor=0.00784313725490196
maintain-aspect-ratio=0
model-color-format=2
labelfile-path=labels.txt
tlt-encoded-model=test.etlt
tlt-model-key=nvidia_tlt
model-engine-file=test.etlt_b1_gpu0_fp32.engine
infer-dims=1;128;128
uff-input-order=0
uff-input-blob-name=input_1
batch-size=1

network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
#network-type=100
network-type=2
output-blob-names=argmax_1
output-tensor-meta=1
segmentation-threshold=0.1
segmentation-output-order=1
#parse-bbox-func-name=NvDsInferParseCustomSSD
#custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
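For what it's worth, nvinfer applies the per-pixel pre-processing y = net-scale-factor * (x - offsets). With the values above (offsets=127.5, net-scale-factor=1/127.5) input pixels are mapped from [0, 255] to [-1, 1], which has to match the normalization the UNet was trained and exported with in TAO. A quick sketch to verify the mapping:

import numpy as np

# nvinfer pre-processing: y = net_scale_factor * (x - offset)
net_scale_factor = 0.00784313725490196   # = 1 / 127.5
offset = 127.5

pixels = np.array([0.0, 127.5, 255.0])
print(net_scale_factor * (pixels - offset))   # -> [-1.  0.  1.]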

The mask it produces:

Hardware Platform : Xavier
Deepstream Version : 6.2
Jetpack Version : NVIDIA Jetson Xavier NX Developer Kit - Jetpack 5.1.1 [L4T 35.3.1]
TensorRT version : shown in the attached image

Do you mean you used the backbone from TAO Pretrained Semantic Segmentation | NVIDIA NGC to train a new segmentation model and are deploying that model with DeepStream? If so, please refer to NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (github.com).

If not, please confirm what kind of segmentation model it is.

Hi, it is a UNet model with a ResNet18 backbone, one of the pretrained models provided by NVIDIA TAO, which I trained on a custom dataset. As mentioned, the problem is that it generates the same mask for every input image.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please refer to NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (github.com). The deepstream_python_apps/apps/deepstream-segmentation sample (master branch of NVIDIA-AI-IOT/deepstream_python_apps on GitHub) just maps the mask to RGB. If your model does not output correct results, please check the pre-processing parameters for your model.
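If it helps, here is a sketch (modelled on the deepstream-segmentation sample, using the pyds bindings) of a pad probe that reads the raw class map from NvDsInferSegmentationMeta, so you can tell whether the identical masks come from the model output itself or from the RGB mapping:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import numpy as np
import pyds

# Pad probe (e.g. on the nvinfer src pad) that inspects the class map that
# nvinfer attaches as NvDsInferSegmentationMeta, independent of colour mapping.
def seg_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
                seg_meta = pyds.NvDsInferSegmentationMeta.cast(user_meta.user_meta_data)
                class_map = np.array(pyds.get_segmentation_masks(seg_meta), copy=True)
                # If this prints a single class id for every input image, the
                # problem is the model / pre-processing, not the colour mapping.
                print("mask", class_map.shape, "class ids:", np.unique(class_map))
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK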

Also, the TAO UNet model is a segmentation model, and "segmentation-threshold" has no effect on segmentation models. It is not expected that "segmentation-threshold" changes your output. Please check your configuration and model.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.