Custom UNet segmentation model uses only 2 colors for the output map

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6
• TensorRT Version: 8.0.1.6
Hello,
I have a segmentation UNet model that I trained with TAO on the VOC dataset (22 classes);
this is my TAO training spec:
unet_train_resnet_unet_isbi.txt (3.0 KB)

The TAO evaluation went well, reporting a segmentation accuracy of 94% on the validation data, and inference results look correct.
I converted the model with the following tlt-converter command:
/data/tao-converter -k nvidia_tlt -c isbi_cal.bin -e trt_int8.engine -i nchw -t int8 -p input_1:0,1x3x640x640,4x3x640x640,16x3x640x640 model.int8.etlt

Also, I am using this config file for ./ds-tao-segmentation:
config_infer.txt (1.8 KB)
I am referring to this project: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream
I am running inference using this command:
./ds-tao-segmentation -c config_infer.txt -i sample_720p.mjpeg -d

For visualization, however, I get a binary segmentation mask instead of a segmentation map with 22 colors:


Can you please explain why inference is correct under the TAO tool but fails under DeepStream?
How can I get the correct output under DeepStream?
Looking forward to your reply, thank you!

Please add logs to check whether there are multiple classes. Please refer to Gst-nvsegvisual — DeepStream 6.1.1 Release documentation; nvsegvisual will use different colors if there are different classes.

Thanks for your reply. I’m not very familiar with DeepStream; can you be more specific?
How can I get logs?
./ds-tao-segmentation -c config_infer.txt -i sample_720p.mjpeg -d

Any further update? Is this still an issue that needs support? Thanks

Thanks for your reply. How do I get the logs?

There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one.
Thanks

Please refer to osd_sink_pad_buffer_probe in deepstream-test1; you can get the classes from obj_meta->class_id.
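For a segmentation model, the metadata to inspect is the per-pixel class map (NvDsInferSegmentationMeta::class_map, as used in deepstream-segmentation-test) rather than obj_meta->class_id. A minimal pure-Python sketch of the check such a probe would do, with a hypothetical 4x4 map standing in for the real buffer:

```python
# Count the distinct class IDs in a flat segmentation class map.
# In DeepStream the map comes from NvDsInferSegmentationMeta::class_map
# (one integer class ID per pixel); the 4x4 example below is made up.

def distinct_classes(class_map):
    """Return the sorted set of class IDs present in the map."""
    return sorted(set(class_map))

# Hypothetical map containing background (0), car (7), and person (15):
class_map = [
    0, 0, 7, 7,
    0, 15, 15, 7,
    0, 15, 15, 0,
    0, 0, 0, 0,
]

print(distinct_classes(class_map))       # → [0, 7, 15]
print(len(distinct_classes(class_map)))  # → 3
```

If this kind of count only ever returns 1, the problem is upstream of nvsegvisual: the model output already contains a single class.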

There are 22 classes, but only one shows up. I replaced my model with the official model in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-segmentation-test, and the number of classes is consistent with the official model.

tao-export_log.txt (32.2 KB)
This is the TAO export log file.

There are two models in deepstream-segmentation-test; which model are you testing? Please refer to the README of deepstream-segmentation-test.
For semantic segmentation, it needs the semantic model, which produces a 4-class map
including background, car, person, and bicycle.
The nvsegvisual plugin chooses 4 different colors for them to display.
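Conceptually, that is all a segmentation visualizer does: map each per-pixel class ID to a palette color. A toy sketch of the idea (the palette values below are made up, not nvsegvisual's actual colors):

```python
# Toy colorizer: replace each per-pixel class ID with an RGB color,
# the way a segmentation visualizer renders a class map.
# The palette is hypothetical; nvsegvisual picks its own colors.

PALETTE = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}

def colorize(class_map, palette=PALETTE):
    """Map each class ID to its RGB color (grey for unknown IDs)."""
    return [palette.get(c, (128, 128, 128)) for c in class_map]

print(colorize([0, 1, 1, 3]))  # → [(0, 0, 0), (255, 0, 0), (255, 0, 0), (0, 0, 255)]
```

This also explains the symptom in this thread: if the class map only ever contains one or two distinct IDs, the rendered mask only ever shows one or two colors, no matter how many classes the model was trained on.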

I mean that I get correct results with the official deepstream-segmentation-test semantic segmentation model, where the number of classes is 4, but my own model (trained with TAO, with 22 class labels) is inconsistent: when actually run under DeepStream, only 1 class is found.
The inference result in TAO is correct, but the DeepStream result is wrong.

If segmentation_output.classes is 1, it should be a preprocessing or model issue. Please make sure the label file is right, and refer to this link for converting the model: PeopleSemSegnet | NVIDIA NGC.
If it still doesn't work, can you provide the TAO model and label file?

Thanks for your reply. The .tlt model is too large to upload.
unet_train_resnet_unet_isbi.txt (3.0 KB)
labels.txt (135 Bytes)

Please give a download link.

model.fp16.etlt (59.4 MB)

Please also provide the isbi_cal.bin file.

This model uses FP16 precision, so there is no *.bin calibration file.

After testing in DeepStream, it seems the model has one output class; we need to verify that the model has 22 output classes.
Here are the logs:
0:04:14.849637060 42732 0xaaaad71910f0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: serialize cuda engine to file: /home/nvidia/rec/model.fp16.etlt_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1:0 3x640x640
1 OUTPUT kINT32 argmax_1 640x640x1
pgie_peopleSemSegVanillaUnet_tao_config.txt (2.5 KB)

This is exactly the question I want to ask. I trained the model with TAO and used the TAO converter to produce an engine file for inference, and I get good results, but only one class is found when I deploy it to DeepStream.
The following figure shows the inference result of using the .engine model file under TAO:


We used tao-converter to generate the engine and trtexec to test it; the output class count is still 1. Here are the test logs:
[09/22/2022-09:26:18] [I] Created output binding for argmax_1 with dimensions 1x640x640x1,
here are the commands:
tao-converter -k nvidia_tlt -e saved.engine -i nchw -t fp16 -p input_1:0,1x3x640x640,4x3x640x640,16x3x640x640 model.fp16.etlt
trtexec --loadEngine=saved.engine --fp16

You are right, but I used the TAO converter to verify that the deployment succeeds.


Is your official deepstream-segmentation-test model trained with TAO? It seems to be trained with Caffe. Have you tested a segmentation model trained with TAO and successfully deployed it to DeepStream?