DS-TAO-Segmentation app issue

Please provide complete information as applicable to your setup.

• Hardware Platform: JETSON ORIN
• DeepStream Version: 6.2
• JetPack Version: 5.1-b147
• TensorRT Version: 5.1
• Issue Type: question

I am looking to run inference on a video with a UNet TAO model. I have been using the DS-TAO-Segmentation app with the config examples (GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream), but I cannot seem to remove the background or get a correct overlay of the masks on the video. I want to be able to ignore the ‘background’ class that exists in the output. In addition, when I tried compiling the ds-tao-segmentation app so I could edit and modify it, I ran into numerous compiler errors saying I was missing header files. Could you also direct me to a tutorial on what packages, paths, and dependencies I need to have set up before compiling?

Thanks

How did you install the JetPack 5.1 GA and DeepStream 6.2 GA? With SDK Manager? Did you build and run the sample inside the DeepStream docker or directly on the Orin device?

Have you installed the dependencies listed in the Quickstart Guide — DeepStream 6.2 Release documentation after installing DeepStream 6.2 GA?

The Jetson setup instructions are in the Quickstart Guide — DeepStream 6.2 Release documentation.
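As a rough sketch of the steps that are usually needed before the TAO apps compile on Jetson (the authoritative package list is in the Quickstart Guide, and the exact CUDA_VER depends on your JetPack release; 11.4 is what JetPack 5.1 ships):

# prerequisites from the DeepStream 6.2 Quickstart Guide (Jetson) - check the guide for your version
sudo apt install libssl1.1 libgstreamer1.0-0 gstreamer1.0-tools gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav \
    libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-0 libjansson4 libyaml-cpp-dev

# build the TAO apps against the installed DeepStream SDK
cd deepstream_tao_apps
export CUDA_VER=11.4      # CUDA version bundled with JetPack 5.1; adjust for other releases
make                      # the makefiles typically resolve headers from /opt/nvidia/deepstream/deepstream

Missing-header errors when compiling the apps are usually a sign that CUDA_VER is not exported or that the DeepStream SDK is not installed at its default location.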

I’m able to compile now, thanks.

We were also able to re-train a model that does not include the specific label we were trying to ignore. When we run this new model on the video, though, the “background” still shows up as an odd prediction layer. Is there any way to ignore this so we can have the labels only on the object in focus? In addition, is there any way to set the transparency of these layers?

Currently, only overlaying instance segmentation model masks on the video is supported. Is your model a semantic segmentation model or an instance segmentation model?

There is only one object we are classifying, so an instance segmentation model, unless you can explain the difference for me. In addition, would it be possible to just overlay the prediction layers on the original video?

It is an instance segmentation UNET. It has 10 classes, I believe.

I’m confused, because there is supposed to be support for deploying models from TAO to DeepStream, but this does not seem to be supported as it should be. We would really like some help getting the UNET model inferencing in DeepStream and being able to manipulate the mask data (suppress background, draw label values, etc.).

It is supported by deepstream-app too. Please refer to deepstream_reference_apps/deepstream_app_tao_configs/deepstream_app_source1_segmentation.txt at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub

Hi Fiona,

The reference app you pointed to supports MaskRCNN, not UNET. We have a 10-class UNET model trained.

Hi @derek.r.skilling, we have an instance segmentation demo in deepstream_tao_apps.

export SHOW_MASK=1; ./apps/tao_detection/ds-tao-detection -c configs/peopleSegNet_tao/pgie_peopleSegNet_tao_config.txt -i file:///$DS_SRC_PATH/samples/streams/sample_720p.mp4
or
export SHOW_MASK=1; ./apps/tao_detection/ds-tao-detection configs/app/ins_seg_app_peopleSegNet.yml

You can see if it can meet your needs.

You can also check whether the mask_params field in the NvDsObjectMeta structure meets your needs.
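As a minimal sketch (not code from the sample apps) of how mask_params could be read from a pad probe, assuming the instance-segmentation masks are attached by nvinfer as in the peopleSegNet config:

/* Sketch: walk frame and object meta and read each object's mask_params. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
mask_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      NvOSD_MaskParams *mask = &obj_meta->mask_params;
      if (mask->data && mask->size > 0) {
        /* mask->data is a float map of mask->width x mask->height values;
         * pixels above mask->threshold belong to the object. */
        g_print ("frame %d: object class %d has a %ux%u mask\n",
            frame_meta->frame_num, obj_meta->class_id,
            mask->width, mask->height);
      }
    }
  }
  return GST_PAD_PROBE_OK;
}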

For segmentation models such as UNET, DeepStream does not currently support overlaying the NvDsInferSegmentationMeta on the video.

We will consider this requirement.
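In the meantime, a probe can still read the semantic-segmentation output and skip the background class in your own drawing or overlay code. A minimal sketch, assuming the meta is attached to the frame as NVDSINFER_SEGMENTATION_META (primary GIE) and that BACKGROUND_CLASS_ID, a placeholder, is the background index in your labels file:

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

#define BACKGROUND_CLASS_ID 0   /* placeholder: depends on your model's labels file */

static GstPadProbeReturn
seg_meta_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_user = frame_meta->frame_user_meta_list; l_user; l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_SEGMENTATION_META)
        continue;
      NvDsInferSegmentationMeta *seg =
          (NvDsInferSegmentationMeta *) user_meta->user_meta_data;
      /* seg->class_map holds seg->width * seg->height per-pixel class ids;
       * count (or later draw) only the non-background pixels. */
      guint fg = 0;
      for (guint i = 0; i < seg->width * seg->height; i++) {
        if (seg->class_map[i] != BACKGROUND_CLASS_ID)
          fg++;
      }
      g_print ("frame %d: %u of %u pixels are non-background\n",
          frame_meta->frame_num, fg, seg->width * seg->height);
    }
  }
  return GST_PAD_PROBE_OK;
}

Blending only those non-background pixels onto the frame, with whatever transparency you want, would then be up to a custom element or probe of your own (for example a gst-dsexample-style plugin), since the built-in overlay does not do this for semantic segmentation today.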