I am trying to run inference on a video with a UNet model trained in TAO. I have been using the ds-tao-segmentation app with the example configs (GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream), but I cannot seem to remove the background or get a correct overlay of the masks on the video. I want to be able to ignore the ‘background’ class that appears in the output. In addition, when I tried compiling the app so I could edit and modify ds-tao-segmentation, I ran into numerous compiler errors about missing header files. Could you also direct me to a tutorial on which packages, paths, and dependencies I need to have set up before compiling?
How did you install JetPack 5.1 GA and DeepStream 6.2 GA? With SDK Manager? Did you build and run the sample inside the DeepStream Docker container or directly on the Orin device?
We were also able to re-train the model without the specific label we were trying to ignore. When we run this new model on the video, though, the “background” still shows up as an odd prediction layer. Is there any way to ignore it so that labels are drawn only on the object in focus? In addition, is there any way to set the transparency of these layers?
There is only one type of object we are classifying, so I believe it is an instance segmentation model, unless you can clarify the difference for me. In addition, would it be possible to simply overlay the prediction layers on the original video?
It is an instance segmentation UNet. It has 10 classes, I believe.
I’m confused, because there is supposed to be support for deploying TAO models to DeepStream, yet this does not seem to be supported as well as it should be. We would really like some help getting the UNet model running inference in DeepStream, and being able to manipulate the mask data (suppress the background, draw label values, etc.).