Apply Masking before SGIE

Setup:
Ubuntu 20.04 with Docker (nvcr.io/nvidia/deepstream:6.3-gc-triton-devel)
DeepStream Version 6.3
CUDA 12.2

Issue type: question / feature request

Hi there,
we are building a DeepStream application with a PGIE followed by multiple SGIEs.
The PGIE outputs bounding boxes and segmentation masks (MRCNN).
One of the SGIEs should analyze the PGIE output incorporating the mask, i.e. it should run on the masked image.
The other SGIE should only consider the bounding box information and ignore the segmentation mask.

Question:
How can I apply masking as a pre-processing step for the SGIE? Is such an option available yet?

What does the masked image look like?

It depends on the input of the first SGIE; please describe the input in detail.

Hi,
Thanks for your reply!

We are using the Python wrapper.
The mask is the output of PeopleSegNet (PeopleSegNet | NVIDIA NGC). It outputs a mask of size 28x28 per object.
The secondary model is a custom model based on ResNet50, which takes a cropped image as input (B x 3 x 256 x 128, i.e. B C H W).
We are looking for a filter option to add to the SGIE config, but could not find one yet. Ideally, that filter would mask out the irrelevant pixels according to the (scaled) mask generated by PeopleSegNet.
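For illustration, this is roughly the operation we have in mind (plain C++ sketch, not an existing DeepStream option; the CHW float layout, the nearest-neighbor scaling and all names are our assumptions):

```cpp
// Zero out every pixel of a CHW float crop whose corresponding pixel in the
// nearest-neighbor-upscaled instance mask falls below the threshold.
void applyInstanceMask (float *crop, int channels, int cropH, int cropW,
    const float *mask, int maskH, int maskW, float threshold)
{
  for (int y = 0; y < cropH; ++y) {
    int my = y * maskH / cropH;  // nearest-neighbor row in the low-res mask
    for (int x = 0; x < cropW; ++x) {
      int mx = x * maskW / cropW;
      if (mask[my * maskW + mx] < threshold) {
        for (int c = 0; c < channels; ++c)
          crop[(c * cropH + y) * cropW + x] = 0.0f;  // irrelevant pixel
      }
    }
  }
}
```

With our shapes this would be called with channels = 3, cropH = 256, cropW = 128 and maskH = maskW = 28.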
Is there another way to perform this task?

Your SGIE is a ResNet50 with (B x 3 x 256 x 128, i.e. B C H W) input, which means the input is an image with three channels (RGB or BGR, right?).

If the SGIE takes the cropped object image as input, what do you mean by applying the mask to the image? How will you change the RGB values according to the scaled mask?

Hi Fiona, I’m working on this with Carolin.
SGIE input is indeed an RGB image.
Our idea is to apply the mask from the PGIE to the SGIE input: e.g. set every pixel in the SGIE input image to 0 if the corresponding pixel in the scaled mask is 0.
We were asking if there are onboard methods to do that.

Assuming there are none, we’re working on doing this either via a custom gst-nvinfer or a custom nvdsinfer:
We started with a custom nvdsinfer, because we can conveniently use CUDA there, and changes to the outBuffer only impact the SGIE, not other elements in the pipeline (if programmed accordingly). I.e., we modify InferPreprocessor::transform() and nvdsinfer_conversion.cu. However, I just noticed that in InferPreprocessor::transform() we don’t have the mask metadata (batch_meta->frame_meta_list…) readily available, bummer. Am I overlooking a way to get the mask data here?
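For context, this is the kind of kernel we’d add next to the existing conversion kernels in nvdsinfer_conversion.cu (CUDA sketch; the kernel name and the planar RGB float layout are our assumptions):

```cpp
/* Masking step to run after the existing NvDsInferConvert_* kernels.
 * Assumes a planar 3-channel float output buffer and a low-resolution
 * float mask looked up with nearest-neighbor scaling. */
__global__ void
NvDsInferApplyMaskC3 (float *outBuffer, int width, int height,
    const float *mask, int maskW, int maskH, float threshold)
{
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x >= width || y >= height)
    return;

  /* Nearest-neighbor lookup into the 28x28 mask. */
  int mx = x * maskW / width;
  int my = y * maskH / height;
  if (mask[my * maskW + mx] < threshold) {
    outBuffer[y * width + x] = 0.0f;                       /* R plane */
    outBuffer[width * height + y * width + x] = 0.0f;      /* G plane */
    outBuffer[2 * width * height + y * width + x] = 0.0f;  /* B plane */
  }
}
```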

MRCNN is the instance segmentation model from NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (github.com). The output segmentation data is in the mask_params field of the NvDsObjectMeta. The format is described in NVIDIA DeepStream SDK API Reference: _NvOSD_MaskParams Struct Reference | NVIDIA Docs.
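In case it helps, a minimal sketch of reading it in a buffer probe (standard batch-meta iteration in C++; error handling omitted):

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Walk the batch metadata attached upstream and read each object's
 * instance mask (NvOSD_MaskParams). */
static GstPadProbeReturn
pgie_src_pad_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      NvOSD_MaskParams *mp = &obj_meta->mask_params;
      if (mp->data && mp->size > 0) {
        /* mp->data holds mp->width x mp->height floats (28x28 for
         * PeopleSegNet); mp->threshold is the binarization threshold. */
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
```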

Yes, we did manage to access the mask via a probe, but are struggling to use the mask on the SGIE input frames. That is, we don’t want to apply the mask to the “global” image, but to the cropped, transformed etc. images that are, to my understanding, generated for each SGIE input “instance”.
I.e., while I don’t think we can, I want to double-check whether I overlooked a way that allows us to somehow get to the mask_params in nvdsinfer_context_impl.cpp / InferPreprocessor::transform().

edit: can I access the metadata via *devBuf?

The NvDsObjectMeta::mask_params is only available in the gst-nvinfer plugin, while InferPreprocessor::transform() is in the nvinfer library. We’d suggest you implement this with nvdspreprocess (Gst-nvdspreprocess (Alpha) — DeepStream 6.3 Release documentation); there you can design the data structures and data path for your convenience.
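For example, the custom library’s tensor-preparation hook gives you each converted object crop together with its metadata, so the mask can be applied there. A rough sketch based on the entry points documented for the plugin’s custom library (whether unit.obj_meta is populated depends on your configuration, and the masking itself is your logic):

```cpp
#include "nvdspreprocess_interface.h"

/* Custom tensor-preparation hook for gst-nvdspreprocess. Each unit in the
 * batch corresponds to one converted object crop; the assumption here is
 * that unit.obj_meta carries the PGIE's mask_params in SGIE mode. */
extern "C" NvDsPreProcessStatus
CustomTensorPreparation (CustomCtx * ctx, NvDsPreProcessBatch * batch,
    NvDsPreProcessCustomBuf *& buf, CustomTensorParams & tensorParam,
    NvDsPreProcessAcquirer * acquirer)
{
  /* Acquire an output tensor buffer from the plugin's pool. */
  buf = acquirer->acquire ();

  for (auto & unit : batch->units) {
    NvDsObjectMeta *obj_meta = unit.obj_meta;   /* assumption, see above */
    if (obj_meta && obj_meta->mask_params.data) {
      /* Upscale the 28x28 mask to the crop resolution, zero out pixels
       * below mask_params.threshold (e.g. with a small CUDA kernel), and
       * write the masked tensor into the acquired buffer. */
    }
  }
  return NVDSPREPROCESS_SUCCESS;
}
```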

Cool cool, that looks promising, I’ll have a look at it. Thanks!

After a quick dive: Yup, this is exactly what we were looking for :)
